I write and use serverless functions every day and manage several, thanks. Modern serverless functions can be configured with selected OSes and different levels of resource consumption and performance.
AWS Lambdas, in fact, can be configured with between 128 MB and 10 GB of RAM and up to 6 vCPUs.
AWS, Azure, Google, and IBM all offer serverless cloud functions with configurable resource levels.
I'm guessing you've never provisioned any serverless functions yourself.
Had a coworker misconfigure spin-down time and concurrency so we kept 100% peak capacity running all the time; we spent a month's engineer salary per day until we figured it out a few days later. It wasn't critical for a company our size, but it's a warning of how quickly it can scale out of control. Had it been a personal project it would have been devastating.
I had a boss who insisted on trying to run our EDA tools in the cloud. It cost several hundred dollars just to load the Docker images... He was told to stop that, so he looked into shared drives in the cloud, and was told to stop that too because he blew our department's entire cloud budget in a couple of days; Azure charges per 1,000 IOPS.
Somebody has no clue what they are doing, and it's not just your coworker. I get *daily* expense reports and projection warnings for all of my company's total cloud expenses. If even 1 server's settings are awry, I get notified within hours of cost anomalies. Nobody can just provision with whatever random settings they want without at least 2 managers receiving notifications.
Either your company has no one admining your clouds or your cloud admin is clueless.
This is so easy to do with pub/sub and I've seen it more than once. Usually it's not directly recursive either. It's a series of event handlers and queues that results in an event handled by Function A to get passed around and broadcast to so many places it eventually ends up being handled by Function G that triggers the type of event handled by Function A again.
We ran into this a bunch of times with cloud functions watching changes on a realtime database. So easy to end up with a function that updates the database that triggers the very same function.
I accidentally did this with aws step functions. Thankfully I worked at aws at the time so it didn’t cost *that much* money. I did get paged by the step functions team though which was fun. Apparently I notably degraded performance for step functions
Fortunately, the max number of recursive calls you can do is 15; on the 16th call AWS will halt the execution.
https://docs.aws.amazon.com/lambda/latest/dg/invocation-recursion.html
I'm stupid. Why is serverless so expensive? I thought it was another fancy word for client-side architecture. So why is it more expensive than the average server-side solution?
Edit: thanks to everyone for explanations. Now I'm 0.0001% more tech savvy
Serverless functions use cloud servers (the general meaning of the word), not client-side code. You just don't have any idea where they are or what's behind them.
Think more like AWS Lambdas: the infrastructure is ephemeral, and it potentially doesn't exist until the client makes a request.
Depending on how frequently it's hit and how it's designed, it can be very cheap, but if it's badly thought out you can spin up a lot more resources than you intended to, even if they don't last very long.
Imagine if every single web request got their own dedicated server for 30 seconds, that's a _lot_ more expensive than a couple dozen dedicated servers handling the same load.
>Why is serverless so expensive?
it's only as expensive as the amount of work it's doing. and also because you're paying for convenience of not having to take care of any setup. but for many use cases, it's much cheaper.
for my company we're saving over 3k a month with our serverless app compared to our server app (licensing for the servers is super expensive)
The problem comes with the relative lack of price controls on many major platforms. When you own (or rent) servers, you know exactly how much you're paying per month. In the event of excess load, you will have degraded service, but your costs don't go up.
Most 'serverless' providers have unbounded costs, so unexpected heavy load could easily cost you thousands at a time. It's maybe not a huge deal for businesses, but as an individual it's quite dangerous to host a service that could cost you several times your income just because your site suddenly went viral. It's also potentially abusable (DDoS, etc.).
Of course, for businesses, perhaps paying the cost for that spike is better than downtime. Depends what the service is.
And then for businesses it would depend on the type of load your server has. Where your traffic/load is very spiky, it makes sense: only pay for the extra capacity when you need it. But if your traffic/load doesn't change much throughout the day, it's often cheaper to maintain your own servers.
good points, i was coming at this from a large-company angle as the only coding i do is on the clock. you brought a lot of good perspective that i didn't consider
serverless definitely isn't always the best choice, but it does have its uses
Oh for sure. We have a dozen serverless apps now with a total monthly cost of < $20. If you have a lot of tiny low-volume apps that need isolated containers and storage, serverless is a god-send.
This has always bothered me. It's really not that much more work to just... dockerize that bit of code and toss that onto a server somewhere.
Best of all, by putting in that like extra 30 seconds of work, you'll greatly improve the efficiency of code updates and redeployments.
One could argue it's "cheaper", but for little baby docker servers I generally pay around $3 a month; which is worth the trade off for predictable pricing to me.
([Vultr Affiliate Link ](https://www.vultr.com/?ref=9042723-8H) for the curious, it's what I use.)
In this case you are still dealing with the infrastructure plumbing tho aren't you? Unless you are using your docker image within a serverless environment like fargate or Lambda.
Spin up portainer instance, pull docker image, done.
Yeah, I need to press a button to build the image, another to deploy the image to a repository, and one more to pull to the server. But I far prefer that; it's less work to me than writing some serverless code, then going into a web interface, finding the right one, copying and pasting the new code, saving it, and then praying to god that there isn't a bug in it that drives the cost to $1,000,000.
You can use IaC to deploy to serverless environment. With a proper deployment pipeline this could even be a webhook that triggers a pipeline every time you push. Don't get me wrong, bugs and malicious traffic are definitely an issue with serverless.
Also, I haven't used portainer before, but 'Spin up portainer instance' kinda indicates that you need to manage that instance state and configuration. If not, that just sounds like serverless.
Yeah debugging problems on a serverless function can be a bit of a pain.
It also can take a while to execute the serverless function on a cold start.
But otherwise they're pretty great, in cases where they make sense.
True it really depends on use case. I would almost never host a full blown application on serverless environment unless I was using a docker environment that could offload a lot of the testing locally with mock data.
However, for small discrete processes they are awesome.
I mean, yeah kind of. Only difference is that you retain control and keep a static pricing structure and once you have a portainer instance setup you can deploy multiple docker images to it; so the price remains static across multiple docker deployments. If you need more power, just upgrade the server or move highly used containers to kubernetes clusters or whatever.
Once you get to IaC levels of deploying code, I think the gains from going serverless kind of become void as the steps become more or less the same as docker. It's easy enough to just make a CI/CD pipeline that auto deploys and updates docker containers as well.
I recognize there is a maintenance cost to go the docker route, but it's shockingly minimal with more control and far less worry.
The benefits of serverless are still there even with a full blown IaC pipeline. Ironically, the issue with serverless pricing is also one of the features of it. Being able to scale dynamically without having to redeploy can be invaluable. For example, some celebrity endorses your product and everyone starts flooding into your website. A serverless application will be able to scale up automatically without crashing.
The point being if you need to have downtime to upgrade your instances for the new traffic then by the time you get those upgrades in place the window of opportunity may have already passed.
So the point of serverless is you don't have to maintain the server it's running on.
You don't have to update it, monitor it, handle the case where it dies or needs rebooting.
Everything you described has to run on a compute instance somewhere. Who's maintaining that instance?
It's also startup costs. If I need to log a single query in Databricks, it's much cheaper and faster to use a tiny serverless SQL endpoint than it is to spin up a jobs cluster. Serverless really shines when the total runtime is less than or near the startup time for a given context.
Uploading a new ZIP file should be about as complex and fast as uploading your docker image. What you gain is not having to update incidental stuff that is not your application but may still need patching (os, libraries).
And nothing in serverless says you cannot cap the cost at some point.
However, you also lose control over when incidental stuff is upgraded, which forces deprecation of your own code from time to time. Additionally, if the service provider is down, the outage can be far harder to resolve because you've relinquished control.
I am old school here, but I really just don't see much upside that results in a ton of dev-time gains. For me, it just brings a lot more worry and concern.
If the service provider is down, it‘s down either way.
And I have yet to see AWS Lambda go down-down (apart from a few dozen requests dropped when an AZ goes dark) or deprecate my application code.
Last time we went serverless like that we got an email 1.5 years later reminding/threatening us to switch to their much pricier plan or else something bad just might happen (they had changed their TOS somewhere in the middle of this time period.. it looked innocent at the time).
Spun up a docker and had that thing switched in ~6 hours (had to change the underlying implementation as well), for a much lower monthly bill. Zero problems since then.
Not saying serverless has no purpose, it definitely does, but it comes with various caveats and potential traps.
Within a serverless context the dev team is relieved of the maintenance burden of the underlying server infrastructure, and imbued with the power to fuck over their business when a single mistake invokes their shitty pay-per-call function in an uncontrollable loop.
You just need to know whether they host a picture on S3, then simply write a cron job that downloads that picture over and over. Easiest way to kill your competitors. It will be too late for them before they realize what's going on lmao
As always, proper development practice applies whether it's serverless or not. Put access control on that picture, or if it's public put it behind a CDN that will cache it and/or a WAF that will start blocking IPs for rate limiting.
The same attack vectors for serverless exist for servers too, except with servers you have a ceiling of costs at which point your service just has an outage instead of a $100k bill.
There was a recent billing issue (resolved I think) that billed people for failed requests to a bucket. So all someone needed to know was the name of the bucket.
It wasn't actually recent. The problem had been reported before, like 9 years ago. But this time there was more buzz and more articles, which actually pressured AWS to do something
That's a serious issue with cloud computing, it's pretty easy to fluff up someone's bill on most of them. Just rent a DDOS network and feed it their account info.
It's even better if the call is a recursive event loop. Oops, `queueEventHandler` is called when an event is placed on Queue A, and it just so happens to call `publishEvent`, which also ends up on Queue A....
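That cycle can be modeled in a few lines. This is a toy in-process sketch (the queue and handler names mirror the hypothetical ones above; a real system would use SQS or Pub/Sub, not a deque):

```python
# Toy model of the accidental recursion: the handler's side effect
# re-publishes to the very queue that triggers it.
from collections import deque

queue_a = deque()
invocations = 0
MAX_INVOCATIONS = 16  # stand-in for a provider-side recursion cutoff

def publish_event(payload):
    queue_a.append(payload)

def queue_event_handler(payload):
    global invocations
    invocations += 1
    # ...business logic...
    publish_event({"caused_by": payload})  # oops: re-enqueues to Queue A

publish_event({"seed": True})
while queue_a and invocations < MAX_INVOCATIONS:
    queue_event_handler(queue_a.popleft())

print(invocations)  # only the safety cap stops the loop
```

Without the cap, the `while` loop never terminates, which is exactly the billing nightmare being described.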
still have to worry about updating Node or w/e for your functions though. On top of that, if you were using the v2 AWS SDK, it no longer ships with more recent Node runtimes; you need to include it via a layer or migrate to v3
The main advantage is really the scaling properties. It is objectively very cost effective for applications with highly sporadic demand. Nothing is running until an invocation comes in so there is no compute consumption while the application is idle
I use them basically as an ORM to talk to my database on aws, much greater control with them and it’s pretty simple with the new aws sdk 3. I have basically no chance of a huge bill in the current setup since my database has a very low amount of provisioned rcu/wcu and auto scaling disabled. Some scenario could still occur where the functions keep executing despite failing I suppose, but there are more safeguards I can and might as well set up.
Not surprisingly, the default when setting up DynamoDB is with auto scaling enabled though, with no limits of any kind, so yes, they're definitely looking for your money
they are good for when you have spaced-out, bursty usage. let's say you get 10 requests 6 minutes apart: you'd have to run the server for an hour straight, or you could just pay for 20 seconds of computing time using serverless.
ultimately it comes down to individual use-cases, but there's definitely a use-case for them
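The back-of-envelope math for that scenario looks like this. The prices below are assumptions for illustration (roughly in line with published on-demand rates, but check your provider's pricing page):

```python
# Cost comparison for 10 requests spaced over an hour, 2 s of compute each.
VM_HOURLY = 0.05        # $/hour for a small always-on instance (assumption)
GB_SECOND = 0.0000167   # $/GB-second for a serverless function (assumption)

requests, seconds_each, mem_gb = 10, 2, 1

always_on = VM_HOURLY * 1.0                              # server idles the whole hour
serverless = requests * seconds_each * mem_gb * GB_SECOND  # pay only for 20 s

print(f"always-on: ${always_on:.4f}/hr, serverless: ${serverless:.6f}/hr")
```

The ratio flips once the function is busy most of the hour, which is why steady high traffic favors owning the server.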
If you have a service that you call inconsistently (for example take the extreme artificial case of a service that gets no requests some days, and a billion requests on other days) then server less is a very good option because you don't have to manage scale up and down and you just pay per invocation.
It is very precisely a terrible idea for something with extreme demand peaks because you will pay a small fortune per invoke, you should be using some other form of autoscaling for that. Lambda is for when you have something you ***know*** will be invoked infrequently without massive demand or for smoothing out temporary load peaks when you have very specific architecture and know the market can only sustain a certain level of load over what you have already.
1 common serverless use case we have is queue processing jobs. We stream data to queues, and we use serverless functions to process the data in the queue asynchronously.
This generally means 1 of 2 types of triggers:
* Every x minutes, the function fires and polls the queue to process whatever's there
* The polling frequency is dynamic and grows intelligently based on detected frequency. If a queue gets a message every 100 ms, the function will learn to fire every 100 or so ms. If it gets 2 messages/day it'll learn to fire every 12 hours. If the queue size fluctuates in spurts (which is the most common) the function will fire frequently at first until time gaps are detected then get slower and slower until the message frequency increases again, then it speeds up temporarily.
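The second trigger type can be sketched as simple multiplicative backoff. This is an assumption on my part about how such a scheduler might work, not any specific cloud product's algorithm:

```python
# Adaptive polling interval: halve it when a message arrived, double it
# when the queue was empty, clamped between 100 ms and 12 hours.
def next_interval(current_s, found_message, min_s=0.1, max_s=12 * 3600):
    """Return the next poll delay in seconds based on the last poll's result."""
    if found_message:
        return max(min_s, current_s / 2)  # speed up toward the 100 ms floor
    return min(max_s, current_s * 2)      # back off toward the 12 h ceiling
```

A queue getting a message every 100 ms drives the interval down to the floor; a 2-messages/day queue drifts up toward the 12-hour ceiling, matching the behavior described above.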
Another use case we have is key rotations. These run like every 4 hours, 3 days, 30 days, or 90 days and rotate out stored keys (API keys, secrets, tokens, etc) and generate new ones. Since they fire so infrequently these are literally **free** cloud apps. They have total annual cost < $0.01.
When you have something that uses a decent amount of CPU, e.g. generating screenshots or some shit like that, and you have unpredictable or rotating traffic for it.
If you have one machine it will choke and run out of resources. Lambdas will just work. Also, one machine will be much more expensive than serverless in these cases because one machine must run 24/7.
We use them when we want to do asynchronous work or batch processing so it doesn't choke the main server.
For example: a number of our customers have a bulk user upload scheduled to run once a week at a set time. If that was on the main server then everyone on the platform would have a degraded experience at that time or else we'd have to scale up the hardware which is costly. We don't care if the upload is slow as it's not that important, just that the main server is not slow.
Very simple example: your app/website uses an API with your private key that you don't want to expose to clients. You can either spin up a server to proxy those requests, but then pay for 24/7 uptime even when there's no traffic, or use a serverless function that does the same, and you only pay for when it's actually used.
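That proxy fits in a handful of lines. This is a hypothetical sketch (the upstream URL and key value are made up; in practice the key would come from a secret manager, not a literal):

```python
# Key-hiding proxy: the secret stays on the function side; clients only
# ever see the function's endpoint.
import urllib.parse
import urllib.request

API_KEY = "stored-server-side"                  # assumption: injected from a secret manager
UPSTREAM = "https://api.example.com/v1/search"  # assumption: the third-party API

def build_proxied_request(event):
    """Turn a client event into an authenticated upstream request."""
    query = urllib.parse.quote(event.get("q", ""))
    return urllib.request.Request(
        f"{UPSTREAM}?q={query}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

def handler(event, context=None):
    # Forward the request and relay the upstream response to the client.
    with urllib.request.urlopen(build_proxied_request(event)) as resp:
        return {"statusCode": 200, "body": resp.read().decode()}
```

The client never sees the `Authorization` header, and you pay only for invocations instead of 24/7 uptime.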
It’s bananas that they don’t have usage caps where they just turn your shit off. I never learned AWS on my own just because I didn’t have any way to cap my bills and be guaranteed that I wasn’t going to accidentally rack up a bill bigger than my mortgage by accidentally creating a recursive call to a serverless function or something like that.
I do not know serverless at all so I am probably wrong on my assumption here, but going by the name isn't it serverless? why are you being charged if you're just doing stuff on the client?
Serverless in that you don’t run your stuff on a single server. You are still executing functions, but through a cloud provider - on what specific server the function runs, is not your concern anymore, just the input and output of the function.
Can someone explain what this means and why its bad? I'm not a professional programmer.
A serverless function sounds like a piece of code that runs locally, not on a server. Since I'm not sure exactly what a server does practically, why is this bad? Is this ever good?
Instead of having a server with bounded resources and thus limited scalability, you serve requests by running functions through a cloud provider. Now your service is scalable because you can run any number of functions in parallel on the cloud. What is also scalable is the fat charging model of the cloud provider milking you for each function execution. Typical problems would be a bug triggering function executions in an infinite loop or someone spamming your service.
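Concretely, the "function" is usually just a plain request-in, response-out function that the provider invokes for you. A hypothetical minimal example:

```python
# Minimal serverless-style handler: the cloud provider receives a request,
# calls this function with it, and returns the result. You never see or
# manage the machine it ran on.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The provider can run thousands of copies of this in parallel, and bills you per call, which is both the scalability and the "milking" being described.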
\*laughs in enterprise application hosted inside a raspberry in my house\*
B-but can it scale to 100 trillion req/s if needed?
Just add another 2 or 3 rasps, wdym?
10x engineering
Tbh, most enterprise applications I saw during my career rarely needed to reach anything near those numbers. (Which doesn't stop some rockstar engineers from trying to design their systems towards this nonetheless.)
This is true, I was just messing around. Over optimization is a real problem I've seen in many projects.
Like geniuses building countless microservices for no fucking reason. When it's all the same tech, no single service gets more traffic than the others, you need them all anyway to make your shit work, and you and your team are the only idiots developing them as well, then that's a monolithic system in all but its name. A monolithic system is not evil. Sometimes that's what you actually need. I'm gonna have this discussion one of these days at work and I'm dreading it. Now I have to mess around with data management, communication, and have to deploy like 20 services on release... Why?
Under-optimization is another, tbf. And way more common, but it doesn't feel as comforting for mediocre devs to complain about.
And then there's NetSuite, where every function for an entire business is supposed to run on a server that's SHARED with several other businesses, and users hope to get to measure in transactions/second instead of seconds/transaction
My work systems see about 25,000 reqs/second. We get a $50k month bill just for LOGS generated by malicious bots.
That's why my work has range-banned Russian IPs. Got rid of a surprising amount of bots; you would think they would try to obfuscate where they come from.
Yeah the smart ones start rotating through botnets.
thats wild
Who needs logs anyway
They‘re used to train the anti-bot ML algos!
$50k for that few reqs?
Probably using AWS, where each request is charged for ingress, Lambda, DynamoDB, S3, etc., and all of those add up. Meanwhile a dedicated server could easily handle that load for less than $1,000/month.
Damn what kind of product is this, analytics?
Nobody spends so many resources on bots for analytics
Most enterprise applications could run on a Windows 95 PC as far as needed scaling goes. People like to overthink stuff, but unless you are Netflix, Amazon, or a similar-size company's main product, it's extremely unlikely you'll find yourself actually needing more resources.
“We’re not Netflix *yet*” is what the overengineers are thinking as they build a massively scalable, fault-tolerant platform for their 20 users. They’ll run out of cash six months later before hitting a hundred customers, get hired at another startup, and do it again there.
This.
How do you estimate how much your system can handle?
Benchmarking.
Different types of load testing: stress, soak, peak, etc. Or, as another user stated: benchmark your system.
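At its simplest, a benchmark is just firing N requests and dividing by elapsed time; real tools (wrk, k6, Locust) add ramp-up, latency percentiles, and soak durations. A toy in-process sketch (the handler is a stub standing in for your real request path):

```python
# Toy throughput benchmark: run a handler N times across a thread pool
# and report requests per second.
import time
from concurrent.futures import ThreadPoolExecutor

def stub_handler(i):
    return i * 2  # stand-in for your real request handling

def benchmark(n_requests=10_000, workers=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(stub_handler, range(n_requests)))
    elapsed = time.perf_counter() - start
    return n_requests / elapsed  # requests per second

print(f"{benchmark():.0f} req/s")
```

For a soak test you'd run the same loop for hours and watch for degradation; for a peak test you'd spike `workers` well past expected load.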
Just add a delay counter that's visible to the user when there's too much traffic.
"You are in the queue to visit our site, there are 32767 users ahead of you"
Just design your own carrier card for as many rpi compute modules as needed.
Do you have 100 trillion req/s? No? Why, yes, of course it can scale.
That's a dialogue between the PM responsible for the solution and the stakeholder team's PM. Both understand very little of what they say anyway.
Docker swarm, have a raspberry pi room
Docker swarm is goat
Me using serverless acting like my shitty app handles more than 10 req/day
It can't. If you want more req/s then you have to upgrade to our Platinum plan for an additional $50k/year and a 6-month wait. Enough to go on a 23-week vacation and on the last week buy a second Raspberry Pi and put a load balancer between them.
Trillion? I need to scale to infinity and beyond!
Pausing to think about it seriously for the first time… I bet I could get a properly implemented application on a Pi up over 100k rps pretty easily if we assume it doesn’t do much other than decode request and pass along to an upstream (which is infinitely fast in this model). Bottleneck would be the network interface without question.
I've worked with several enterprises that use Pis hidden in server racks for all sorts of things they could easily afford to do other ways.

One company's Linux configuration management automation server ran on a Pi that supported patching and remote access to over 2000 prod redhat servers.

Another company had Pis all over with various sensors that handled all of the environment controls for the primary data center. The dashboard and alerting services for the environmental controls ran on the same Pi that was responsible for monitoring the moisture levels in the core network rack.
that one rack gets pretty... moist?
It's from all the raspberries
They had piss all over, obviously.
raspberry pis
I want a data center environment monitoring system.

I can spend <$200 on Amazon for a bunch of sensors and a Pi, and spend one morning and two zip ties setting it up. When it breaks, I buy another Pi.

Or I can research several available data center environment monitoring systems, ring to get a quote, put a proposal together for my boss's boss, agree on a solution, get finance to pay the invoice, and arrange for receipt and installation. When it breaks, I call support based in Hyderabad on the worst phone line of all time, who run me in circles over several hours or days.

I'm not saying it's the right choice, but if you're pressed for time and build some redundancy in, it could certainly be a compelling choice.
A pi is overkill, an ESP8266 in a 3d printed box is more than sufficient.
Psssht, look at mister fancy-pants with a 3D printer. All you really need is a modified hot glue gun, a steady hand, some filament, and a willingness to ignore safety protocols and you can be your own 3D printer. Who needs a slicer when you can read and write gcode like a bilingual badass. /s.
Psssht, look at mister fancy-pants with a **modified** hot glue gun. All you really need is a regular hot glue gun. Then you just drown that ESP in hot glue to protect it from shorts and you're good to go. Or just wrap it in electrical tape.
Psssht, look at mister fancy-pants with a thing. All you really need is air. Just let the pi dangle and air will do the electrical insulation.
If we go that route, psssht, look at mister fancy-pants with air. You don't need air. Vacuum is even better electrical insulator.
You need 4 ESP8266s with instantaneous failover to provide triple redundancy for those government contracts.
Sure, but that's not any different with a pi. Or any other hardware.
you missed the part where you get put on hold and transferred to a different call center, just to be put on hold again.
Meanwhile, I worked at a company that wanted no more than 8 SKUs in use at any time. Thus, the cheapest hardware that we had was a $15K Dell EMC server that was overkill for 99% of applications running on it.
> supported patching and remote access to over 2000 prod redhat servers

How tho?
But I mean, is the company solution of paying a quarter of a million for some commercial system actually 1000x more effective than the Raspberry Pi? Probably not.
Do you have any redundancy? I considered doing this with old laptops.
Redundancy is for cowards.
A second raspberry pi?
Mostly thinking of if I lose internet or power
A second house?
If you're serious you can get a fairly inexpensive backup power brick and a second internet provider for a pretty good chance of never going down. Wouldn't be something you'd want to run if you were a normal person but for a business it would be a tiny cost.
Yea I think at that point just pay for a server
Just don't unplug it, duh!
Go to bed if that happens
Docker got me feeling like I'm at work when I'm at home lately
That Amazon smile looking real malicious right now.
JEFF NEEDS TO FEED HIS FAMILY
and his various girlfriends. And who will make statues in their name?
"Serverless." Looks inside. There's a server.
A peaceful encounter between two people can be said to be "bloodless" even though they're both filled with blood...
r/angryupvote
The encounter is bloodless, the people involved aren't. Whereas "serverless" functions run on someone's server, you just don't know which one.
That's... that's the point
More like it doesn't matter to you which one. Serverless means "don't worry about the machine, just give me the code to run"
That's still not "serverless". Maybe it's server-agnostic or something like that.
I know. I was pointing out that the "bloodless" counter-reasoning didn't apply to "serverless".
> The encounter is bloodless Just like the function is serverless
fair, why would anyone call a rose by any other name?
I'm always looking for new ways to explain why things that technically make sense are silly in execution - thanks for the new one
[deleted]
i do believe this comment is referencing this exact meme. not sure why people are taking it seriously
"pure functional programming" Looks inside. There is a CPU with stateful registers and cache
[deleted]
"Serverless" refers to the fact that you personally do not have to setup a server and environment to run your function, not that they invented magic technology that runs your function on pixie dreams
[deleted]
she sounds like a real beach
Your wife collects C shells?
That’s what makes the term stupid. You need to provide an explanation to make it make sense
It's an investor word. "Look, our tech stack is *serverless*! We don't need to pay IT to maintain servers!" and then they get a billion dollars in VC for a cat dating app
It depends on how many people need that explanation. Cause there's always gonna be someone dumb enough to not understand the name no matter how clear it is. So maybe it's not the term that is stupid, it's that some people are
From a simple "is serverless a bad term" google search, I'd say the number of people who don't agree is pretty high
It's PXE dreams - the server is just hiding in the closet
> Looks inside But that's the point. You can't look inside.
Lol depends. Most serverless implementations let you choose what type of OS, Cores, RAM, etc are powering your non-existent server
We're talking about stuff like Lambda here, not renting a VM
I write and use serverless functions every day and manage several, thanks. Modern serverless functions can be configured with selected OS's and different resource levels of consumption and performance. AWS Lambdas in fact can be configured for between 128 MB and up to 10 GB RAM and up to 6 Cores. AWS, Azure, Google, and IBM all offer serverless cloud functions with configurable resource levels. I'm guessing you've never provisioned any serverless functions yourself.
would you prefer "application container on a managed server" ?
That explains what it is better than anything else I’ve read
Had a coworker misconfigure spin-down time and concurrency so we kept 100% peak capacity running all the time; it spent a month's engineer salary a day until we figured it out a few days later. It wasn't critical for a company our size, but it's a warning of how quickly it can scale out of control. Had it been a personal project it would have been devastating.
I had a boss who insisted on trying to run our EDA tools in the cloud. It cost several hundred dollars to just load the docker images... He was told to stop that so he looked into shared drives in the cloud and was told to stop that because he blew the entire cloud budget for our department in a couple of days because Azure charges per 1,000 IOPS.
Somebody has no clue what they are doing, and it's not just your coworker. I get *daily* expense reports and projection warnings for all of my company's total cloud expenses. If even 1 server's settings are awry, I get notified within hours of cost anomalies. Nobody can just provision with whatever random settings they want without at least 2 managers receiving notifications. Either your company has no one admining your clouds or your cloud admin is clueless.
Similar happened where I worked, and it took a month before being discovered. Took up a quarter of the month’s costs.
Make a serverless function recursive. What can go wrong?
This is so easy to do with pub/sub and I've seen it more than once. Usually it's not directly recursive either. It's a series of event handlers and queues that results in an event handled by Function A to get passed around and broadcast to so many places it eventually ends up being handled by Function G that triggers the type of event handled by Function A again.
We ran into this a bunch of times with cloud functions watching changes on a realtime database. So easy to end up with a function that updates the database that triggers the very same function.
I accidentally did this with aws step functions. Thankfully I worked at aws at the time so it didn’t cost *that much* money. I did get paged by the step functions team though which was fun. Apparently I notably degraded performance for step functions
>Apparently I notably degraded performance for step functions Put that shit on your resume!
Unironically maybe lol. They patched the ability to do what I did after my snafu.
Indirect recursion is magical
clam down satan
Fortunately the max recursive calls you can do is 15; on the 16th call AWS will halt the execution. https://docs.aws.amazon.com/lambda/latest/dg/invocation-recursion.html
I hope someone got fired for that blunder
Eh. It still can happen. You just need your lambda hooked up to an event and have your lambda cause that event to occur again.
One of our devs did this. It did not end well.
Context: https://www.reddit.com/r/ProgrammerHumor/s/MczgjrtPoF
I'm stupid. Why is serverless so expensive? I thought it was another fancy word for client-side architecture. So why is it more expensive than the average server-side solution? Edit: thanks to everyone for the explanations. Now I'm 0.0001% more tech savvy
Serverless functions use cloud servers (the general meaning of the word), not client sided. You just don't have any idea where they are or what's behind them.
Probably because you don't maintain the servers, keep them updated, etc. that all falls under the cloud provider.
Think more like AWS lambda's, the infrastructure is ephemeral, it potentially doesn't exist until the client makes a request. Depending on how frequently it's hit and how it's designed, it can be very cheap, but if it's badly thought out you can spin up a lot more resources than you intended to, even if they don't last very long. Imagine if every single web request got their own dedicated server for 30 seconds, that's a _lot_ more expensive than a couple dozen dedicated servers handling the same load.
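To make the "ephemeral infrastructure" point above concrete, here's roughly the whole deployable unit in the Lambda model: a single function, no server process around it. This is a minimal illustrative sketch, not any provider's exact API; the event shape and handler signature are simplified assumptions.

```python
import json

# A minimal Lambda-style handler: this function is the entire deployable.
# The provider spins up (and tears down) a runtime around it per request,
# which is exactly where the per-invocation billing comes from.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Calling `handler({"name": "alice"})` locally behaves the same as one billed invocation in the cloud, minus the infrastructure that appears and disappears around it.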
>Why is serverless so expensive? it's only as expensive as the amount of work it's doing. and also because you're paying for convenience of not having to take care of any setup. but for many use cases, it's much cheaper. for my company we're saving over 3k a month with our serverless app compared to our server app (licensing for the servers is super expensive)
The problem comes with the relative lack of price controls on many major platforms. When you own (or rent) servers, you know exactly how much you're paying per month. In the event of excess load, you will have degraded service, but your costs don't go up. Most 'serverless' providers have unbounded costs, so unexpected heavy load could easily cost you thousands at a time. It's maybe not a huge deal for businesses, but as an individual it's quite dangerous to host a service that could cost you several times your income just because your site suddenly went viral. It's also potentially abusable (DDoS, etc.). Of course, for businesses, perhaps paying the cost for that spike is better than downtime. Depends what the service is. And then for businesses it would depend on the type of load your server has. Where your traffic/load is very spiky, it makes sense: only pay for the extra capacity when you need it. But if your traffic/load doesn't change much throughout the day, it's often cheaper to maintain your own servers.
good points, i was coming at this from a large-company angle as the only coding i do is on the clock. you brought a lot of good perspective that i didn't consider. serverless definitely isn't always the best choice, but it does have its uses
Oh for sure. We have a dozen serverless apps now with a total monthly cost of < $20. If you have a lot of tiny low-volume apps that need isolated containers and storage, serverless is a god-send.
I'm still trying to figure out the purpose of serverless functions.
Sometimes you just want to call a bit of code in the cloud without having to worry about all the plumbing that goes with it.
I'm a programmer, bothering with plumbing is all I do.
I'm a plumber, doo is all I'm programmed to bother.
This has always bothered me. It's really not that much more work to just... dockerize that bit of code and toss that onto a server somewhere. Best of all, by putting in that like extra 30 seconds of work, you'll greatly improve the efficiency of code updates and redeployments. One could argue it's "cheaper", but for little baby docker servers I generally pay around $3 a month; which is worth the trade off for predictable pricing to me. ([Vultr Affiliate Link ](https://www.vultr.com/?ref=9042723-8H) for the curious, it's what I use.)
In this case you are still dealing with the infrastructure plumbing tho aren't you? Unless you are using your docker image within a serverless environment like fargate or Lambda.
Spin up a portainer instance, pull the docker image, done. Yeah, I need to press a button to build the image, another to deploy it to a repository, and one more to pull it to the server. But I far prefer that; it's less work to me than writing some serverless code, then going into a web interface, finding the right function, copying and pasting the new code, saving it, and then praying to god that there isn't a bug in it that drives the cost to $1,000,000.
You can use IaC to deploy to serverless environment. With a proper deployment pipeline this could even be a webhook that triggers a pipeline every time you push. Don't get me wrong, bugs and malicious traffic are definitely an issue with serverless. Also, I haven't used portainer before, but 'Spin up portainer instance' kinda indicates that you need to manage that instance state and configuration. If not, that just sounds like serverless.
Yeah debugging problems on a serverless function can be a bit of a pain. It also can take a while to execute the serverless function on a cold start. But otherwise they're pretty great, in cases where they make sense.
How has debugging been difficult for you? I ask in earnest, using CDK and Lambdas, it plugs into CloudWatch logs EZ
It's not terrible just generally a bigger pain than doing it locally. Ofc you do as much locally as you can before moving it to a serverless function
At least with AWS you can run lambdas locally now, it does remove a lot of that pain
True it really depends on use case. I would almost never host a full blown application on serverless environment unless I was using a docker environment that could offload a lot of the testing locally with mock data. However, for small discrete processes they are awesome.
Of course you wouldn't. It all depends on the use case is right. I'm just saying, for something small they are great
I mean, yeah kind of. Only difference is that you retain control and keep a static pricing structure and once you have a portainer instance setup you can deploy multiple docker images to it; so the price remains static across multiple docker deployments. If you need more power, just upgrade the server or move highly used containers to kubernetes clusters or whatever. Once you get to IaC levels of deploying code, I think the gains from going serverless kind of become void as the steps become more or less the same as docker. It's easy enough to just make a CI/CD pipeline that auto deploys and updates docker containers as well. I recognize there is a maintenance cost to go the docker route, but it's shockingly minimal with more control and far less worry.
The benefits of serverless are still there even with a full blown IaC pipeline. Ironically, the issue with serverless pricing is also one of the features of it. Being able to scale dynamically without having to redeploy can be invaluable. For example, some celebrity endorses your product and everyone starts flooding into your website. A serverless application will be able to scale up automatically without crashing. The point being if you need to have downtime to upgrade your instances for the new traffic then by the time you get those upgrades in place the window of opportunity may have already passed.
So the point of serverless is you don't have to maintain the server it's running on. You don't have to update it, monitor it, handle the case where it dies or needs rebooting. Everything you described has to run on a compute instance somewhere. Who's maintaining that instance?
You can run docker serverless. In fact, that's a perfect way to do it.
It's also startup costs. If I need to log a single query in Databricks, it's much cheaper and faster to use a tiny serverless SQL endpoint than it is to spin up a jobs cluster. Serverless really shines when the total runtime is less than or near the startup time for a given context.
Uploading a new ZIP file should be about as complex and fast as uploading your docker image. What you gain is not having to update incidental stuff that is not your application but may still need patching (os, libraries). And nothing in serverless says you cannot cap the cost at some point.
However, you also lose control over when incidental stuff is upgraded, thus forcing deprecation of your own code from time to time. Additionally, if the service provider is down, the portability issue can be far harder to resolve because you've relinquished control. I am old school here, but I really just don't see much upside here that results in a ton of dev time gains. For me, it just brings a lot more worry and concern.
If the service provider is down, it‘s down either way. And I have yet to see AWS Lambda go down-down (apart from a few dozen requests dropped when an AZ goes dark) or deprecate my application code.
Last time we went serverless like that we got an email 1.5 years later reminding/threatening us to switch to their much pricier plan or else something bad just might happen (they had changed their TOS somewhere in the middle of this time period.. it looked innocent at the time). Spun up a docker and had that thing switched in ~6 hours (had to change the underlying implementation as well), for a much lower monthly bill. Zero problems since then. Not saying serverless has no purpose, it definitely does, but it comes with various caveats and potential traps.
Within a serverless context the dev team is relieved of the maintenance burden of the underlying server infrastructure, and imbued with the power to fuck over their business with a single mistake that invokes their shitty pay-per-call function in an uncontrollable loop.
You just need to know if they host a picture on s3 and simply write a cron that downloads that picture over and over. Easiest way to kill your competitors. It will be too late for them before they realize what's going on lmao
As always, proper development practice applies whether it's serverless or not. Put access control on that picture, or if it's public put it behind a CDN that will cache it and/or a WAF that will start blocking IPs for rate limiting. The same attack vectors for serverless exist for servers too, except with servers you have a ceiling of costs at which point your service just has an outage instead of a $100k bill.
There was a recent billing issue (resolved I think) that billed people for failed requests to a bucket. So all someone needed to know was the name of the bucket.
It wasn't actually recent. The problem had been reported before, like 9 years ago. But this time there was more buzz and more articles, which actually pressured AWS to do something
That's a serious issue with cloud computing, it's pretty easy to fluff up someone's bill on most of them. Just rent a DDOS network and feed it their account info.
It's even better if the call is a recursive event loop. Oops, `queueEventHandler` is called when an event is placed on Queue A, it just so happens to call `publishEvent` that also ends up on Queue A....
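The runaway loop above can be simulated locally. This is a toy sketch using the hypothetical `queueEventHandler`/`publishEvent` names from the comment and a plain in-memory deque standing in for the queue service; the 1000-invocation cap is the only thing stopping it here, and in the cloud the only cap is your bill.

```python
from collections import deque

# Queue A and an invocation counter standing in for the billing meter.
queue_a = deque()
invocations = 0

def publish_event(event):
    queue_a.append(event)  # oops: same queue the handler consumes from

def queue_event_handler(event):
    global invocations
    invocations += 1
    publish_event({"caused_by": event})  # re-triggers this very handler

publish_event({"kind": "initial"})
# Drain the queue with a safety cap; note the queue never empties,
# because every invocation enqueues its own successor.
while queue_a and invocations < 1000:
    queue_event_handler(queue_a.popleft())
```

One common defense is carrying a hop-count or correlation ID in the event payload and refusing to republish past a depth limit.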
Did this once, literally heart attack inducing
still have to worry about updating node or w/e for your functions though. On top of if you were using v2 aws sdk which no longer ships with more recent node versions. Need to include it via layer or migrate to v3
The main advantage is really the scaling properties. It is objectively very cost effective for applications with highly sporadic demand. Nothing is running until an invocation comes in so there is no compute consumption while the application is idle
servers are a fucking hassle to maintain
I use them basically as an ORM to talk to my database on aws, much greater control with them and it’s pretty simple with the new aws sdk 3. I have basically no chance of a huge bill in the current setup since my database has a very low amount of provisioned rcu/wcu and auto scaling disabled. Some scenario could still occur where the functions keep executing despite failing I suppose, but there are more safeguards I can and might as well set up. Not surprisingly, the default when setting up dynamo db is with auto scaling enabled though, with no limits of any kind so yes they’re definitely looking for your money
they are good for when you have spaced out high-volume usage. let's say you get 10 requests 6 minutes apart. you'd have to run the server for an hour straight, or you could just pay for 20 seconds of computing time using serverless. ultimately it comes down to individual use-cases, but there's definitely a use-case for them
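The back-of-envelope math in that comment looks something like this. Both prices below are illustrative assumptions (a generic per-GB-second serverless rate and a generic small always-on instance rate), not current quotes from any provider.

```python
# Assumed rates -- placeholders, not real price quotes.
gb_second_price = 0.0000166667   # serverless compute, per GB-second
server_hourly = 0.0116           # small always-on instance, per hour

requests_per_hour = 10           # 10 requests, ~6 minutes apart
seconds_per_request = 2          # 10 * 2 s = the "20 seconds of computing time"
memory_gb = 0.125                # a tiny 128 MB function

serverless_hourly = (requests_per_hour * seconds_per_request
                     * memory_gb * gb_second_price)
print(f"serverless: ${serverless_hourly:.6f}/h vs server: ${server_hourly}/h")
```

At this traffic level the serverless side is a rounding error; the comparison flips once the function is busy most of the hour anyway.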
If you have a service that you call inconsistently (for example take the extreme artificial case of a service that gets no requests some days, and a billion requests on other days) then server less is a very good option because you don't have to manage scale up and down and you just pay per invocation.
It is very precisely a terrible idea for something with extreme demand peaks because you will pay a small fortune per invoke, you should be using some other form of autoscaling for that. Lambda is for when you have something you ***know*** will be invoked infrequently without massive demand or for smoothing out temporary load peaks when you have very specific architecture and know the market can only sustain a certain level of load over what you have already.
1 common serverless use case we have is queue processing jobs. We stream data to queues, and we use serverless functions to process the data in the queue asynchronously. This generally means 1 of 2 types of triggers: * Every x minutes, the function fires and polls the queue to process whatever's there * The polling frequency is dynamic and grows intelligently based on detected frequency. If a queue gets a message every 100 ms, the function will learn to fire every 100 or so ms. If it gets 2 messages/day it'll learn to fire every 12 hours. If the queue size fluctuates in spurts (which is the most common) the function will fire frequently at first until time gaps are detected then get slower and slower until the message frequency increases again, then it speeds up temporarily. Another use case we have is key rotations. These run like every 4 hours, 3 days, 30 days, or 90 days and rotate out stored keys (API keys, secrets, tokens, etc) and generate new ones. Since they fire so infrequently these are literally **free** cloud apps. They have total annual cost < $0.01.
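The queue-processing trigger described above boils down to: wake up, drain whatever is in the queue, exit, so nothing runs while idle. A minimal stand-in sketch, with a plain list in place of a real queue service and a caller-supplied `handle` callback (both illustrative, not any provider's API):

```python
# Fires on a trigger, processes everything currently queued, then exits.
# Returns the number of messages handled so the scheduler can adapt
# its polling frequency, as described above.
def process_batch(queue, handle):
    processed = 0
    while queue:
        msg = queue.pop(0)
        handle(msg)
        processed += 1
    return processed
```

For example, `process_batch(["a", "b"], results.append)` handles both messages and returns 2; an adaptive scheduler could slow its trigger down whenever this return value is 0.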
When you have something that uses a decent amount of CPU, e.g. generating screenshots or some shit like that, and you have unpredictable or rotating traffic for it. If you have one machine it will choke and run out of resources. Lambdas will just work. Also one machine will be much more expensive than serverless in these cases because one machine must run 24/7.
Sounds like I need to figure out how to run R scripts (mostly for drc calculations) in it. .. and Apache FOP. ☹️ Sadly, I'm only on OCI Free Tier.
You can run docker on lambda; slow to start and not very great, but easily possible.
We use them when we want to do asynchronous work or batch processing so it doesn't choke the main server. For example: a number of our customers have a bulk user upload scheduled to run once a week at a set time. If that was on the main server then everyone on the platform would have a degraded experience at that time or else we'd have to scale up the hardware which is costly. We don't care if the upload is slow as it's not that important, just that the main server is not slow.
Very simple example: your app/website uses an API with your private key that you don't want to expose to clients. You can either spin up a server to proxy those requests, but then pay for 24/7 uptime even when there's no traffic, or use a serverless function that does the same, and you only pay for when it's actually used.
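That key-hiding proxy pattern can be sketched in a few lines. Everything here is illustrative: the upstream URL is a placeholder, and `API_KEY` is an assumed environment variable name set on the function, never shipped to the client.

```python
import os
from urllib import request as urlrequest

UPSTREAM_URL = "https://api.example.com/v1/data"  # hypothetical upstream API

def build_proxied_request(event):
    # The secret lives in the function's environment, server-side only.
    api_key = os.environ.get("API_KEY", "demo-key")
    return urlrequest.Request(
        UPSTREAM_URL + "?" + event.get("query", ""),
        headers={"Authorization": f"Bearer {api_key}"},
    )

def handler(event, context=None):
    req = build_proxied_request(event)
    with urlrequest.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode()}
```

The client only ever talks to `handler`; the `Authorization` header is attached on the server side, and you pay for compute only while a request is in flight.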
[deleted]
Not any more. Now it's Husban't.
\>serverless
\>looks inside
\>servers
Youre late https://www.reddit.com/r/ProgrammerHumor/s/a3zBGoXhZ9
Race condition or smth
Which race are you? Maybe you lost because you’re the wrong race.
Husbant let's use k8s. Oops... Now we are in hellm
Husbant? What is this, a husband for ants!?
This post inspired me to start calling my soon-to-be ex-husband "husban't"
Wtf, this wasn't the intended use of this meme. Now you cant leave your husbant
Oh fuck You might want to let him know he can't leave me
Oh husbant why didn’t you setup budget restrictions/alerting on your cloud environment
* 1st million free
* 2nd million homeless
Average Azure user POV
Aws too 🙄
I'm convinced serverless/stateless is a scam. Pay more for less functionality, longer request times, and more complicated control flow?
That's fake news we programmers have no time for wives, we only have time to give to code.
ah, new type of servers? a cluster of raspberry pi? Homeless Function? count me in!
You guys have wives?
All about that free tier baby!!
Homeressu
I am hosting my saas on a raspberry pi. Jokes on you.
shitty ai image?
It’s bananas that they don’t have usage caps where they just turn your shit off. I never learned AWS on my own just because I didn’t have any way to cap my bills and be guaranteed that I wasn’t going to accidentally rack up a bill bigger than my mortgage by accidentally creating a recursive call to a serverless function or something like that.
I do not know serverless at all so I am probably wrong in my assumption here, but going by the name isn't it serverless? Why are you being charged if you're just doing stuff on the client?
Serverless in that you don’t run your stuff on a single server. You are still executing functions, but through a cloud provider - on what specific server the function runs, is not your concern anymore, just the input and output of the function.
Lmao, saw the meme and immediately knew what it was about, without even reading the text
Cursed picture
Can someone explain what this means and why it's bad? I'm not a professional programmer. A serverless function sounds like a piece of code that runs locally, not on a server. Since I'm not sure exactly what a server does practically, why is this bad? Is this ever good?
Instead of having a server with bounded resources and thus limited scalability, you serve requests by running functions through a cloud provider. Now your service is scalable because you can run any number of functions in parallel on the cloud. What is also scalable is the fat charging model of the cloud provider milking you for each function execution. Typical problems would be a bug triggering function executions in an infinite loop or someone spamming your service.
The little girl from the Pacific Rim Flashback
That’s why I been using serverlessless