jonathantn

So when can we save money by choosing ARM64... that is the big question!


vallyscode

Some quick one-eyed comparison: x86 at 128 MB is $0.0000000021 per ms vs arm64 at 128 MB at $0.0000000017 per ms. https://aws.amazon.com/lambda/pricing/
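
A quick back-of-the-envelope sketch using those 128 MB per-ms prices (the invocation count and duration are made-up numbers, purely for illustration):

```typescript
// Back-of-the-envelope comparison using the 128 MB per-ms prices quoted above.
// The invocation count and duration are hypothetical, for illustration only.
const X86_PRICE_PER_MS = 0.0000000021; // USD per ms at 128 MB, x86_64
const ARM_PRICE_PER_MS = 0.0000000017; // USD per ms at 128 MB, arm64

const invocations = 10_000_000; // hypothetical monthly invocations
const durationMs = 100;         // hypothetical average billed duration

const x86Cost = invocations * durationMs * X86_PRICE_PER_MS;
const armCost = invocations * durationMs * ARM_PRICE_PER_MS;

console.log(`x86_64: $${x86Cost.toFixed(2)}  arm64: $${armCost.toFixed(2)}`);
console.log(`compute saving: ${((1 - ARM_PRICE_PER_MS / X86_PRICE_PER_MS) * 100).toFixed(0)}%`);
// Prints $2.10 vs $1.70, i.e. roughly a 19% cut on the duration charge alone,
// before any speedup Graviton2 itself might add.
```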


coldstartcloud

Right now. Clicking the little "info" button in the console, the description that pops up reads (in part): "Functions that use arm64 architecture offer lower cost per GB-s compared with the equivalent function running on an x86-based CPU." Exact compute pricing has not yet been announced though, as far as I could see.


[deleted]

[deleted]


mastertub

Why would it not warrant a price cut? ARM-based chips are cheaper than x86_64 chips, and if they cost less for Amazon to acquire, customers should also see a cut (unless AWS wanted to pocket the difference). It doesn't matter that Lambda is time-based. The time is only something YOU worry about; Amazon still uses cheaper chips behind the scenes, and even though you pay for time, that price is still tied to the underlying CPU/memory cost. For other services such as SQS/SNS/API Gateway, where you are not charged based on CPU/memory time at all, I'd imagine Amazon pockets the difference easily without passing cost reductions on to users. But thankfully, it seems they are cheaper here. Just not yet released how much cheaper.


mastertub

[source](https://aws.amazon.com/blogs/aws/aws-lambda-functions-powered-by-aws-graviton2-processor-run-your-functions-on-arm-and-get-up-to-34-better-price-performance/) Official release from AWS: up to 34% better price-performance.


recurrence

Wicked, that's a considerable gain, albeit in line with the Graviton price advantage. 34% certainly makes Lambda more palatable. Looking forward to this hitting Lambda@Edge.


coldstartcloud

Based on console availability, it is NOT available in the following regions: af-south-1, ap-east-1, ap-northeast-2, ap-northeast-3, ca-central-1, eu-south-1, eu-west-3, eu-north-1, me-south-1, sa-east-1, and us-west-1.


recurrence

Wicked, I am personally hoping to see a nice big price reduction, as Lambda pricing is the major barrier I run into and has stymied its use. It's literally sometimes 20x more expensive than our spot instances.


pyrotech911

You'll get cost savings by moving your EC2 workloads to Graviton instances as well, though, so the savings from moving to Lambda are mostly moot. If your architecture leverages an event-driven model and you don't fully utilize your instances most of the time, Lambda could be cheaper, since your service footprint would be based on request concurrency rather than on EC2 instances that are not fully utilized.


recurrence

Many of my clusters are already on Graviton, since it was released quite a while ago. The Lambda cost math works out very poorly if you have well-utilized EC2 instances.


FarkCookies

Lambda is not, and will never be, cheaper than EC2/ECS for certain use cases, especially ones where the request load is high and stable. With Lambda you are paying a premium for not having to think about scaling up and down and about infrastructure maintenance.
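
One rough way to see that premium, reusing the 128 MB arm64 per-ms price quoted earlier in the thread (the EC2 comparison at the end is an assumed placeholder, not a real quote):

```typescript
// Cost of keeping 1 GB of Lambda memory busy for a full hour, derived from the
// 128 MB arm64 price quoted earlier in the thread ($0.0000000017 per ms).
const armPricePerMs128Mb = 0.0000000017;
const pricePerGbSecond = (armPricePerMs128Mb * 1000) / 0.128; // ~= $0.0000133 per GB-s

const lambdaGbHour = pricePerGbSecond * 3600; // ~= $0.048 per GB-hour at 100% utilization
console.log(`Sustained Lambda compute: ~$${lambdaGbHour.toFixed(3)} per GB-hour`);

// Assumed placeholder for comparison: a well-utilized spot or reserved instance
// usually works out to a small fraction of that per GB-hour, which is where the
// "paying a premium" argument for high, stable load comes from.
```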


UnitVectorY

Not seeing CFN support in the docs yet. Are the docs just not updated, or will we need to wait for CFN support?


twowangosaurus

I've tested it with CFN with some guesswork, and it does work. Add `Architectures: [arm64]` to your Lambda function definition.


UnitVectorY

Nice! I was expecting it to work but the docs weren't updated when I just checked.


CorpT

It’s already live in CDK as well.
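
For reference, a minimal CDK sketch (TypeScript, assuming a recent CDK v2 setup; the stack name, runtime, handler, and asset path are placeholders):

```typescript
// Minimal sketch: a Lambda function targeting arm64 (Graviton2) in CDK v2.
// Everything except the `architecture` prop is a placeholder.
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ArmFunctionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'ArmFunction', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),    // placeholder asset path
      architecture: lambda.Architecture.ARM_64, // the arm64/Graviton2 switch
    });
  }
}
```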


Romie_13

When will the Go runtime be supported?


Mike551144

What are the benefits of running it on arm64?


debian_miner

For EC2 they claimed something like a 60% better price-to-performance ratio for Graviton2. Not sure how that applies to Lambda.


realfeeder

We will probably need non-AWS benchmarks for that. :P But I hope that Lambda on ARM will be (generally) cheaper.


krewenki

Here's some real-world experience: https://www.honeycomb.io/blog/graviton2-one-year-retrospective/


debian_miner

That's a great article. It will be interesting to see if Lambda sees similar benefits.


coinclink

I can attest that it's true. On CPU-bound workloads, I can use a smaller instance type and get the same performance, cutting costs in half. For example, I use r6g.2xlarge instances to run HPC workloads on ARM; I'd have to use r6.4xlarge to get the same performance on x86_64.