Go vs. Rust (AWS Lambda) performance benchmark (2023)


▬▬▬▬▬ Experience & Location 💼 ▬▬▬▬▬
► I’m a Senior Software Engineer at Juniper Networks (12+ years of experience)
► Located in San Francisco Bay Area, CA (US citizen)


#AWS #Lambda #DevOps
Comments

I think the latency comes from cold starts on Lambda? Also, we're not really benchmarking Rust, we're just calling a few AWS APIs from Rust, so this is basically an AWS API benchmark using Rust and Go? I think to actually benchmark Rust and Go, you'll have to devise a CPU-heavy task such as image resizing.

Sanjaymittal

As others pointed out, your test is testing the interaction with S3. I'd rather check something like:
- Randomly generate a 10 MB file
- Compress the file with the same algorithm and compression level
- Get a SHA256 hash of the file
- Done. Just discard the file.

Rinse and repeat 1,000x.

This is also somewhat vague because it tests just a couple of aspects that might be poorly written in the language. Yet it doesn't rely on external factors other than file upload. Also, your lambda will run on shared CPU and memory resources, which might be unevenly thrashed during your test. Even if your test doesn't rely on much I/O, the memory bus is shared among the CPUs and CPU cores.

When the tests are this close to each other on a shared resource, I'd just call it a draw.

JanosFeher
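For illustration, a minimal Go sketch of the kind of CPU-bound benchmark proposed above; the 10 MB size, gzip level, and iteration count are placeholders rather than anything from the video:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	const iterations = 1000
	const fileSize = 10 * 1024 * 1024 // 10 MB per iteration

	start := time.Now()
	for i := 0; i < iterations; i++ {
		// Randomly generate a 10 MB "file" in memory.
		data := make([]byte, fileSize)
		if _, err := rand.Read(data); err != nil {
			panic(err)
		}

		// Compress it with a fixed algorithm and compression level.
		var buf bytes.Buffer
		zw, _ := gzip.NewWriterLevel(&buf, gzip.BestSpeed) // level is valid, error ignored
		zw.Write(data)
		zw.Close()

		// Hash it, then discard everything.
		sum := sha256.Sum256(data)
		_ = sum
	}

	elapsed := time.Since(start)
	fmt.Printf("total: %s, per iteration: %s\n", elapsed, elapsed/iterations)
}
```

The same loop ported to Rust and Go would exercise the language runtimes rather than the latency of S3 and DynamoDB.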

1. Use the --release flag to compile a release build (30x faster)
2. Use the strip command to strip symbols from your Rust executable
3. Enable link-time optimizations (8.3 MB)
But at the end of the day it's not a big difference, because your program is waiting for I/O.
And Rust waits as fast as Go 😁

guest

Interesting results. We have a lambda that checks S3 directories (~20,000 calls to S3 per lambda run) and then updates the Glue Data Catalog. And we were choosing between Rust and Go implementations. The Rust implementation showed almost 2x better results (billed time) with small lambda containers like 128 MB, and almost no difference with 2 GB+ lambda containers. The most important metric for us is GB-seconds, which translates into money, and with Rust we are able to save almost 2x.

stasnorochevskiy
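A back-of-the-envelope sketch in Go of why billed duration at a small memory size translates directly into cost; the price per GB-second and the durations below are made-up placeholders, not figures from this comment:

```go
package main

import "fmt"

// billedCost estimates Lambda compute cost as memory (GB) * billed duration (s)
// * price per GB-second. The price used here is only an example, not a quote.
func billedCost(memoryGB, durationSec, pricePerGBSecond float64) float64 {
	return memoryGB * durationSec * pricePerGBSecond
}

func main() {
	const price = 0.0000166667 // example price per GB-second

	goCost := billedCost(0.128, 2.0, price)   // hypothetical Go run: 2 s at 128 MB
	rustCost := billedCost(0.128, 1.0, price) // hypothetical Rust run: 1 s at 128 MB

	fmt.Printf("Go:   $%.8f per invocation\n", goCost)
	fmt.Printf("Rust: $%.8f per invocation\n", rustCost)
	fmt.Printf("ratio: %.1fx\n", goCost/rustCost)
}
```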

Your benchmarks spent most of their time waiting for I/O calls to DynamoDB and S3 storage, which is why the results are so similar. In fact, an AWS Lambda function in this case is more about cold startup time than other aspects.

jstackoverflow

I have done numerous tests with Lambdas written in C++, but honestly I have not found any performance advantages. IMHO: Go remains the ideal "language" for Lambdas. I also used .NET Core, but it has the big problem of cold starts.

az

If I had to guess, it's either because the bottleneck is not the language but maybe the network round trip (for example), OR because the 200 ms latency is the absolute peak performance that any runtime can reach, and the little boost Go has over Rust is due to less rigorous error checking.

I think the first one sounds more plausible, since we aren't actually performing any language-specific tasks, we are just performing GETs and POSTs.

pierce

This is another excellent analysis, this is a great video!!!! Thanks!!!

GabrielPozo

I mean, Go has a pretty slim runtime. The difference isn't that large. Interestingly, Rust wins in terms of performance reliability when compared to the Go lambda runtime, but when you compiled your Go code to native it also won there.

I feel like the most likely reason Rust lost is the Tokio runtime and its scheduler. It has likely made some choices that work well in a long-running server but don't work as well for Lambdas.

Perhaps you can change the scheduler, or try another runtime...

On another note, can one take other performance characteristics from Lambdas? I'd be interested in how they otherwise compare in resource usage, and also how they scale with request count, etc.

SMTM

Go is so powerful. I found it very minimal, and it uses little memory and CPU. Now I want to use Go for my new projects...

musiclife

But if you compare the 95th percentile, then Go loses by about 20 ms. Depending on the load, a percentile might be a better indicator than the mean.

Berkeli
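A small Go sketch of the mean-vs-p95 point; the latency values are invented for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0-100) of the samples using the
// nearest-rank method; the input slice is sorted in place.
func percentile(samples []float64, p float64) float64 {
	sort.Float64s(samples)
	rank := int(float64(len(samples))*p/100.0+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(samples) {
		rank = len(samples) - 1
	}
	return samples[rank]
}

func main() {
	// Made-up latencies in milliseconds: mostly fast, with a slow tail.
	latencies := []float64{180, 185, 190, 192, 195, 198, 200, 210, 240, 320}

	var sum float64
	for _, v := range latencies {
		sum += v
	}
	fmt.Printf("mean: %.1f ms\n", sum/float64(len(latencies)))
	fmt.Printf("p95:  %.1f ms\n", percentile(latencies, 95))
}
```

With a skewed tail like this, the p95 sits well above the mean, which is why the two metrics can rank the runtimes differently.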

Most areas where Rust beats Go are when Go runs garbage collection on long-running applications, like web servers handling frequent events and caching. I'm assuming even if this were compute-bound and not I/O-bound it would still be close (assuming a short enough runtime for the function call).

arimill

I wonder if you would get different results if you ran the load test from EC2? Unless you are very close to the data center, there may be too much variance along the network path from your local host to AWS.

hypergraphic

There is no point in comparing the languages when both are much faster than other languages. These are the criteria that can influence the choice between Go and Rust for Cloud Native (in most cases):
1. Runtime safety - the safety guarantees of Rust, or the ease of goroutines when writing concurrent, parallel code
2. Language philosophy - do you like the simplicity of Go, with lots of boilerplate nil-check statements, or the borrow checker and Result-based error handling of Rust?
3. Human resource availability - is it easy to find and onboard/replace a developer when needed?
Check these 3 boxes for Go and Rust and you get your answer for which language your org should use.

mrdatapsycho

Why didn't you run the code in goroutines? It would have been even faster ) And if you applied the singleflight pattern, it would be even faster )))

sandrynin
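A minimal Go sketch of the goroutine suggestion, assuming the lambda makes two independent calls; fetchFromS3 and fetchFromDynamo are hypothetical stand-ins for the real S3 and DynamoDB calls:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Hypothetical stand-ins for the real S3 / DynamoDB calls.
func fetchFromS3() string {
	time.Sleep(100 * time.Millisecond)
	return "s3-object"
}

func fetchFromDynamo() string {
	time.Sleep(100 * time.Millisecond)
	return "dynamo-item"
}

func main() {
	var (
		wg       sync.WaitGroup
		s3Result string
		dbResult string
	)

	start := time.Now()

	// Run the two independent calls concurrently instead of sequentially.
	wg.Add(2)
	go func() { defer wg.Done(); s3Result = fetchFromS3() }()
	go func() { defer wg.Done(); dbResult = fetchFromDynamo() }()
	wg.Wait()

	// Roughly 100 ms total instead of ~200 ms when run back to back.
	fmt.Println(s3Result, dbResult, "in", time.Since(start))
}
```

This only helps when the calls really are independent; singleflight additionally deduplicates concurrent calls for the same key.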

This is a test of S3 and DynamoDB; it doesn't say much about Golang and Rust.

red

The difference may be simply in the AWS lib used for both languages, not in your code. But some comments here are really interesting.

viniciusvbf

Good short video.
But it would be nice if you added more computation, not only I/O. I thought the I/O part would be almost the same for any lean language.

napatyimjan

People saying that the difference is down to cold starts, I'm not so sure. Maybe the first one or two requests are cold starts, but then aren't they going to be warm starts after that?

brianevans
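That matches how a Go Lambda is typically structured: code in init() and package-level initialization runs once per cold start, while the handler body runs on every invocation, warm or cold. A minimal sketch using the aws-lambda-go library, with placeholder log messages:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

// coldStart is set once, when a new execution environment is initialized.
var coldStart = time.Now()

func init() {
	// Runs once per cold start, before any invocation is handled.
	log.Println("cold start: execution environment initialized")
}

// handler runs on every invocation, warm or cold.
func handler(ctx context.Context) (string, error) {
	log.Printf("invocation; environment age: %s", time.Since(coldStart))
	return "ok", nil
}

func main() {
	lambda.Start(handler)
}
```

Under sustained load, only the first request against each fresh execution environment pays the cold-start cost; subsequent requests reuse the warm environment until it is recycled.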

Can you add another test with GraalVM and native build - maybe with and without Quarkus? This seems extremely fast. At least in my short evaluations.

korbendallasmultipass