Echo server benchmark
This document benchmarks the performance of TCP echo servers I implemented in different languages using different Redis clients. The main motivations for choosing an echo server are:

- It is simple to implement and does not require expert-level knowledge of most languages.
- It is I/O bound: echo servers have very low CPU consumption in general and are therefore excellent for measuring how a program handles concurrent requests.
- It simulates a typical backend well with regard to concurrency.
I also imposed some constraints on the implementations:

- They should be simple and not require writing too much code.
- Favor standard idioms and avoid optimizations that require expert-level knowledge.
- Avoid complex machinery such as connection and thread pools.
To reproduce these results, run one of the echo-server programs in one terminal and the echo-server-client in another.
Without Redis
First I tested a pure TCP echo server, i.e. one that sends messages back to the client directly without interacting with Redis. The results can be seen below.
The tests were performed with 1000 concurrent TCP connections on localhost, where the average latency on my machine is 0.07 ms. On higher-latency networks the difference among the libraries is expected to decrease.
- I expected Libuv to perform similarly to Asio and Tokio.
- I did expect Node.js to come a little behind, given that it is JavaScript code. Otherwise I expected it to perform similarly to Libuv, since that is the framework behind it.
- Go surprised me: faster than Node.js and Libuv!
The code used in the benchmarks can be found at
- Asio: A variation of this Asio example.
- Libuv: Taken from this Libuv example.
With Redis
This is similar to the echo server described above, except that messages are echoed by Redis rather than by the echo server itself, which acts as a proxy between the client and the Redis server. The results can be seen below.
The tests were performed on a network with an average latency of 35 ms; otherwise they use the same number of TCP connections as the previous example.
As the reader can see, the Libuv and Rust tests are not depicted in the graph, for the following reasons:
- redis-rs: This client comes so far behind that it can’t even be represented together with the other benchmarks without making them look insignificant. I don’t know for sure why it is so slow; I suspect it has something to do with its lack of automatic-pipelining support. In fact, the more TCP connections I launch, the worse its performance gets.
- Libuv: I left it out because it would require writing too much C code. More specifically, I would have to use hiredis and implement support for pipelines manually.
The code used in the benchmarks can be found at
Conclusion
Redis clients have to support automatic pipelining to have competitive performance. For updates to this document, follow https://github.com/boostorg/redis.