Thursday, May 27, 2010

Node.js vs. Lua microbenchmarking

Node.js certainly seems to be a pretty interesting contender. At least according to this microbenchmark (ab -c 5 -n 100000):


Document Path: /
Document Length: 12 bytes

Concurrency Level: 5
Time taken for tests: 17.526 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 7600000 bytes
HTML transferred: 1200000 bytes
Requests per second: 5705.94 [#/sec] (mean)
Time per request: 0.876 [ms] (mean)
Time per request: 0.175 [ms] (mean, across all concurrent requests)
Transfer rate: 423.49 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 1
Processing: 0 1 0.5 1 14
Waiting: 0 1 0.5 1 14
Total: 0 1 0.5 1 14

Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 2
98% 2
99% 2
100% 14 (longest request)

And here goes the Lua result:

Document Path: /
Document Length: 11 bytes

Concurrency Level: 5
Time taken for tests: 6.666 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 34600000 bytes
HTML transferred: 1100000 bytes
Requests per second: 15001.08 [#/sec] (mean)
Time per request: 0.333 [ms] (mean)
Time per request: 0.067 [ms] (mean, across all concurrent requests)
Transfer rate: 5068.72 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 4
Processing: 0 0 0.1 0 4
Waiting: 0 0 0.1 0 4
Total: 0 0 0.1 0 5

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 1
98% 1
99% 1
100% 5 (longest request)

The node.js code was as follows:

var sys = require('sys'),
    http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8124); // port assumed
sys.puts('Server running at');
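
For reference, the full benchmark invocation was presumably along these lines (the post only shows the flags; the URL and port are assumptions):

```shell
# ab from the Apache HTTP server tools: 100000 requests, 5 concurrent.
# The target URL is a guess -- the post does not name the port.
ab -c 5 -n 100000 http://127.0.0.1:8124/
```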

And in its turn, the Lua script used to run the server is here (slightly tweaked to make it run):

This is not an unfair comparison: Node.js runs a heavily optimized server, while Lua runs an off-the-shelf example, and yet the latter outperforms Node.js almost 3x (15001 vs. 5706 requests per second).

Granted, more complicated code would mostly just slow things down further, so the ballpark should be about right.

I will update this post if I get anything more complex on the lua side.

EDIT: in fact, a little bit of reverse engineering shows that the Node.js code is also somewhat naive: "FOO /\n\n" works perfectly well as a "request" for the above app. So the tests more or less adequately show the status quo. Roughly a 3x speed difference, that is.
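
To illustrate (a sketch; the port and the use of nc are assumptions, not from the original post), such a nonsense "request" can be sent with:

```shell
# Sends the malformed request line "FOO /" to the hello-world server;
# the 2010-era Node.js parser still replied with a normal 200 response.
printf 'FOO /\n\n' | nc 127.0.0.1 8124
```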

EDIT2: has some more interesting stats.


phil pirj said...

And to compare the VM size:

node.js : 26 Mb
lua : 260 Kb

100x less

and, of course, there are lua libs that can help to dramatically reduce code size for the example

ab -c 100 could be more interesting

i'm not a lua adept; i'm working on a filtering http proxy server that should work on different platforms, and i had a lot of options:

ruby (10mb, almost impossible to run on windows by end users)
python (6mb)
java (30+ mb)

lua won hands down against the competitors, including node.js, which i've been evaluating since i started to work on the project

Andrew Yourtchenko said...

Yes. Of course, pure speed is not the only deciding factor - sometimes you'd value the possibility to reuse the code both on client and server.

Also, time-to-market for a quick prototype on the mass platforms might be shorter - since EverythingHasBeenWrittenAlready(tm).

Anyway, it is always good to have more than one tool in the toolbox, and I feel node.js could be one.

re. filtering proxy: proprietary code ?

jean-s├ębastien said...

Hi Andrew, i love that kind of benchmarks, always very instructive.
As a lua newbie, i'm very interested in your modified version of the lua code available there.
lua/luarocks are not so easy to configure on either OSX or Ubuntu.
And now i'm trying to execute the code, but i get errors when running it.

Andrew Yourtchenko said...

I've posted something that is close to what I used for testing here:

Though, retesting it a bit more I have seen varying figures, so I should probably rerun the tests; but in any case the main point was that the ballpark is the same.

If only the Node.js folks were not changing the API so much :)

phil pirj said...

I think node.js will have many more libraries this year already, probably even more than lua has. That's a pity, since lua is better in almost every aspect.

No, proxy is open source, you can check it at:

Andrew Yourtchenko said...

Yes, Node.js has the advantage of using a well-known language. Also, Lua is nowhere on the browser side, and being able to run the same language on both the client and the server is a big plus.

Thanks for the link to the proxy. I can't see the license text; is it MIT-licensed like Lua itself?

phil pirj said...

It's licensed under Do What The Fuck You Want To Public License:

Andrew Yourtchenko said...

Ah, thanks :-)

jean-s├ębastien said...

hi again. if you were to use this script in a production environment, would you run it behind a lighttpd frontend and then do some lua scripting?
because i just benchmarked node.js and lighttpd serving a one-pixel image, and node.js was faster.

Andrew Yourtchenko said...

I would not use this script in a production environment at all; it was merely to get a ballpark of the performance of Node.js (which I would consider *good*).

To be a good test this would need a lot more work.

If I really were to use this in something a bit closer to production, I'd not reinvent the wheel: I'd take the HTTP parser from Mongrel (Zed Shaw did beautiful work there, I think), put it into the libev event loop, and hook the callbacks into Lua, so as to minimize the amount of interpreted code in the "hot" and "static" path.

Then I'd most probably put this construct behind nginx.

But if you really have a choice between Lua (which you do not have much experience with) and Node.js (whose API is so far pretty fluid): choose neither.

Take the language/framework you know well - and if you need performance in your app, get more hardware, it's probably not all that expensive.

There are many many more aspects to the performance than pure "transactions per second" that we have in this microbenchmark.