Differences between Node.js and Erlang

 


The original article is hosted outside the Great Firewall: http://jlouisramblings.blogspot.com/2010/12/differences-between-nodejs-and-erlang_14.html

 

Suppose we have a canonical ping/pong server written in Node,
var sys = require("sys");
var http = require("http");
http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  res.end("Hello, World\n");
}).listen(8124, "127.0.0.1");
sys.puts("Server running at http://localhost:8124");
We can run this server easily and test it from the command line:
jlouis@illithid:~$ curl http://localhost:8124
Hello, World
And it does what we expect. Now suppose we do something silly. We make a tiny change to the Javascript code:
var sys = require("sys");
var http = require("http");
http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  res.end("Hello, World\n");
  while(true) {
    // Do nothing
  }
}).listen(8124, "127.0.0.1");
sys.puts("Server running at http://localhost:8124");
Now, the first invocation of our test works, but the second hangs:
jlouis@illithid:~$ curl -m 10 http://localhost:8124
Hello, World
jlouis@illithid:~$ curl -m 10 http://localhost:8124
curl: (28) Operation timed out after 10001 milliseconds with 0 bytes received
This should not surprise anybody. What we have illustrated here should be common knowledge: Node does not multitask preemptively, but asks each event handler to cooperate by yielding to the next one in turn.
The example was silly. Now, suppose we have a more realistic example where we do work, but it completes:
var sys = require("sys");
var http = require("http");
http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  x = 0;
  while(x < 100000000) {
    // Do nothing
    x++;
  }
  res.end("Hello, World " + x + "\n");
}).listen(8124, "127.0.0.1");
sys.puts("Server running at http://localhost:8124");
We introduce a loop which does some real work, and we keep it from being eliminated as dead code by requiring its result in the output. Our server will still respond, but it will take some time before it does so.
Let us lay siege to the server:
jlouis@illithid:~$ siege -l -t3M http://localhost:8124
** SIEGE 2.69
** Preparing 15 concurrent users for battle.
The server is now under siege...
[..]
For three minutes, we hammer the server and get a CSV file, which we can then load into R and process.

 

ERLANG ENTERS…

For comparison, we take Mochiweb, an Erlang webserver. We do not choose it specifically for its speed or its behaviour. We choose it simply because it is written in Erlang, and the Erlang VM context switches preemptively: the scheduler gives each process a fixed budget of reductions and suspends it when that budget is spent, so no single process can monopolize a scheduler.
The relevant part of the Mochiweb-based server is this:
count(X, 0) -> X;
count(X, N) -> count(X+1, N-1).

loop(Req, _DocRoot) ->
    "/" ++ Path = Req:get(path),
    try
        case Req:get(method) of
            Method when Method =:= 'GET' ->
                X = count(0, 100000000),
                Req:respond({200, [],
                             ["Hello, World ", integer_to_list(X), "\n"]});
            [..]
It should be pretty straightforward. We implement the counter as a tail-recursive loop and we force its calculation by requesting it to be part of the output.
erl -pa deps/mochiweb/ebin -pa ebin
Erlang R14B02 (erts-5.8.3) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]

1> application:start(erlang_test).
{error,{not_started,crypto}}
2> application:start(crypto).
ok
3> application:start(erlang_test).
** Found 0 name clashes in code paths
ok
4>
Notice that both of my CPUs are put to work here automatically. But performance is not the point I want to make.
Again, we lay siege to this system:
jlouis@illithid:~$ siege -l -t3M http://localhost:8080 | tee erlang.log 

ENTER R

We can take these data and load them into R for visualization:
> a <- read.csv("erlang.log", header=FALSE);
> b <- read.csv("node.js.log", header=FALSE);
> png(file="density.png")
> plot(density(b$V3), col="blue", xlim=c(0,40), ylim=c(0, 0.35)); lines(density(a$V3), col="green")
> dev.off()
> png("boxplot.png")
> boxplot(cbind(a$V3, b$V3))
> dev.off()

DISCUSSION

What have we seen here? We have a situation where Node.js has a much more erratic response time than Erlang. We see that while some Node.js responses complete very fast (a little more than one second), there are also responses which take 29.5 seconds to complete. Here is the summary of the data for Node.js:
> summary(b$V3)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.040   6.328  13.580  13.940  20.940  29.590
And for Erlang:
> summary(a$V3)
  Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   9.87   11.21   12.24   12.21   13.16   15.32

The densities are as follows (green is Erlang, blue is Node.js):
[density plot]

And for completeness, a boxplot:
[boxplot]


This is a result of Erlang preemptively multitasking the different processes, so its responses all arrive around the same time. You can't really use the mean for anything: Erlang ran on 2 CPUs whereas Node.js only ran on one. But the kernel density plot clearly shows how Erlang responds stably while the response times of Node.js are erratic.
Does this mean Node.js is bad? No! Most Node.js programs will not blindly loop like this. They will call into a database, make another web request, or the like. When they do this, they will allow other requests to be processed in the event loop, and the effect we have seen here disappears. It does, however, show that if a Node.js request is expensive to process, it will block other requests from getting served. Contrast this with Erlang, where cheap requests will get through almost instantly because the scheduler preemptively switches context.
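For illustration, here is a rough sketch of what such a well-behaved handler looks like. The expensive loop is replaced by an asynchronous operation (a setTimeout standing in as a stand-in for a database or upstream call; the 100 ms delay is an assumption of this sketch, not part of the original benchmark). While one request waits for its callback, the event loop is free to accept and answer other requests.

var sys = require("sys");
var http = require("http");

http.createServer(function (req, res) {
  // Simulate a database or upstream call with an asynchronous timer.
  // While we wait for the callback to fire, the event loop is free
  // to serve other incoming requests.
  setTimeout(function () {
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end("Hello, World (after async work)\n");
  }, 100);
}).listen(8124, "127.0.0.1");
sys.puts("Server running at http://localhost:8124");

With this handler, two concurrent curl requests both finish after roughly 100 ms each, instead of the second one waiting for the first to complete.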
It also hints that you should make histogram plots for your services (kernel density plots are especially nice for showing how the observations spread out). You may be serving all requests, but how long does it take to serve the slowest one? A user might not want to wait 30 seconds for a result, but may accept 10 seconds.

 

CONCLUSION

My main goal was to set out and exemplify a major difference in how a system like Node.js handles requests compared to Erlang, and I think I have succeeded. It underpins the idea that you need to solve problems differently depending on the platform. In Node.js, you will need to break up long-running jobs manually to give others a chance at the CPU (this is essentially cooperative multitasking). In Erlang, this is not a problem, and a single bad process cannot hose the system as a whole. On the other hand, I am sure there are problems for which Node.js shines and which will have to be worked around in Erlang.
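To make that concrete, here is one way such a manual break-up might look: the counting loop from the earlier example split into slices, yielding back to the event loop between slices. The countChunked helper, the chunk size, and the use of setImmediate (which assumes a newer Node than the 2010-era sys.puts API used above) are assumptions of this sketch, not something the original post prescribes.

var http = require("http");

// Count up to `limit` in slices of `chunk` iterations, scheduling the
// next slice with setImmediate so the event loop can serve other
// requests in between. Calls `done(x)` with the final count.
function countChunked(x, limit, chunk, done) {
  var end = Math.min(x + chunk, limit);
  while (x < end) {
    x++;
  }
  if (x < limit) {
    setImmediate(function () { countChunked(x, limit, chunk, done); });
  } else {
    done(x);
  }
}

http.createServer(function (req, res) {
  countChunked(0, 100000000, 1000000, function (x) {
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end("Hello, World " + x + "\n");
  });
}).listen(8124, "127.0.0.1");
console.log("Server running at http://localhost:8124");

Each request still burns the same total CPU time, but no single request can hold the event loop hostage for the whole computation, which is exactly the property Erlang's preemptive scheduler gives you for free.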