With all the chatter about how uber-amazing Node.js is I figured I'd do a little comparison with my favorite language du jour: Go. Node's claim is that it's "a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications."
So, easy to build; fast; scalable.
Here's the canonical Node program for Hello, World from the Node home page.
```javascript
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
```

And here's the equivalent program written in Go. It's a little longer because Go insists on explicitly importing the things you use and has a little more boilerplate (such as having a `func main()`).
```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		fmt.Fprintf(w, "Hello, World\r\n")
	})
	log.Printf("Server running at http://127.0.0.1:1337/")
	http.ListenAndServe("127.0.0.1:1337", nil)
}
```

So, in terms of 'easy to build' there's no clear winner. Node is a little more compact, but the core functionality is the same: start a server and run a callback when a connection is made.
So, then there's 'fast' and 'scalable'. To test those I used `ab` (ApacheBench) on Ubuntu on a MacBook Pro with 8GB of RAM. Here are the results.
The first test was `ab -n 1000000` (i.e. 1,000,000 requests):
| Language | Elapsed time (seconds) | Requests/second | ms per request | Transfer rate (KBps) | Peak real memory (KB) | Peak virtual memory (KB) |
|---|---|---|---|---|---|---|
| Go | 137.542 | 7270.51 | 0.138 | 681.61 | 4,120 | 145,308 |
| Node | 200.341 | 4989.26 | 0.200 | 370.30 | 49,258 | 638,700 |
The second test was `ab -n 1000000 -c 100` (i.e. 1,000,000 requests with a concurrency of 100):
| Language | Elapsed time (seconds) | Requests/second | ms per request | Transfer rate (KBps) | Peak real memory (KB) | Peak virtual memory (KB) |
|---|---|---|---|---|---|---|
| Go | 141.824 | 7051.02 | 0.142 | 661.00 | 21,684 | 902,884 |
| Node | 177.472 | 5634.68 | 0.177 | 418.20 | 50,724 | 643,912 |
So, Node was always slower than Go and (almost always) used more memory. The only time Go was 'worse' than Node was in virtual memory usage in the second test.
I'm unimpressed by Node. Go's approach (here it is spawning a goroutine per connection) is much simpler from a programming perspective and more performant. The code handling the connection doesn't have to be concerned about blocking/non-blocking calls or whether something is asynchronous. You just write the code to handle that particular URL.
PS I should add that I did these tests in an Ubuntu VM that was restricted to a single processor core. That was done to eliminate any advantage Go would get from its ability to use multiple cores. The bottom line is that Go is faster and easy to write.
PS People have asked what happens with more simultaneous connections. Here are some graphs showing the real and virtual memory use and the requests per second for Go and Node. Go uses less real memory and serves more requests per second at 0, 100, 500 and 1,000 simultaneous requests, but Go's virtual memory grows.


