Node.js HTTP server can't handle large responses under high load

Tested on a high-load server handling approximately 500-600 requests per second. After hours of debugging, I ended up with just a simple HTTP server.

I noticed that when the response body was bigger than, let's say, 60 KB, I got this error:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:679:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at TCP.onread (net.js:527:26)

And after that the CPU usage went through the roof.

But with the exact same code, when the response was set to 10 KB of text, everything worked smoothly. Weird...

Has anyone encountered this before? Pleading for help.

This is the full script:

var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {

    for (var i = 0; i < numCPUs; i++) cluster.fork();

    // replace any worker that dies
    cluster.on('exit', function(worker, code, signal) {
        cluster.fork();
    });

} else {

    var http = require('http');

    var app = function(req, res) {
        res.writeHead(200, {'Content-Type': 'text/html', 'Access-Control-Allow-Origin': '*'});
        res.end( 60k_of_text___or___10k_of_text );
    };

    http.createServer(app).listen(80);

}

Right now all strings are first converted to Buffer instances. This can put a heavy load on the garbage collector, which has to clean up after each request. Run your application with --prof and examine the v8.log file with tools/*-tick-processor, and you may see this.
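To try that yourself, here is a sketch of the profiling workflow (server.js stands in for your script; the tools/*-tick-processor scripts live in a Node/V8 source checkout, and newer Node versions bundle the processor as --prof-process and may name the log isolate-*.log instead of v8.log):

```shell
# Start the server with V8 profiling enabled; the tick log is written
# to the working directory (v8.log, or isolate-*.log on newer Node).
node --prof server.js

# In another terminal, generate some load, e.g.:
wrk 'http://127.0.0.1:8011/'

# Process the tick log. With a Node source checkout:
#   tools/linux-tick-processor v8.log   (or mac-tick-processor)
# With newer Node versions, the processor is built in:
node --prof-process isolate-*.log > ticks.txt
```

The processed output breaks ticks down by function, so heavy time spent in GC or in string-to-Buffer conversion shows up directly.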

There is work being done to correct this so strings are written out to memory directly, then cleaned up when the request is complete. It has been implemented for file system writes in f5e13ae, but not yet for other cases (it is much more difficult to implement than it sounds).

Converting strings to Buffers is also very costly, especially for utf8 strings (the default). Where you can, definitely pre-cache the string as a Buffer and use that. Here is an example script:

var http = require('http');

// build a 60 KB string of 'a' characters
var str = '';
for (var i = 0; i < 60000; i++)
  str += 'a';

// uncomment to send a pre-built Buffer instead of a string:
//str = new Buffer(str, 'binary');

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain',
                      'Access-Control-Allow-Origin': '*'});
  res.end(str);
}).listen(8011, '127.0.0.1');

And here are the results from running wrk 'http://127.0.0.1:8011/' against the server, first passing str as a string, then as a persisted Buffer:

Running 10s test @ http://127.0.0.1:8011/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us    -nan%
    Req/Sec     0.00      0.00     0.00      -nan%
  8625 requests in 10.00s, 495.01MB read
Requests/sec:    862.44
Transfer/sec:     49.50MB


Running 10s test @ http://127.0.0.1:8011/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   624.07us  100.77us   4.45ms   99.17%
    Req/Sec     7.98k   729.82     9.00k    57.59%
  158711 requests in 10.00s, 8.90GB read
Requests/sec:  15871.44
Transfer/sec:      0.89GB

At the very least, if you know the string you're passing contains only ascii characters, then replace res.end(str) with res.end(new Buffer(str, 'binary')). This will use the v8::String::WriteOneByte method, which is much, much faster. Here are the results using that change:

Running 10s test @ http://127.0.0.1:8011/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   827.55us  540.57us   7.03ms   97.38%
    Req/Sec     6.06k     1.11k    8.00k    85.93%
  121425 requests in 10.00s, 6.81GB read
Requests/sec:  12142.62
Transfer/sec:    696.89MB