
Benchmarking PHP5 vs. Node.js

Long story short: today we spent some time thinking about which language/framework would be best for building an API. It needs to be stable under heavy load, fast, and capable of CPU-intensive work. We ended up with two candidates, PHP5 and Node.js, and decided to do a little benchmarking to find out which one would serve us better. For the first test, we set up one virtual machine running Apache + PHP5 and another running Express + Node.js, and used Siege, an HTTP stress tester, to benchmark both servers (a sample Siege invocation is sketched just after the list below). Siege opens many concurrent connections and reports statistics such as number of hits, MB transferred, transaction rate, and so on. For both servers, we used 4 combinations of settings:
  1. 1 core and 1,000 concurrent users
  2. 1 core and 1,500 concurrent users
  3. 4 cores and 1,000 concurrent users
  4. 4 cores and 1,500 concurrent users
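We no longer have the exact Siege invocations on hand, but they were along these lines (the flags are standard Siege options; the run length, host names, and the PHP endpoint path are placeholders, not the values we actually used):

```bash
# -b: benchmark mode (no delay between requests), -c: concurrent users, -t: run length
siege -b -c 1000 -t 1M http://node-vm:3000/siege     # 1,000 concurrent users against Node.js
siege -b -c 1500 -t 1M http://apache-vm/siege.php    # 1,500 concurrent users against Apache + PHP5
```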
The test consisted of a very simple task: receive the user's request, run a SELECT query against a database, and send the raw results back; we tried to keep both implementations as similar as possible. The database was PostgreSQL, running on yet another virtual machine. This is the source code we used for the tests:

```javascript
// JavaScript
var express = require('express');
var pg = require('pg');

var config = {
  user: 'postgres',
  database: '...',
  password: '...',
  host: '...',
  max: 10,                  // max clients kept in the pool
  idleTimeoutMillis: 30000
};

var app = express();
var pool = new pg.Pool(config);
var query = 'SELECT * FROM testtable;';

function siege(req, res, next) {
  // Grab a client from the pool, run the query and return the rows as JSON
  pool.connect(function (err, client, done) {
    if (err) throw err;
    client.query(query, function (err, result) {
      done(); // release the client back to the pool
      if (err) throw err;
      res.json(result.rows);
    });
  });
}

app.get('/siege', siege);

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
```

```php
<?php
// php
$connection = pg_connect("host=... dbname=... user=... password=...");
$result = pg_query($connection, "SELECT * FROM testtable");
// fetch all rows and return them, mirroring what the Node.js handler does
echo json_encode(pg_fetch_all($result));
pg_close($connection);
```
These are the results:

Results with 1 core:

| Result                 | Node.js, 1,000 users | PHP, 1,000 users | Node.js, 1,500 users | PHP, 1,500 users* |
|------------------------|----------------------|------------------|----------------------|-------------------|
| Number of hits         | 39,000               | 4,300            | 2,000                | -                 |
| Availability (%)       | 100                  | 95               | 66                   | -                 |
| MB transferred         | 11                   | 0.06             | 0.56                 | -                 |
| Transaction rate (t/s) | 1,300                | 148              | 800                  | -                 |
| Concurrency            | 655                  | 355              | 570                  | -                 |
| Longest transfer (s)   | 0.96                 | 28.14            | 1.16                 | -                 |
| Shortest transfer (s)  | 0.08                 | 0.15             | 0.11                 | -                 |
Results with 4 cores:

| Result                 | Node.js, 1,000 users | PHP, 1,000 users | Node.js, 1,500 users | PHP, 1,500 users* |
|------------------------|----------------------|------------------|----------------------|-------------------|
| Number of hits         | 55,000               | 5,100            | 14,000               | -                 |
| Availability (%)       | 100                  | 98               | 93                   | -                 |
| MB transferred         | 16.02                | 0.07             | 4                    | -                 |
| Transaction rate (t/s) | 1,800                | 170              | 1,700                | -                 |
| Concurrency            | 19.6                 | 424              | 73                   | -                 |
| Longest transfer (s)   | 0.4                  | 28.16            | 1                    | -                 |
| Shortest transfer (s)  | 0                    | 0                | 0                    | -                 |
\* Aborted (too many errors)

I was really expecting the opposite result: Node.js seems to be incredibly fast compared to PHP for these operations. For the next test, we tried to focus on CPU-intensive work by running the following algorithm, which counts the prime numbers below N (yes, the implementations could be optimized, but the whole point was to keep them CPU-intensive):

```javascript
// JavaScript
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  // Naive primality test: try every possible divisor
  function isPrime(num) {
    for (var i = 2; i < num; i++) {
      if (num % i === 0) {
        return false;
      }
    }
    return true;
  }

  // Count the odd primes below n
  function display(n) {
    var count = 0;
    for (var i = 3; i < n; i += 2) {
      if (isPrime(i)) {
        count++;
      }
    }
    console.log(count);
  }

  display(70000);
  res.json({});
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
```

```php
<?php
// php

// Naive primality test: try every possible divisor
function isPrime($num) {
    for ($i = 2; $i < $num; $i++) {
        if ($num % $i === 0) {
            return false;
        }
    }
    return true;
}

// Count the odd primes below $n
function display($n) {
    $count = 0;
    for ($i = 3; $i < $n; $i += 2) {
        if (isPrime($i)) {
            $count++;
        }
    }
    echo $count;
}

display(70000);
```

My expectation was that PHP would perform much better at this kind of task.
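The seconds below are wall-clock time per request. I no longer have the exact measurement code, but instrumenting the computation itself can be done with Node's built-in console.time / console.timeEnd; a minimal sketch (not the code we actually ran) looks like this:

```javascript
// Minimal timing sketch: wrap the prime-counting loop with
// console.time / console.timeEnd, which print elapsed wall-clock time.
function isPrime(num) {
  for (var i = 2; i < num; i++) {
    if (num % i === 0) {
      return false;
    }
  }
  return true;
}

function countPrimes(n) {
  var count = 0;
  for (var i = 3; i < n; i += 2) {
    if (isPrime(i)) {
      count++;
    }
  }
  return count;
}

console.time('primes');          // start a labelled timer
console.log(countPrimes(70000)); // the work being measured
console.timeEnd('primes');       // prints something like "primes: 2001.123ms"
```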
These were the results:

| Result  | Node.js (N = 70,000) | PHP (N = 70,000) | Node.js (N = 100,000) | PHP (N = 100,000)           |
|---------|----------------------|------------------|-----------------------|-----------------------------|
| Seconds | 2                    | 26               | 2.5                   | Timed out after ~33 seconds |
I don't know what to think anymore. I guess we are not using PHP.