I wrote the same program in both Node.js and C++ to compare their performance on Mac OS X.
First in C++:
#include <iostream>
#include <time.h>
using namespace std;
int main() {
    clock_t t1, t2;
    cout << "Initializing\n";
    t1 = clock();
    double m = 0;
    for (double i = 0; i != 10000000000; ++i) {
        m = i * -1 + i;
    }
    t2 = clock();
    float diff = (((float) t2 - (float) t1) / 1000000.0F) * 1000;
    cout << "Finalizing with " << diff << "ms\n";
}
Second in Node.js:
console.log("Initializing");
t1 = Date.now();
var m = 0;
for (var i = 0; i != 10000000000; i++) {
m = i * -1 + i;
}
t2 = Date.now();
var diff = t2 - t1;
console.log("Finalizing with %dms", diff);
The result was 50000ms for C++ and 22000ms for Node.js.
Why is Node.js faster for this kind of operation?
Thanks.
UPDATE:
Switching from double to long int gave me 22000ms, the same as Node.js.
The problem is that the code in the two languages is not equivalent. In C++ you used double, while in JavaScript the variables were optimized to be integers (even though their type is Number, which in the general case is a floating-point type). And, of course, floating-point operations are slower than integer operations.
Try replacing double with int, or better with long, in the C++ version. This will ensure you are working with integers in both versions.
If you do, please consider posting the results so we can see the difference.
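A minimal sketch of what that change could look like, keeping the rest of the structure the same (long long is used here instead of long so the 10000000000 bound fits even where long is 32 bits):

#include <iostream>
#include <time.h>
using namespace std;
int main() {
    cout << "Initializing\n";
    clock_t t1 = clock();
    // Same loop as the original, but in integer arithmetic instead of double.
    long long m = 0;
    for (long long i = 0; i != 10000000000LL; ++i) {
        m = i * -1 + i;
    }
    clock_t t2 = clock();
    // CLOCKS_PER_SEC instead of the hard-coded 1000000 of the original.
    float diff = (((float) t2 - (float) t1) / CLOCKS_PER_SEC) * 1000;
    // Note: m is never read afterwards, so build without heavy optimization
    // or the loop may be removed entirely.
    cout << "Finalizing with " << diff << "ms\n";
}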
It is quite difficult to measure performance straight off with this kind of code, because both the C++ compiler and the V8 JIT apply different optimizations to the generated native code.
A few things to look out for:
- i != 10000000000 is dangerous. Always compare doubles using inequalities (<, >) rather than equalities (==, !=).
- The bound 10000000000 does not fit into an int, so use a long long type instead. The thing is, Node.js may actually do this optimization automatically, since it is dynamically typed.
- You never use m anywhere. If you compile with g++ -O3, the compiler may actually optimize away the whole loop (try it!).
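Putting those points together, one possible rewrite of the benchmark looks like this; the volatile qualifier and the printing of m are additions of this sketch, not part of the original code:

#include <iostream>
#include <time.h>
using namespace std;
int main() {
    cout << "Initializing\n";
    clock_t t1 = clock();
    // volatile forces the compiler to keep every store to m,
    // so even g++ -O3 cannot delete the loop as dead code.
    volatile long long m = 0;
    // Integer counter, and an inequality (<) instead of != as the loop condition.
    for (long long i = 0; i < 10000000000LL; ++i) {
        m = i * -1 + i;
    }
    clock_t t2 = clock();
    float diff = (((float) t2 - (float) t1) / CLOCKS_PER_SEC) * 1000;
    // Printing m also makes the result observable.
    cout << "Finalizing with " << diff << "ms (m = " << m << ")\n";
}

Comparing a g++ -O3 build of this version with an -O3 build of the original should make it obvious how readily the optimizer removes a loop whose result is never used.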