Following is a stitch-up of the CPU consumption graphs over multiple runs of a simple script. I am intrigued by how much these graphs vary over short periods of time. Does anybody have an idea what may be causing these curves to change so dramatically within a few minutes' span?
The driver loop that pins the node process to one CPU at a time:
$ for (( i = 0; i < 8; ++i )) ; do echo CPU: $i; taskset -c $i node ticks_per_second.js; done
The script: Node Ticks per Second
Node version: 0.10.8 (installed using NVM)
OS: Ubuntu 12.04
Hardware: MacBook Pro 9,1
This was an exercise to see the theoretical limit of how many events I can generate/process from a single NodeJS process.
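The linked script is not reproduced here, but a minimal sketch of what such a tick counter might look like (the structure and names below are my assumption, not the original code):

// ticks_per_second.js (sketch): re-queue a zero-work callback on every
// turn of the event loop, and report the turnover rate once per second.
// setImmediate is available as of Node 0.10.
var ticks = 0;

function tick() {
    ++ticks;
    setImmediate(tick); // schedule ourselves again on the next loop turn
}

setInterval(function () {
    console.log(ticks + ' ticks/second');
    ticks = 0;
}, 1000);

tick();

A loop like this keeps the event loop fully saturated, which is what makes the process pin a whole CPU under taskset.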
PS: I understand what kinds of tasks NodeJS is good at (I/O) and which it is not good at (CPU), so please suppress the urge to discuss those aspects. I am looking for advice to make NodeJS perform predictably.

Turns out that the Gnome System Monitor is the culprit!!
(Note: In the following screenshots, the upper graph is made by the KSysGuard, and the lower graph is from Gnome System Monitor.)
The update interval has to be set to '10' seconds just so that System Monitor will move the graph every 1 second (see screenshot 1).
When the update interval is set to 1 second, the graph moves far too fast!! (see screenshot 2)
KSysGuard, by contrast, is much more responsive, and updates the graph at precisely 1-second intervals when asked to do so (see screenshot 1).
Thankfully, the KSysGuard package does not depend on the rest of the KDE system, so installing it pulled in just the GUI and the ksysguardd daemon, with no unnecessary bloat.
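For anyone wanting to try it, the install on Ubuntu should be a one-liner (package name from the standard repositories):
$ sudo apt-get install ksysguard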
Bottom line: don't use Gnome System Monitor; use KSysGuard instead, as it does the right thing and is very flexible.
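And if in doubt about any graphing tool, a quick cross-check from the terminal sidesteps the rendering question entirely, e.g.:
$ top -d 1    # refresh every 1 second and watch the node process's %CPU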

