Understanding Node.js Resident Set Size vs. Heap Size (OOM errors)

I'm fighting a problem in my Node server where an error causes the app to crash:

FATAL ERROR: JS Allocation failed - process out of memory

I'm using nodetime to look at the memory usage. I think I'm narrowing down the problem, but I'm still pretty confused. Check out this function, which uses Mongoose to load a cached object from MongoDB:

StreamCache.prototype.loadCachedStream = function(_id, callback)
{
    this.model.findOne({'_id': _id}, {'objects':1,'last_updated':1}, function(err, d){
        callback(err, d ? d.toObject() : null);
        //The toObject() seems to cause the RSS to move into heap...?
    });
};

Notice the commented line. Prior to 11pm last night, the line was just

callback(err,d);

I added the toObject() call at 11pm last night.

Now look at my memory charts:

[memory charts: RSS and heap usage over time, before and after the change]

Notice that prior to this change, the RSS grew but the heap did not. After the change, the heap and RSS grew in lockstep (until the app crashed). Note that the out-of-memory error (above) was happening both before and after the change; however, the change seems to have made the heap's growth track the RSS's leak, where before the heap was flat(ish).

My assumption is that, for some reason, the toObject() call moved the leaked data from RSS into the heap, so now the heap is leaking along with the RSS.
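For what it's worth, here is a standalone illustration of how memory can show up in RSS but not the heap, and then move into the heap once it's copied into plain JS values (whether that's what toObject() is doing here is my speculation, not something I've confirmed with a profiler). It uses only built-in Node APIs:

var big = new Buffer(100 * 1024 * 1024); // 100 MB outside the V8 heap: RSS jumps, heapUsed barely moves
big.fill(65);                            // fill with 'A' so the string conversion below is well-defined
console.log(process.memoryUsage());

var str = big.toString('utf8');          // a 100 MB JS string: now heapUsed jumps too
console.log(process.memoryUsage());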

Does that sound right?

If so... any ideas what might be causing the issue?

I think the heap/RSS correlation is irrelevant to the out-of-memory problem you are experiencing.

(What's the difference between the two anyway? Roughly: one is the total amount of virtual memory the process uses, the other is the portion sitting in physical RAM at the moment. If the two started correlating, it just means the change introduced data structures that the OS has decided are important to keep in physical RAM, e.g. because they are accessed often.)
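If you want to watch both numbers without nodetime, Node's built-in process.memoryUsage() reports them directly. A minimal sketch (the 5-second interval is arbitrary):

setInterval(function() {
    var m = process.memoryUsage();
    console.log('rss: ' + Math.round(m.rss / 1048576) + ' MB, ' +
                'heapTotal: ' + Math.round(m.heapTotal / 1048576) + ' MB, ' +
                'heapUsed: ' + Math.round(m.heapUsed / 1048576) + ' MB');
}, 5000);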

You say the cause of the problem is the d.toObject() call, but why do you think that toObject() alone won't cause an out-of-memory error?

What if the d object is huge, say the root of a big object tree that consumes all the memory when it gets copied into a plain JavaScript object?
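One way to test that hypothesis (a sketch reusing the model from your question; .lean() is Mongoose's standard option for getting a plain object back instead of a hydrated Document, so no toObject() call is needed): log the rough serialized size of what findOne returns.

StreamCache.prototype.loadCachedStream = function(_id, callback)
{
    this.model.findOne({'_id': _id}, {'objects': 1, 'last_updated': 1})
        .lean()
        .exec(function(err, d) {
            if (d) {
                // JSON.stringify allocates a full copy of the document,
                // so treat this as a one-off diagnostic, not production code.
                console.log('cached stream ~' + JSON.stringify(d).length + ' bytes');
            }
            callback(err, d || null);
        });
};

If that number turns out to be enormous, the problem is less about toObject() and more about loading the whole objects array on every call.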