My main question is the following:
is information hiding going to hurt performance (both CPU and memory) when you have a frequently-accessed function (returning an object) that you want people to interact with?
Or is this considered micro-optimization, especially on a platform like Node.js (a long-running process)? Should I favor performance over information hiding or not?
By reading Douglas Crockford's article about information hiding, I know I can add private members and privileged methods. But according to this quote from Douglas Crockford, the prototype mechanism helps you conserve memory:
When a member is sought and it isn't found in the object itself, then it is taken from the object's constructor's prototype member. The prototype mechanism is used for inheritance. It also conserves memory.
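To make the sharing Crockford describes concrete, here is a minimal sketch (the `Counter` constructor is mine, purely illustrative): a method placed on the prototype exists once and is found by lookup, instead of being re-created for every instance.

```javascript
// A method on the prototype is one shared function object;
// per-instance state still lives on each object itself.
function Counter() {
  this.count = 0;
}
Counter.prototype.increment = function () {
  this.count += 1;
};

const a = new Counter();
const b = new Counter();
a.increment();

console.log(a.increment === b.increment); // true: one shared function object
console.log(a.count, b.count);            // 1 0: state is per-instance
```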
If only memory is saved, then I guess this might not be such a big deal, because memory is getting cheaper by the second. But according to this article from John Resig, it is also faster (CPU-wise) to add a lot of properties to the prototype chain.
Instantiating a function with a bunch of prototype properties is very, very, fast. It completely blows the Module pattern, and similar, out of the water. Thus, if you have a frequently-accessed function (returning an object) that you want people to interact with, then it's to your advantage to have the object properties be in the prototype chain and instantiate it.
```javascript
// Very fast
function User(){}
User.prototype = { /* Lots of properties ... */ };

// Very slow
function User(){
    return { /* Lots of properties */ };
}
```
This also might not be such a big problem because CPUs are getting much faster (Moore's Law). Furthermore, because JavaScript engines have made incredible progress performance-wise, I am wondering if I should even consider this, or just use information hiding.
I also have this sub-question I guess:
How would I effectively measure both memory and CPU usage on a platform like Node.js?
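For what it's worth, a rough sketch of such a measurement using only Node.js built-ins (`process.hrtime()` for elapsed time, `process.memoryUsage()` for heap usage); the `measure` helper, the sample `User` constructor, and the iteration count are mine, not from either article:

```javascript
// Rough micro-benchmark sketch: time and heap growth for many allocations.
// Numbers are indicative only; run with `node --expose-gc` so global.gc()
// can trigger a collection first and reduce GC noise.
function measure(label, makeObject, iterations) {
  if (global.gc) global.gc();                      // only defined with --expose-gc
  const heapBefore = process.memoryUsage().heapUsed;
  const start = process.hrtime();

  const objects = new Array(iterations);           // keep objects alive while timing
  for (let i = 0; i < iterations; i++) {
    objects[i] = makeObject();
  }

  const [sec, nsec] = process.hrtime(start);
  const heapAfter = process.memoryUsage().heapUsed;
  console.log(label + ': ' + (sec * 1e3 + nsec / 1e6).toFixed(1) + ' ms, ' +
              ((heapAfter - heapBefore) / 1024 / 1024).toFixed(1) + ' MB');
  return objects;
}

function User(name) { this.name = name; }          // illustrative constructor
measure('plain constructor', function () { return new User('bob'); }, 100000);
```

Running each variant you care about through the same helper gives a crude but direct comparison on your actual Node.js version.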
This sure sounds to me like you're trying to over-optimize something that rarely needs optimizing. Write your code first to be reliable, second to be maintainable by you or others, and then optimize only the things that really need optimizing, after you have proof that they need it.
First off, making things truly private adds complication. You should likely use a member variable unless you really, really need the privacy.
Second off, closures used to implement privacy do come at some memory cost. The amount of memory per use is small, so if you really do need the privacy (the first point), then unless you have a LOT of these closures, you will probably not even notice the additional consumption.
Third, if you have a lot of these objects (e.g. thousands) and you suspect the memory consumption might actually be an important issue, then you probably should do a quick memory test in a couple of popular browsers (one version with the private closure and one with the simpler public member variable) to see how much of a difference there is. The difference is implementation specific so could vary significantly from one browser to the next.
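As a concrete shape for such a test, the two variants could look like this (the names are illustrative): a public member with a shared prototype method versus a closure that gives real privacy at the cost of one function object per instance.

```javascript
// Variant A: public member, method shared via the prototype.
function PublicUser(name) {
  this.name = name;                    // accessible to callers, but cheap
}
PublicUser.prototype.getName = function () {
  return this.name;
};

// Variant B: truly private via a closure; each call creates its own
// getName function, which is where the extra memory goes.
function PrivateUser(name) {
  return {
    getName: function () { return name; }
  };
}

const pub1 = new PublicUser('a'), pub2 = new PublicUser('b');
const priv1 = PrivateUser('a'), priv2 = PrivateUser('b');

console.log(pub1.getName === pub2.getName);    // true: one shared method
console.log(priv1.getName === priv2.getName);  // false: one method per instance
```

Create a few thousand of each and compare `process.memoryUsage().heapUsed` in Node, or take heap snapshots in the browsers you target, to see how much the difference actually amounts to on your engines.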
Based on the findings of those measurements, you can decide which way to go.
If you don't have thousands of these objects, then write your code the simplest way possible that achieves your objectives, and spend your time on the things that prove to really matter once your app is running.