Node.js - concurrent writing and reading of objects

Well, I've been looking for this topic but couldn't find it, so I'll ask:

I need to read and write an array on the server frequently, so I decided not to use a database, but I don't know what the best practice is. It will be a lot of data; could it be a JavaScript array? Is it possible to read and write in a non-blocking way while avoiding concurrency problems?

It is an MMORPG, an online multiplayer game, and the data is all the players currently online. The process will write to it on almost every step a player takes and will read it right after. I was thinking about a child process, or something to make the processing faster and non-blocking, but I don't even know what a child process is HAHA!

Thank you

Since Node.js is single-threaded, any time your code is doing something it is technically blocking the process from doing anything else. Once it hits a point where it is waiting for a callback, Node will start processing other requests until your callback comes back. How much data is 'a lot'? What do you need to do with the data?
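A minimal sketch of that interleaving (the file path here is just a placeholder):

```js
const fs = require('fs');

// Kick off an async read; the callback only runs once the I/O completes.
fs.readFile('/tmp/example.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('file callback fired, length:', data.length);
});

// While the read is pending, the event loop is free, so this line
// (and any incoming requests) runs before the callback above.
console.log('this logs first');
```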

If you're not doing much processing on the data, and there is a lot of it, a database solution wouldn't be a bad idea. Node drivers for databases (MongoDB, Redis, etc.) are async and non-blocking, so Node does a great job of interleaving the calls, resulting in the ability to handle lots and lots of calls concurrently. Using storage like this (instead of just in memory) also means you could use Node cluster to spin up multiple Node processes to use more than one core on your machine (as well as using multiple machines) to respond to requests.
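As a rough sketch of what that looks like with the `redis` npm client (v4-style promise API; the key name and fields are made up for illustration):

```js
const { createClient } = require('redis');

async function main() {
  const client = createClient();      // defaults to localhost:6379
  await client.connect();

  // Write a player's state; the call is async, so the event loop keeps
  // serving other requests while the round trip to Redis is in flight.
  await client.hSet('player:42', { x: '10', y: '20' });

  // Read it back later, from this process or any other Node process
  // (e.g. cluster workers) pointed at the same Redis instance.
  const pos = await client.hGetAll('player:42');
  console.log(pos); // { x: '10', y: '20' }

  await client.quit();
}

main().catch(console.error);
```

Because the data lives outside the Node process, every cluster worker sees the same state, which is what makes scaling across cores and machines straightforward.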

If you're not doing much processing on the data, and the data set is pretty small, and you don't care about sharing the data among Node processes, then sure, just keep it in memory in whatever data structure you want: arrays, dictionaries, or something like an LRU cache.
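If you go that route, a plain Map keyed by player id is often enough (the names below are just illustrative):

```js
// Lives for the lifetime of the single Node process.
const players = new Map();

function updatePlayer(id, partialState) {
  players.set(id, { ...players.get(id), ...partialState });
}

function getPlayer(id) {
  return players.get(id);
}

updatePlayer(42, { x: 10, y: 20 });
console.log(getPlayer(42)); // { x: 10, y: 20 }
```

Since all of your JavaScript runs on one thread, these synchronous reads and writes can't interleave mid-operation, so you don't need locks; the concurrency problems only show up once you split the data across processes.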

If you are doing lots of processing on the data, and there is a lot of it, then you'll need to do a bit more work, since this isn't Node's greatest strength (processing blocks the one and only thread, which means it can't handle additional requests). I would suggest something like a pub/sub model with a non-blocking queue and worker processes handling the processing.
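The queue technology is up to you; as one minimal sketch of the worker-process idea, here is `child_process.fork` with Node's built-in message passing (the file names and message shape are made up for illustration):

```js
// main.js: hand CPU-heavy jobs to a separate process so the event loop stays free.
const { fork } = require('child_process');

const worker = fork('./worker.js');   // hypothetical worker script, shown below

worker.on('message', (result) => {
  console.log('processed:', result);
});

// "Publish" a job; the main process keeps handling requests in the meantime.
worker.send({ playerId: 42, action: 'move', x: 10, y: 20 });
```

```js
// worker.js: consumes jobs and reports results back to the parent.
process.on('message', (job) => {
  // ...do the expensive processing here...
  const result = { playerId: job.playerId, ok: true };
  process.send(result);
});
```

A fuller setup would usually put a broker such as Redis pub/sub or RabbitMQ between the web processes and a pool of workers, but the shape is the same: publish the work, keep the main thread free, and consume results asynchronously.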