I'm using MongoDB with a single shard and a 3-member replica set, each member on a different machine. Two members each run a mongos router, one of those two also runs a config server, and all three run a shard server. A single hostname resolves to the two IP addresses of the instances running a mongos router. I connect to MongoDB from my Node.js application using mongoose v2.5.10 with the following code:
var mongoose = require('mongoose');
mongoose.connection.on('error', function (err) {
  console.error('MongoDB connection error: ', err);
});
var db = mongoose.connect('mongodb://username:password@' + mongoHost + '/database');
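One alternative to pruning DNS records is to hand the driver every mongos host up front: MongoDB connection strings accept a comma-separated host list, letting the driver fail over at the connection level. Whether mongoose 2.5.10's bundled driver handles this failover cleanly is an assumption worth verifying; the hostnames below are placeholders.

```javascript
// Hypothetical mongos hosts; with both listed, the driver can try the
// surviving router without waiting for a DNS change to propagate.
var hosts = ['mongos-a.example.com:27017', 'mongos-b.example.com:27017'];

// Standard URI form: mongodb://user:pass@host1:port1,host2:port2/database
var uri = 'mongodb://username:password@' + hosts.join(',') + '/database';

// mongoose.connect(uri);  // same call as before, now naming both routers
```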
I simulate a failover scenario by terminating one of the instances running a mongos router; when I detect that the instance is down, I prune its IP address from the DNS record. However, my application does not reconnect to MongoDB: the mongoose connection's error event is never emitted, and the application hangs until I restart my Node server.
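A hang without an error event is consistent with the TCP socket never learning the peer is gone (a terminated instance sends no FIN/RST). One mitigation is enabling keepalive and auto-reconnect on the driver's sockets. The option names below (`server.socketOptions`, `keepAlive`, `auto_reconnect`) come from the node-mongodb-native driver of that era and are an assumption to verify against the installed driver version:

```javascript
// Hypothetical driver options -- verify the exact names against the
// node-mongodb-native version bundled with mongoose 2.5.10.
var options = {
  server: {
    // TCP keepalive probes let the OS detect a peer that vanished without
    // closing the connection, so the socket errors out instead of hanging.
    socketOptions: { keepAlive: 1 },
    auto_reconnect: true
  }
};

// mongoose.connect('mongodb://username:password@' + mongoHost + '/database', options);
```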
What might be going wrong here?
UPDATE:
I've just confirmed that the net client the MongoDB Node.js driver uses to connect (this.connection = net.createConnection(this.socketOptions.port, this.socketOptions.host);) receives neither a close nor an error event when I terminate the Amazon instance. So this could be a lower-level Node/TCP issue rather than a mongoose one.