How can I keep TCP packets from being dropped?

I'm creating a program on my Android phone to send the output of the camera to a server on the same network. Here is my Java code:

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {

    public void onPreviewFrame(byte[] data, Camera cam) {

        try {
            socket = new Socket("XXX.XXX.XXX.XXX", 3000);
            out = socket.getOutputStream();
            out.write(data);
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        camera.addCallbackBuffer(data);
    }
});

The server is a NodeJS server:

time = 0

video_server.on 'connection', (socket) ->
    buffer = []
    socket.on 'data', (data) ->
            buffer.push data
    socket.on 'end', ->
            new_time = (new Date()).getTime()
            fps = Math.round(1000/(new_time - time)*100)/100
            console.log fps
            time = new_time

            stream = fs.createWriteStream 'image.jpg'
            stream.on 'close', ->
                    console.log 'Image saved.', fps
            stream.write data for data in buffer
            stream.end()

My terminal is showing about 1.5 fps (5 Mbps). I know very little about network programming, but there should definitely be enough bandwidth: each frame is 640x480x1.5 bytes, which at 18 fps works out to about 66 Mbps, and the local network should easily be able to handle that. Yet my debugger in Android is giving me a lot of "Connection refused" messages.

Any help on fixing my bad network practices would be great. (I'll get to image compression in a little bit -- but right now I need to optimize this step).

You've designed the system so that it has to do many times more work than it should have to do. You're requiring a connection to be built up and torn down for each frame transferred. That is not only killing your throughput, but it can also run you out of resources.

With a sane design, transferring a frame would require only sending and receiving the frame data. With your design, every frame requires building up a TCP connection (a three-way handshake), sending and receiving the frame data, and then tearing the connection down again. Worse, the receiver cannot know it has received all of the frame data until the connection shutdown occurs, so the teardown cannot be hidden in the background.

Design a sane protocol and the problems will go away.
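A minimal sketch of such a protocol, assuming a hypothetical `FrameSender` class (the name and structure are mine, not from the question): open one socket for the whole session and prefix each frame with its length, so the receiver knows where a frame ends without waiting for a connection shutdown.

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Hypothetical sketch: one persistent connection, length-prefixed frames.
public class FrameSender {
    private final DataOutputStream out;

    // Open the connection once, outside the preview callback.
    public FrameSender(String host, int port) throws IOException {
        this(new Socket(host, port).getOutputStream());
    }

    // Also accepts any OutputStream, which keeps the framing logic testable.
    public FrameSender(OutputStream sink) {
        out = new DataOutputStream(new BufferedOutputStream(sink));
    }

    // Call once per frame from onPreviewFrame().
    public void sendFrame(byte[] frame) throws IOException {
        out.writeInt(frame.length); // 4-byte big-endian length prefix
        out.write(frame);
        out.flush();
    }
}
```

On the Node side, the server would read four bytes, then that many bytes of frame data, and could save each image as soon as the full frame arrives instead of waiting for `'end'`.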

Is this working at all? I do not see where you are binding to port 3000 on the server.

In any case, if this is a video stream, you should probably be using UDP instead of TCP. With UDP, packets may be dropped, but for a video stream this will probably not be noticeable. UDP has much less overhead than TCP because far fewer messages are exchanged: TCP does a lot of acking to make sure each piece of data reaches its destination, while UDP doesn't care and therefore sends fewer packets. In my experience, UDP-based code is generally less complex than TCP-based code.
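A minimal UDP sender sketch (again, the class name and structure are my own): there is no connection to set up or tear down, each chunk of data is simply fired at the server.

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;
import java.net.UnknownHostException;

// Hypothetical sketch: fire-and-forget UDP sender.
public class UdpFrameSender {
    private final DatagramSocket socket;
    private final InetAddress addr;
    private final int port;

    public UdpFrameSender(String host, int port)
            throws SocketException, UnknownHostException {
        this.socket = new DatagramSocket(); // no connection setup at all
        this.addr = InetAddress.getByName(host);
        this.port = port;
    }

    public void send(byte[] chunk) throws IOException {
        // Each datagram stands alone; a dropped packet is simply a lost chunk.
        socket.send(new DatagramPacket(chunk, chunk.length, addr, port));
    }
}
```

One caveat: a raw 640x480x1.5 frame is roughly 460 KB, well over the ~64 KB maximum size of a UDP datagram, so each frame would have to be split into numbered chunks and reassembled on the server (which is essentially what protocols like RTP do for video).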

_ryan