Hi folks,
in the past I've developed a few applications using TCP sockets, and there is one situation I've always had in mind that might cause problems.
TCP is a stream-based protocol, so when I want to work with discrete packets, I have to create some kind of generic packet handler.
Let's say my generic packet structure looks like this (pseudocode):

Code:
struct GenericPacket {
    uint16 opcode;
    uint16 packetSize;
    byte   payload[packetSize];
};

I never had any problems with this, even with bigger packets (size > 1500 bytes), but recently I tried to create a WebSocket server in C# and run it against the Autobahn test suite.
In the suite there is a test where packets are sent in chops:

Case 1.1.8: Send text message with payload of length 65536. Sent out data in chops of 997 octets.

In a worst-case scenario, I would receive a packet announcing size X but only get a few of its bytes at first (because of lag, packet loss, etc.), so I have to wait until the remaining bytes arrive.
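Something like this loop is what I mean; just a minimal C# sketch (ReadFull and ReadPacket are illustrative names, not from my actual server), built on the assumption that Stream.Read may return fewer bytes than requested:

Code:
using System;
using System.IO;

static class PacketReader
{
    // Read exactly `count` bytes, looping because a single Read() call
    // on a TCP stream may return fewer bytes than requested.
    public static void ReadFull(Stream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("connection closed mid-packet");
            offset += read;
        }
    }

    // Parse one GenericPacket: 2-byte opcode + 2-byte size, then the payload.
    // BitConverter uses the host's byte order (little-endian on most PCs).
    public static byte[] ReadPacket(Stream stream)
    {
        var header = new byte[4];
        ReadFull(stream, header, 4);
        ushort opcode = BitConverter.ToUInt16(header, 0);
        ushort packetSize = BitConverter.ToUInt16(header, 2);

        var payload = new byte[packetSize];
        ReadFull(stream, payload, packetSize);
        return payload;
    }
}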
Now my questions: Is this the general case in network programming? Do I have to take into account that packets might arrive chunked? How do I handle those scenarios properly? Or is this a problem which doesn't really exist?
In my WebSocket server I read all data from the socket into a byte buffer, so I can peek into the data more easily. But I've never seen such a buffer in any open source projects on the web. None of the projects I've looked into used any kind of data buffer to make sure they always get the whole packet.
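To make that concrete, this is roughly the idea (a simplified sketch, not my actual server code; FrameBuffer and TryExtractPacket are just illustrative names):

Code:
using System;
using System.Collections.Generic;

class FrameBuffer
{
    private readonly List<byte> _buffer = new List<byte>();

    // Append whatever the socket handed us, however small the chunk.
    public void Feed(byte[] data, int count)
    {
        for (int i = 0; i < count; i++)
            _buffer.Add(data[i]);
    }

    // Peek at the header; only hand out a packet once all of it has arrived.
    public bool TryExtractPacket(out byte[] payload)
    {
        payload = null;
        if (_buffer.Count < 4)
            return false; // header not complete yet

        // packetSize sits at offset 2 (assuming little-endian on the wire)
        ushort packetSize = (ushort)(_buffer[2] | (_buffer[3] << 8));
        if (_buffer.Count < 4 + packetSize)
            return false; // payload not complete yet, keep waiting

        payload = _buffer.GetRange(4, packetSize).ToArray();
        _buffer.RemoveRange(0, 4 + packetSize);
        return true;
    }
}

Every receive call just feeds the buffer, and a while (TryExtractPacket(out var p)) loop drains all complete packets, so a 65536-byte message arriving in 997-octet chops gets reassembled automatically.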