Hi,
In the new "C++ GUI Programming with Qt 4" book, on page 331, there is an example of a client-server application. In the method that does the actual reading of data from the server, there is the following code:
// 'in' is a QDataStream bound to the socket
if (bytesAvailable() < sizeof(quint16)) // the first field in the block should be sizeof(quint16)
    return;
in >> nextBlockSize; // read the first field, which holds the next block's size
if (bytesAvailable() < nextBlockSize)
    return;
// code for reading the whole block
What I understand from this code is that the readyRead() signal is emitted each time a new byte arrives on the socket; otherwise the code above would not work (and I am assuming it does).
The text in the book doesn't address this point at all, so I was wondering whether some of you with experience in socket programming with Qt 4 could confirm my assumption.
The reason I have trouble accepting this is that it would mean the QTcpSocket class emits as many signals as there are bytes sent by the server, which seems like a lot of overhead...
On the other hand, it could be that I am missing something here, and I hope you can point me to what it might be.
Thanks.