Sockets and buffers

I’m going to take a stab at explaining the difference between the old “Use small send buffer (enable if upload slow downloads a lot)” option and the new “Socket write buffer” and “Socket read buffer” options.

The old “small send buffer” option was added to help those whose network connection’s upload affects their download. When DC++ sends data through a socket (“a connection to another computer”), it every so often stops and waits until the other side says “yes, I got the information, give me more”. What this option does is set the interval at which DC++ should stop sending and start waiting for that acknowledgement. More specifically, having this option on sets the interval (packet size) to 1 KiB, versus 16 KiB when it’s off. This means that your drive will work more (DC++ will read from it more often) and the speed of your downloads and uploads will be lower. [I have no idea why this option was removed. It won’t come back in the near future, as far as I know.]
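As an illustration of the trade-off (this is a sketch, not DC++’s actual code — the constant names and the upload function are made up), a send loop with a smaller chunk size hits the disk more often and pauses more frequently per megabyte sent:

```python
# Illustrative chunk sizes matching the values described in the post.
SMALL_SEND_BUFFER = 1 * 1024    # "small send buffer" enabled: 1 KiB chunks
NORMAL_SEND_BUFFER = 16 * 1024  # option disabled: 16 KiB chunks

def upload(file_obj, sock, chunk_size):
    """Read chunk_size bytes at a time from disk and send them.

    A smaller chunk_size means more read() calls and more frequent
    send/wait cycles for the same amount of data, which is the cost
    (and the throttling effect) the old option traded on.
    """
    while True:
        chunk = file_obj.read(chunk_size)
        if not chunk:
            break
        sock.sendall(chunk)
```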

The socket write and read buffer options are different from the old buffer option: they set something called a “TCP window”.

TCP uses what is called the “congestion window”, or CWND, to determine how many packets can be sent at one time. The larger the congestion window size, the higher the throughput. The TCP “slow start” and “congestion avoidance” algorithms determine the size of the congestion window. The maximum congestion window is related to the amount of buffer space that the kernel allocates for each socket. For each socket, there is a default value for the buffer size, which can be changed by the program using a system library call just before opening the socket. There is also a kernel-enforced maximum buffer size. The buffer size can be adjusted for both the send and receive ends of the socket.
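That “system library call just before opening the socket” might look roughly like this (a minimal sketch, not DC++’s actual code; the 64 KiB value is an arbitrary placeholder, and the right value depends on your link, as discussed below):

```python
import socket

BUF_SIZE = 64 * 1024  # placeholder; tune for your link's bandwidth-delay product

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger kernel buffers *before* connecting; the kernel may
# clamp the request to its enforced maximum.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Read back what the kernel actually granted (Linux typically reports
# double the requested size to account for bookkeeping overhead).
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```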

To get maximal throughput it is critical to use optimal TCP send and receive socket buffer sizes for the link you are using. If the buffers are too small, the TCP congestion window will never fully open up. If the receiver buffers are too large, TCP flow control breaks and the sender can overrun the receiver, which will cause the TCP window to shut down. This is likely to happen if the sending host is faster than the receiving host. Overly large windows on the sending side are not a big problem as long as you have excess memory.

The annoying thing here is that there is no single “correct” or “incorrect” buffer size; you need to experiment for yourself. Having said that, you can approximate it.

Take your maximum throughput (e.g., 10 Mbit/s) and multiply it by the “latency” between you and the other users. You can find out the latency by typing “cmd /k ping other_users_ip” into Run in Windows and looking at the round-trip time. (Note that the other party may be blocking pings.) (Latency is what you normally see as “lag” in games.) All this means there is no general formula that fits every user. In any case, if you have a ping time of 50 ms to most users, you should input (10 Mbit / 8 bits) * 0.05 s = 62500 bytes in DC++. The default value DC++ uses is 65535, so that’s quite close.
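This rule of thumb is the bandwidth-delay product, and the arithmetic from the example above can be checked like so (the function name is mine, purely for illustration):

```python
def bdp_bytes(throughput_bits_per_s: float, rtt_s: float) -> int:
    """Bandwidth-delay product: roughly how many bytes must be in
    flight to keep the link full, hence a buffer-size approximation."""
    return int(throughput_bits_per_s / 8 * rtt_s)

# The post's example: a 10 Mbit/s link with a 50 ms round-trip time.
print(bdp_bytes(10_000_000, 0.050))  # → 62500
```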

Disclaimer: I may be off by “a little” or “a lot” in places, but I think I got the bigger picture fairly correct.

3 Responses to Sockets and buffers

  1. djoffset says:

    In general you shouldn’t need to worry about these, since the TCP window size is automatically adjusted using the Nagle algorithm. That is unless you enable the TCP_NODELAY socket option.
    Personally I don’t understand why a *user* would want to screw around with these options. I bet their network connection is fine like it is on all other applications.

  2. emtee says:

    djoffset, believe me, the number of support issues proves that the Small Send Buffer option helped many users’ speed problems. There are many parts of the world where the QoS of internet services is still very bad, and this option could produce spectacular results in some cases.

    The reason why this option was removed is still unclear, despite the numerous complaints about its absence in the support forum from the time of the 0.68 release through last year, until the forum went down.

    You’re right though: the support posts proved that *users* who suffer speed problems can’t do anything with these *magical* new options (without any help), so most of them stick with older versions of DC++.

    I think when the support site is online again, the information in this post may deserve a FAQ entry.

  3. djoffset says:

    So, basically what you are saying is that by forcing a low send buffer you are enforcing a lower upload, right?
    This sounds like it would create a lot more ACK packets than a regular flow to me.

    I would say it is better to implement these limits by regulating how often (and how much) you call send() (or recv()) in the application.
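    [Editor’s note: djoffset’s suggestion of regulating how often, and how much, the application calls send() might look roughly like the following sketch. This is not DC++ code; the function name and rate parameter are made up for illustration.]

```python
import time

def throttled_send(sock, data, bytes_per_s, chunk_size=16 * 1024):
    """Send data in fixed-size chunks, sleeping between send() calls so
    the average rate stays at or below bytes_per_s (illustrative only)."""
    interval = chunk_size / bytes_per_s  # seconds budgeted per chunk
    for i in range(0, len(data), chunk_size):
        start = time.monotonic()
        sock.sendall(data[i:i + chunk_size])
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
```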
