I'm trying to use SCTP for an application and everything seems to work as
expected, until I noticed that there was data loss when I tried to send larger
amounts of data (from a few hundred kilobytes upwards) in a tight sequence
across the association.
I'm using a fairly tight loop that reads ~1 KB chunks of data from a file and
sends each one with sctp_sendmsg(). I first wondered whether there was a bug in
my non-blocking implementation, but the data loss persisted when I switched to
blocking mode. Moreover, when I sum up the return values of sctp_sendmsg(), the
total matches the amount of data I intended to send, and no error is ever
returned. However, observing the traffic with Ethereal, I noticed that not all
messages passed to sctp_sendmsg() actually make it to the interface.
The losses seem to be arbitrary, ranging from no loss at all (for instance over
2 MB of data) to more than 50% of the messages.
Is this a known problem, or, on the other hand, has this been tested well
enough that I should assume the problem lies somewhere outside lksctp?
I use the one-to-many style socket and the data transfer happens in-order.
I'm using Linux kernel 2.6.1 with lksctp-tools-2.6.0-test7-0.7.4. Currently I'm
testing only locally, i.e. sender and receiver are on the same computer, but
they communicate over the host's eth0 interface.