Hello,
The default example for the synchronous UDP daytime server can block indefinitely if a datagram never arrives. Since UDP is inherently unreliable, this is not a rare or exotic corner case; tolerating lost datagrams is one of the reasons UDP is chosen in the first place.
This problem does not arise with TCP, since TCP has built-in failure detection when a "connection" breaks. A UDP socket, however, does not have to be "connected" even in the limited sense that UDP supports.
I would like to ask that a timeout argument be added to the "receive_from" function.
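Something along these lines, where the timeout overload is purely hypothetical and does not exist in Boost.Asio today:

boost::system::error_code ec;
// Hypothetical overload: give up if no datagram arrives within 5 seconds.
std::size_t n = socket.receive_from(
    boost::asio::buffer(recv_buf), sender_endpoint,
    std::chrono::seconds(5), ec);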
The same effect can be achieved using:
{
    // Run the io_context on a background thread so the future-based
    // asynchronous receive below can make progress.
    std::thread t([&]() { io_context.run(); });
    t.detach();

    auto recv_length = socket.async_receive_from(
        boost::asio::buffer(recv_buf), sender_endpoint, 0,
        boost::asio::use_future);

    // Only print the datagram if it arrived within 5 seconds.
    if (recv_length.wait_for(std::chrono::seconds(5)) != std::future_status::timeout) {
        std::cout.write(recv_buf.data(), recv_length.get());
    }
}
However, this requires switching from "receive_from" to "async_receive_from" and spawning a thread, which is overkill for a simple synchronous application that is happy to lose a packet that might be outdated anyway.
In plain C sockets this can be achieved by setting SO_RCVTIMEO on a socket and adding a check in the main reading loop.
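A minimal sketch of that approach on a POSIX system is below; the socket descriptor, buffer and 5-second value are illustrative assumptions, not taken from the Asio example:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <cerrno>
#include <cstddef>
#include <cstdio>

bool recv_with_timeout(int sock, char* buf, std::size_t len)
{
    // Ask the kernel to make recv() return after 5 seconds without data.
    timeval tv{};
    tv.tv_sec = 5;
    tv.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    ssize_t n = recv(sock, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
    {
        // The timeout expired; the caller can retry or drop the packet.
        std::fprintf(stderr, "receive timed out\n");
        return false;
    }
    return n >= 0;
}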
I have done a bit of digging into the code.
It seems that what is needed is to add a timeout check in socket_ops.ipp, in the functions sync_recv, sync_recvfrom and sync_recvmsg: instead of waiting on the socket indefinitely, the internal loop would also check whether a timeout has expired and give up when it has.
Setting the socket option (SO_RCVTIMEO) would guarantee that the loop checks the timeout condition at least once.
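For illustration, a rough sketch of what such a loop might look like follows. It mimics the general shape of the blocking-receive loops in socket_ops.ipp rather than quoting them, assumes poll_read takes a millisecond timeout, and the timeout_msec parameter and timed_out handling are hypothetical additions:

for (;;)
{
  // Try to complete the receive without blocking.
  signed_size_type bytes = socket_ops::recvfrom(
      s, bufs, count, flags, addr, addrlen, ec);
  if (bytes >= 0)
    return bytes;

  // Give up on real errors, or if the caller handles blocking itself.
  if ((state & user_set_non_blocking)
      || (ec != boost::asio::error::would_block
        && ec != boost::asio::error::try_again))
    return socket_error_retval;

  // Hypothetical change: wait with a finite timeout instead of forever,
  // and report a timeout to the caller when it expires.
  if (socket_ops::poll_read(s, 0, timeout_msec, ec) == 0)
  {
    ec = boost::asio::error::timed_out;
    return socket_error_retval;
  }
}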