asio-users Mailing List for asio C++ library (Page 7)
From: Allen <all...@gm...> - 2017-07-30 22:55:39
|
> Allen, instead of all these Ref conters that are atomically incremented and decremented, couldn't I just make one stand per socket and post() a shutdown or close, right into this strand? I'm sure there is more than one way to do it. The refcounting ensures that close is not called until all of the pending async handlers are called. It might work without, I don't know--you could certainly try it. On Sun, Jul 30, 2017 at 5:30 PM, Benjamin Richner <be...@bl...> wrote: > Hi Vinnie, Allen > > I've been following your explanation of the shutdown sequence closely. It > helps me a lot with understanding a clean shutdown. Allen, instead of all > these Ref conters that are atomically incremented and decremented, couldn't > I just make one stand per socket and post() a shutdown or close, right into > this strand? > > Then the only problem that remains is what Vinnie said, if an acceptor has a > pending completion that is being executed after you iterated through the > socket list and closed them all. But that could be solved by implementing in > the registration function that adds sockets to the list that after closing > all sockets, a bool 'reject' is set to true and any further registration to > the list immediately shuts down/closes the socket by posting to its > corresponding strand. Of course it'd need some spinlocks or mutexes. > > Oh, that reminds me, you mentioned that the documentation says that all > functions of the socket class must not be executed concurrently ("Shared > objects: Unsafe."). Stupid question, but does that mean that sockets cannot > be true full duplex on the application level because I am not allowed to > execute read and write simultaneously? > > Regards, > Benjamin > > > On 30.07.2017 22:10, Allen wrote: >> >> I think it's ok to close the acceptor asynchronously. That triggers >> the async_accept to call its completion handler with an error >> condition, so either way, upon acceptor close or upon a new >> connection, the async_accept handler is going to get called. Note >> that the async_accept also adds work to the queue that prevents the >> run threads from returning, so the signals.async_wait is not needed as >> long as the io_service has a listening socket, however the >> signals.async_wait is required with a server that only initiates >> outgoing connections, and works just as well with a server that has a >> listening socket. >> >> >> On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> >> wrote: >>> >>> On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote: >>>> >>>> ok, looking closer again... >>> >>> I admit, this sounds very convincing :) >>> >>> One last point, what if the acceptor has a pending completion at the >>> time the SIGTERM handler is invoked? >>> >>> If it iterates the list before the completion is processed, then a new >>> connection will be established after you have already closed the >>> existing connections. >>> >>> Thanks >>> >>> >>> ------------------------------------------------------------------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >>> _______________________________________________ >>> asio-users mailing list >>> asi...@li... >>> https://lists.sourceforge.net/lists/listinfo/asio-users >>> _______________________________________________ >>> Using Asio? 
List your project at >>> http://think-async.com/Asio/WhoIsUsingAsio >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> asio-users mailing list >> asi...@li... >> https://lists.sourceforge.net/lists/listinfo/asio-users >> _______________________________________________ >> Using Asio? List your project at >> http://think-async.com/Asio/WhoIsUsingAsio > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
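A minimal sketch of the strand-per-socket idea discussed above, assuming the Boost.Asio 1.64-era API (io_service, io_service::strand); the connection class and its members are hypothetical and not code from this thread:

    // Hypothetical connection type: every socket operation, including the
    // close triggered from the SIGTERM path, is funneled through one strand.
    #include <memory>
    #include <boost/asio.hpp>

    class connection : public std::enable_shared_from_this<connection>
    {
    public:
        explicit connection(boost::asio::io_service& ios)
            : socket_(ios), strand_(ios) {}

        boost::asio::ip::tcp::socket& socket() { return socket_; }

        // Safe to call from any thread: the actual close runs on the strand,
        // so it cannot race the completion handlers, which are wrapped in
        // the same strand.
        void stop()
        {
            auto self = shared_from_this();
            strand_.post([self]
            {
                boost::system::error_code ec;
                self->socket_.close(ec); // pending ops finish with operation_aborted
            });
        }

        // Call this on the strand as well (e.g. via strand_.post) so the
        // initiating call is serialized too.
        void start_read()
        {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buf_),
                strand_.wrap([self](boost::system::error_code ec, std::size_t)
                {
                    if (ec)
                        return;          // includes the error produced by stop()
                    self->start_read();  // re-arm
                }));
        }

    private:
        boost::asio::ip::tcp::socket socket_;
        boost::asio::io_service::strand strand_;
        char buf_[4096];
    };

Here the shared_ptr captured by each handler plays roughly the role of the explicit reference count: the object, and with it the socket, stays alive until every pending handler has run, while the strand guarantees close() never overlaps another member-function call.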
From: Vinnie F. <vin...@gm...> - 2017-07-30 22:30:32
|
On Sun, Jul 30, 2017 at 2:30 PM, Benjamin Richner <be...@bl...> wrote:
> Hi Vinnie, Allen

Greetings!

> I've been following your explanation of the shutdown sequence closely. It
> helps me a lot with understanding a clean shutdown.

It's very tricky to get right. So difficult that at one job I loudly announced victory over implementing a very important feature called... "exit" :)

> Oh, that reminds me, you mentioned that the documentation says that all
> functions of the socket class must not be executed concurrently ("Shared
> objects: Unsafe."). Stupid question, but does that mean that sockets cannot
> be true full duplex on the application level because I am not allowed to
> execute read and write simultaneously?

This is a common misconception. Concurrent calls to socket member functions are orthogonal to full duplex support. You can have pending asynchronous read and write operations *active*, but you can't call async_read and async_write *concurrently*. In fact you can't call any member function of ip::tcp::socket concurrently; that includes `cancel` and `close`, and more obviously the destructor. This is a fine point missed by many who use the library. My advice is to VERY carefully read and re-read the Asio docs... it's all there. Clear your mind of assumptions before embarking on a reading.
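A small sketch of the distinction drawn above: one read and one write can be *pending* at the same time (full duplex), as long as the initiating calls and the completion handlers are serialized, here by a strand. The session type and buffer sizes are hypothetical, assuming the Boost.Asio 1.64-era API:

    #include <array>
    #include <memory>
    #include <boost/asio.hpp>

    struct session
    {
        explicit session(boost::asio::io_service& ios) : sock(ios), strand(ios) {}
        boost::asio::ip::tcp::socket sock;
        boost::asio::io_service::strand strand;
        std::array<char, 1024> rx{};
        std::array<char, 1024> tx{};
    };

    void start_full_duplex(std::shared_ptr<session> self)
    {
        self->strand.post([self]
        {
            // Both initiating calls run on the strand, one after the other,
            // so no two member functions of the socket are called concurrently.
            boost::asio::async_read(self->sock, boost::asio::buffer(self->rx),
                self->strand.wrap(
                    [self](boost::system::error_code ec, std::size_t) {
                        if (!ec) { /* consume rx, then re-arm the read */ }
                    }));
            boost::asio::async_write(self->sock, boost::asio::buffer(self->tx),
                self->strand.wrap(
                    [self](boost::system::error_code ec, std::size_t) {
                        if (!ec) { /* queue the next outbound buffer */ }
                    }));
            // After this block returns, the read and the write are pending
            // at the same time; only their completion handlers are serialized.
        });
    }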
From: Benjamin R. <be...@bl...> - 2017-07-30 21:44:59
|
Hi Vinnie, Allen I've been following your explanation of the shutdown sequence closely. It helps me a lot with understanding a clean shutdown. Allen, instead of all these Ref conters that are atomically incremented and decremented, couldn't I just make one stand per socket and post() a shutdown or close, right into this strand? Then the only problem that remains is what Vinnie said, if an acceptor has a pending completion that is being executed after you iterated through the socket list and closed them all. But that could be solved by implementing in the registration function that adds sockets to the list that after closing all sockets, a bool 'reject' is set to true and any further registration to the list immediately shuts down/closes the socket by posting to its corresponding strand. Of course it'd need some spinlocks or mutexes. Oh, that reminds me, you mentioned that the documentation says that all functions of the socket class must not be executed concurrently ("Shared objects: Unsafe."). Stupid question, but does that mean that sockets cannot be true full duplex on the application level because I am not allowed to execute read and write simultaneously? Regards, Benjamin On 30.07.2017 22:10, Allen wrote: > I think it's ok to close the acceptor asynchronously. That triggers > the async_accept to call its completion handler with an error > condition, so either way, upon acceptor close or upon a new > connection, the async_accept handler is going to get called. Note > that the async_accept also adds work to the queue that prevents the > run threads from returning, so the signals.async_wait is not needed as > long as the io_service has a listening socket, however the > signals.async_wait is required with a server that only initiates > outgoing connections, and works just as well with a server that has a > listening socket. > > > On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> wrote: >> On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote: >>> ok, looking closer again... >> I admit, this sounds very convincing :) >> >> One last point, what if the acceptor has a pending completion at the >> time the SIGTERM handler is invoked? >> >> If it iterates the list before the completion is processed, then a new >> connection will be established after you have already closed the >> existing connections. >> >> Thanks >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> asio-users mailing list >> asi...@li... >> https://lists.sourceforge.net/lists/listinfo/asio-users >> _______________________________________________ >> Using Asio? List your project at >> http://think-async.com/Asio/WhoIsUsingAsio > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
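A rough sketch of the reject-on-registration idea in the message above, using std::mutex and std::set rather than a spinlock; the class and member names are hypothetical. Note that, as discussed elsewhere in this thread, the close() calls here still have to be serialized against the socket's other member-function calls (for example by posting through each socket's strand); that detail is elided for brevity:

    #include <memory>
    #include <mutex>
    #include <set>
    #include <boost/asio.hpp>

    // Hypothetical registry: once shut down, any late registration (for
    // example from an acceptor completion already in flight) is closed
    // immediately instead of being added.
    class socket_registry
    {
    public:
        void add(std::shared_ptr<boost::asio::ip::tcp::socket> s)
        {
            std::lock_guard<std::mutex> lock(m_);
            if (rejecting_)
            {
                boost::system::error_code ec;
                s->close(ec);
                return;
            }
            sockets_.insert(s);
        }

        void remove(const std::shared_ptr<boost::asio::ip::tcp::socket>& s)
        {
            std::lock_guard<std::mutex> lock(m_);
            sockets_.erase(s);
        }

        // Called once from the SIGTERM handler: close everything and refuse
        // any socket that shows up afterwards.
        void shutdown_all()
        {
            std::lock_guard<std::mutex> lock(m_);
            rejecting_ = true;
            for (auto const& s : sockets_)
            {
                boost::system::error_code ec;
                s->close(ec);
            }
            sockets_.clear();
        }

    private:
        std::mutex m_;
        bool rejecting_ = false;
        std::set<std::shared_ptr<boost::asio::ip::tcp::socket>> sockets_;
    };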
From: Allen <all...@gm...> - 2017-07-30 20:10:50
|
I think it's ok to close the acceptor asynchronously. That triggers the async_accept to call its completion handler with an error condition, so either way, upon acceptor close or upon a new connection, the async_accept handler is going to get called. Note that the async_accept also adds work to the queue that prevents the run threads from returning, so the signals.async_wait is not needed as long as the io_service has a listening socket, however the signals.async_wait is required with a server that only initiates outgoing connections, and works just as well with a server that has a listening socket. On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> wrote: > On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote: >> ok, looking closer again... > > I admit, this sounds very convincing :) > > One last point, what if the acceptor has a pending completion at the > time the SIGTERM handler is invoked? > > If it iterates the list before the completion is processed, then a new > connection will be established after you have already closed the > existing connections. > > Thanks > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
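A minimal accept loop illustrating the behaviour described above: closing the acceptor makes the pending async_accept complete with an error, so the handler simply stops re-arming. Function and variable names are hypothetical:

    #include <memory>
    #include <boost/asio.hpp>

    void do_accept(boost::asio::ip::tcp::acceptor& acceptor,
                   boost::asio::io_service& ios)
    {
        auto sock = std::make_shared<boost::asio::ip::tcp::socket>(ios);
        acceptor.async_accept(*sock,
            [&acceptor, &ios, sock](boost::system::error_code ec)
            {
                if (ec)
                {
                    // acceptor.close() was called (e.g. from the SIGTERM
                    // handler): the pending accept completes with an error
                    // and we simply do not re-arm, so the loop ends.
                    return;
                }
                // ... hand *sock off to a session object here ...
                do_accept(acceptor, ios); // keep listening
            });
    }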
From: Vinnie F. <vin...@gm...> - 2017-07-30 17:22:59
|
On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote: > ok, looking closer again... I admit, this sounds very convincing :) One last point, what if the acceptor has a pending completion at the time the SIGTERM handler is invoked? If it iterates the list before the completion is processed, then a new connection will be established after you have already closed the existing connections. Thanks |
From: Allen <all...@gm...> - 2017-07-30 17:13:51
|
ok, looking closer again...

I keep a list of all created sockets, whether open or closed, and call "Stop" and "WaitStop" on all of them in my SIGTERM handler.

Stop does the following:
- acquires a spin lock
- checks a socket stopping flag (an atomic_int) and returns if it is set
- sets the socket stopping flag
- calls timer.cancel()
- calls socket.cancel()
- if there are no pending async operations, it posts an async call to DoStop(); otherwise, the call to DoStop() is deferred until one of the already pending async handlers calls DecRef

IncRef is called to initiate any async operation. It acquires the spin lock, and if the socket stopping flag is set it returns immediately; otherwise it increments the async pending count and initiates the async operation.

DecRef is called at the beginning of all async handlers. It acquires the spin lock, decrements the async pending count, and if the stopping flag is set and the async pending count is zero, it calls DoStop.

DoStop calls socket.close and sets a socket closed flag (an atomic_int).

WaitStop waits on the socket closed flag with a usleep(100) in between each check.

To test the code for race conditions, I have an option to compile in strategic calls to sleep(1).

On Sun, Jul 30, 2017 at 12:41 PM, Allen <all...@gm...> wrote:
>> How do you maintain the list of sockets? Is it protected by a mutex?
> When do you insert the socket in the list? When do you remove the
> socket from the list? Do these happen while holding the mutex?
>
> of course
>
> On Sun, Jul 30, 2017 at 12:24 PM, Vinnie Falco <vin...@gm...> wrote:
>> On Sun, Jul 30, 2017 at 9:13 AM, Allen <all...@gm...> wrote:
>>> I have clean shutdown working fine.
>>>
>>> In the signals.async_wait handler, I call close() on all open sockets,
>>> including the acceptor.
>>
>> How do you know there aren't any race conditions? What happens if there
>> is already a completion for the acceptor in the queue at the time that
>> you close all sockets? When you call close on the acceptor are you
>> doing it in the right context (i.e. from a thread invoking
>> io_service::run, possibly strand-wrapped)?
>>
>> How do you maintain the list of sockets? Is it protected by a mutex?
>> When do you insert the socket in the list? When do you remove the
>> socket from the list? Do these happen while holding the mutex?
>>
>> I've seen and implemented many designs for managing the list of open
>> sockets, it's very tricky to get right and very easy to have a race
>> condition that you don't know about.
>>
>> Thanks
>>
>> ------------------------------------------------------------------------------
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>> _______________________________________________
>> asio-users mailing list
>> asi...@li...
>> https://lists.sourceforge.net/lists/listinfo/asio-users
>> _______________________________________________
>> Using Asio? List your project at
>> http://think-async.com/Asio/WhoIsUsingAsio
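A condensed sketch of the IncRef/DecRef scheme described above, using std::mutex instead of a spinlock; the function names mirror the description but the code is illustrative, not Allen's. As with the original design, whether socket.close() may run outside an io_service thread is exactly the point debated later in the thread:

    #include <atomic>
    #include <mutex>
    #include <boost/asio.hpp>

    // Skeleton: close() is deferred until every pending async operation
    // has delivered its handler.
    class conn
    {
    public:
        explicit conn(boost::asio::io_service& ios) : socket_(ios), timer_(ios) {}

        bool IncRef()                    // call before starting an async op
        {
            std::lock_guard<std::mutex> lock(m_);
            if (stopping_) return false; // refuse new work once stopping
            ++pending_;
            return true;
        }

        void DecRef()                    // call at the top of every handler
        {
            std::lock_guard<std::mutex> lock(m_);
            if (--pending_ == 0 && stopping_ && !closed_) DoStop();
        }

        void Stop()                      // called from the SIGTERM path
        {
            std::lock_guard<std::mutex> lock(m_);
            if (stopping_) return;
            stopping_ = true;
            boost::system::error_code ec;
            timer_.cancel(ec);
            socket_.cancel(ec);
            if (pending_ == 0) DoStop(); // otherwise DecRef will do it
        }

        bool stopped() const { return closed_.load(); } // polled by WaitStop

    private:
        void DoStop()                    // m_ is held by the caller
        {
            boost::system::error_code ec;
            socket_.close(ec);
            closed_ = true;
        }

        std::mutex m_;
        int pending_ = 0;
        bool stopping_ = false;
        std::atomic<bool> closed_{false};
        boost::asio::ip::tcp::socket socket_;
        boost::asio::deadline_timer timer_;
    };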
From: Allen <all...@gm...> - 2017-07-30 16:42:01
|
> How do you maintain the list of sockets? Is it protected by a mutex? When do you insert the socket in the list? When do you remove the socket from the list? Do these happen while holding the mutex? of course On Sun, Jul 30, 2017 at 12:24 PM, Vinnie Falco <vin...@gm...> wrote: > On Sun, Jul 30, 2017 at 9:13 AM, Allen <all...@gm...> wrote: >> I have clean shutdown working fine. >> >> In the signals.async_wait handler, I call close() on all open sockets, >> including the acceptor. > > How do you know there aren't any race conditions? What happen if there > is already a completion for the acceptor in the queue at the time that > you close all sockets? When you call close on the acceptor are you > doing it in the right context (i.e. from a thread invoking > io_service::run, possibly strand-wrapped)? > > How do you maintain the list of sockets? Is it protected by a mutex? > When do you insert the socket in the list? When do you remove the > socket from the list? Do these happen while holding the mutex? > > I've seen and implemented many designs for managing the list of open > sockets, its very tricky to get right and very easy to have a race > condition that you don't know about. > > Thanks > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
From: Allen <all...@gm...> - 2017-07-30 16:40:43
|
yes, checking again, my graceful shutdown code is a fair bit more complicated than I said. When starting any async operation, I increment a refcount, which is then DecRef the refcount when the async handler is called. On shutdown, socket.close() is called immediately if the refcount is zero, otherwise the shutdown is flagged and the DecRef function calls socket.close when the refcount reaches zero. Of course, all of the reference counting and checking as well as the socket.close are protected by a mutex (which in this case is a fast spin lock, not an OS mutex). On Sun, Jul 30, 2017 at 12:13 PM, Vinnie Falco <vin...@gm...> wrote: > On Sun, Jul 30, 2017 at 8:02 AM, Yuri Timenkov <yu...@ti...> wrote: >> In practice if operations on socket are naturally serialized but not >> necessary dispatched via io_service (for whatever reasons), calling >> shutdown() is a lesser evil (IIRC it doesn’t do anything to ASIO object, > > You are correct that if the call to close() is already made in the > proper context, then it is not necessary to post through the > io_service. For example if strand::running_in_this_thread returns > `true` then close() may be called directly. Or if the caller is in an > implicit strand context (only one thread in the io_service, and the > caller is being invoked from a stack frame that contains the call to > io_service::run) then a post is similarly unnecessary. > > However, consider the scenario described by the original poster. He > wants to close all the existing sockets. That implies the operation is > not being performed from an io_service thread or strand, but rather an > unrelated control thread. Perhaps the thread which is executing > main(). > > In this case, It is necessary to invoke socket::close using the same > method as that used to invoke completion handlers used with the > socket. This could be a naked call to post() if there is only one > io_service thread, or a call to post() with a strand-wrapped lambda if > the socket uses a strand. > > In the latest version of Asio with Net-TS this would be done by > calling sock.get_executor().post(...) > >> I’m not aware of any problems on Linux or Windows >> (and it seems StackOverflow’s wisdom suggests the same: >> https://stackoverflow.com/a/27790293). > > There are plenty of wrong answers on Stack Overflow. It really is very > simple, if you want well defined behavior then you must not call > member functions of basic_socket concurrently. It doesn't matter that > you "think" its safe. Undefined behavior should always be avoided if > possible. > > Thanks > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
From: Vinnie F. <vin...@gm...> - 2017-07-30 16:24:39
|
On Sun, Jul 30, 2017 at 9:13 AM, Allen <all...@gm...> wrote:
> I have clean shutdown working fine.
>
> In the signals.async_wait handler, I call close() on all open sockets,
> including the acceptor.

How do you know there aren't any race conditions? What happens if there is already a completion for the acceptor in the queue at the time that you close all sockets? When you call close on the acceptor are you doing it in the right context (i.e. from a thread invoking io_service::run, possibly strand-wrapped)?

How do you maintain the list of sockets? Is it protected by a mutex? When do you insert the socket in the list? When do you remove the socket from the list? Do these happen while holding the mutex?

I've seen and implemented many designs for managing the list of open sockets; it's very tricky to get right and very easy to have a race condition that you don't know about.

Thanks
From: Allen <all...@gm...> - 2017-07-30 16:13:30
|
I have clean shutdown working fine. On sever initialization, I create a boost::asio::signal_set signals(io_service) and then call signals.add(SIGINT), signals.add(SIGTERM), and signals.async_wait(...) If the server listens at a port, I create a boost::asio::ip::tcp::acceptor(io_service), then call acceptor.open(), acceptor.bind(), acceptor.listen() and acceptor.async_accept(). In the async_accept handler, if acceptor.is_open(), then I call acceptor.async_accept() again. I then call io_service.run() with one or more threads. Those threads keep a list of open sockets, whether opened through async_accept or through boost::asio::ip::tcp::socket(io_service).connect(). When I eventually want to shut down, I call raise(SIGTERM). I then join() all of the threads that had called run(). When all the join calls complete, the server has been gracefully shutdown and all the objects can be destroyed. In the signals.async_wait handler, I call close() on all open sockets, including the acceptor. Note that close may cause some async read and write handlers to be called with error conditions, and these async handlers need to recognize this as part of the graceful shutdown process. This works because the signals.async_wait call ensures there is always work to do, so io_service.run() will block until SIGTERM is raised and all of the async handlers finish. Note that at no time do I call io_service.stop() or make any calls on the io_service other than run(). On Sun, Jul 30, 2017 at 10:32 AM, Vinnie Falco <vin...@gm...> wrote: >>>>On Sun, Jul 30, 2017 at 6:53 AM, Benjamin Richner <be...@bl...> wrote: >> So Yuri seems to be correct - it is not safe to destroy io_service after >> calling io_service::stop(). It would be nice if that was added to the >> documentation if that is indeed the case. In my eyes, as an unknowing >> library user, the stop() call made sense. > > The documentation is crystal clear about this, io_service::stop makes > no claims that it cancels pending I/O: > <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/io_service/stop.html > >> That also means I have to be careful to close all sockets > > You have these choices: > > * Call close on every basic_io_object derived class (socket is such > class, but also consider timers, resolvers, and acceptors) > > * Call cancel on all I/O objects > > * Destroy all I/O objects > >> if there are still pending operations, then deleting io_service::work does not >> let all threads return from run() and then my main thread is blocked on the >> thread join call forever. > > That is correct > >> I'll probably keep a list of all the existing >> sockets somewhere and close them all when my program exits, just to be sure. > > Just a heads up, this is more difficult than it sounds, because > sockets can be created and destroyed as part of the normal process of > establishing and releasing connections. I have not found a > satisfactory solution to this problem, I'm very much interested in > hearing from others about potential solutions. > >> That brings me to my last question: Does the library handle double close >> gracefully? I.e. calling socket.close(ec) twice in a row on the same socket. >> Or is that undefined behaviour? I think that would be good to know. I will >> probably have to save some state to avoid double close. > > The documentation is again clear on this: > "Note that, even if the function indicates an error, the underlying > descriptor is closed." 
> <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/basic_stream_socket/close/overload2.html> > > Somewhat unrelated, have you seen this? > <https://github.com/boostorg/beast> > > Thanks > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
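A skeleton of the startup and shutdown sequence laid out in the message above (signal_set, acceptor setup, run threads, raise(SIGTERM), join). The port number and close_all_sockets() are hypothetical stand-ins, and the accept loop is omitted:

    #include <csignal>
    #include <thread>
    #include <vector>
    #include <boost/asio.hpp>

    // Hypothetical stand-in: iterate the socket list and close each one,
    // in whatever serialized way the rest of this thread discusses.
    void close_all_sockets() {}

    int main()
    {
        boost::asio::io_service ios;

        // The pending async_wait counts as outstanding work, so run() will
        // not return until a signal is raised and all handlers have drained.
        boost::asio::signal_set signals(ios);
        signals.add(SIGINT);
        signals.add(SIGTERM);

        boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), 8080);
        boost::asio::ip::tcp::acceptor acceptor(ios);
        acceptor.open(ep.protocol());
        acceptor.bind(ep);
        acceptor.listen();
        // ... start the async_accept loop here ...

        signals.async_wait([&](boost::system::error_code, int /*signo*/)
        {
            boost::system::error_code ec;
            acceptor.close(ec);   // pending accept completes with an error
            close_all_sockets();  // pending reads/writes complete with errors
        });

        std::vector<std::thread> threads;
        for (int i = 0; i != 4; ++i)
            threads.emplace_back([&ios] { ios.run(); });

        // ... to shut down gracefully, some thread eventually calls:
        // std::raise(SIGTERM);

        for (auto& t : threads)
            t.join();             // returns only after every handler has run
    }                             // io_service destroyed last, after run() returned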
From: Vinnie F. <vin...@gm...> - 2017-07-30 16:13:18
|
On Sun, Jul 30, 2017 at 8:02 AM, Yuri Timenkov <yu...@ti...> wrote: > In practice if operations on socket are naturally serialized but not > necessary dispatched via io_service (for whatever reasons), calling > shutdown() is a lesser evil (IIRC it doesn’t do anything to ASIO object, You are correct that if the call to close() is already made in the proper context, then it is not necessary to post through the io_service. For example if strand::running_in_this_thread returns `true` then close() may be called directly. Or if the caller is in an implicit strand context (only one thread in the io_service, and the caller is being invoked from a stack frame that contains the call to io_service::run) then a post is similarly unnecessary. However, consider the scenario described by the original poster. He wants to close all the existing sockets. That implies the operation is not being performed from an io_service thread or strand, but rather an unrelated control thread. Perhaps the thread which is executing main(). In this case, It is necessary to invoke socket::close using the same method as that used to invoke completion handlers used with the socket. This could be a naked call to post() if there is only one io_service thread, or a call to post() with a strand-wrapped lambda if the socket uses a strand. In the latest version of Asio with Net-TS this would be done by calling sock.get_executor().post(...) > I’m not aware of any problems on Linux or Windows > (and it seems StackOverflow’s wisdom suggests the same: > https://stackoverflow.com/a/27790293). There are plenty of wrong answers on Stack Overflow. It really is very simple, if you want well defined behavior then you must not call member functions of basic_socket concurrently. It doesn't matter that you "think" its safe. Undefined behavior should always be avoided if possible. Thanks |
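For reference, the executor-based form mentioned at the end of the message above looks roughly like this in newer Asio (Boost 1.66+ with the io_context/executor model); this is a sketch of the shape, not code from the thread, and the caller must still ensure the socket outlives the posted handler:

    #include <boost/asio.hpp>

    // Hand the close to the socket's own executor instead of calling it
    // directly from an unrelated control thread.
    void safe_close(boost::asio::ip::tcp::socket& sock)
    {
        boost::asio::post(sock.get_executor(),
            [&sock]
            {
                boost::system::error_code ec;
                sock.close(ec);
            });
    }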
From: Yuri T. <yu...@ti...> - 2017-07-30 15:52:47
|
Hi Vinnie, I agree that to be 100% safe ALL operations with socket should be done via io_service (or via strand if using thread pool). In other words, the advice with posting close() works if all other operations are already done ONLY through io_service. In practice if operations on socket are naturally serialized but not necessary dispatched via io_service (for whatever reasons), calling shutdown() is a lesser evil (IIRC it doesn’t do anything to ASIO object, but directly calls kernel). I’m not aware of any problems on Linux or Windows (and it seems StackOverflow’s wisdom suggests the same: https://stackoverflow.com/a/27790293). Regards, Yuri From: Vinnie Falco<mailto:vin...@gm...> Sent: den 30 juli 2017 16:59 To: asi...@li...<mailto:asi...@li...> Subject: Re: [asio-users] Segfault in ~io_service() > What I recommend is to call shutdown() instead. This is a call to the kernel > and should be naturally thread- and double-call-safe, and causes all pending > operation to abort. This is undefined behavior according to the Asio documentation: > Thread Safety > Distinct objects: Safe. > Shared objects: Unsafe. <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/ip__tcp/socket.html> If you want your program to be portable you cannot assume that ip::tcp::socket::shutdown is thread-safe especially if the documentation states otherwise in clear terms. Instead, post a call to `close` to the `io_service`, like this: void safe_close(ip::tcp::socket& sock) { sock.get_io_service().post( [&sock] { boost::system::error_code ec; sock.close(ec); }); } If you have a strand, you could write: void safe_close(io_service::strand& strand, ip::tcp::socket& sock) { if(strand.running_in_this_thread()) { error_code ec; sock.close(ec); return; } sock.get_io_service().post(strand.wrap( [&sock] { boost::system::error_code ec; sock.close(ec); })); } Thanks ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ asio-users mailing list asi...@li... https://lists.sourceforge.net/lists/listinfo/asio-users _______________________________________________ Using Asio? List your project at http://think-async.com/Asio/WhoIsUsingAsio |
From: Vinnie F. <vin...@gm...> - 2017-07-30 14:59:34
|
> What I recommend is to call shutdown() instead. This is a call to the kernel
> and should be naturally thread- and double-call-safe, and causes all pending
> operation to abort.

This is undefined behavior according to the Asio documentation:

> Thread Safety
> Distinct objects: Safe.
> Shared objects: Unsafe.

<http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/ip__tcp/socket.html>

If you want your program to be portable you cannot assume that ip::tcp::socket::shutdown is thread-safe especially if the documentation states otherwise in clear terms. Instead, post a call to `close` to the `io_service`, like this:

    void safe_close(ip::tcp::socket& sock)
    {
        sock.get_io_service().post(
            [&sock]
            {
                boost::system::error_code ec;
                sock.close(ec);
            });
    }

If you have a strand, you could write:

    void safe_close(io_service::strand& strand, ip::tcp::socket& sock)
    {
        if(strand.running_in_this_thread())
        {
            error_code ec;
            sock.close(ec);
            return;
        }
        sock.get_io_service().post(strand.wrap(
            [&sock]
            {
                boost::system::error_code ec;
                sock.close(ec);
            }));
    }

Thanks
From: Yuri T. <yu...@ti...> - 2017-07-30 14:52:36
|
Hi Benjamin, Glad that it helped. Double-close should be relatively safe: as I remember (worked with ASIO more than 5 years ago, but you may look into implementation, it’s a very small function) close() resets internal descriptor, so successive call will return error. This means you should be careful and prefer overloaded function which fills error_code rather than throws an exception. Another thing to keep in mind is that close() is not thread safe (because it modifies object state), so you should be careful calling it. What I recommend is to call shutdown() instead. This is a call to the kernel and should be naturally thread- and double-call-safe, and causes all pending operation to abort. After that you can just rely on automatic descriptor deallocation in the socket destructor (in other words you don’t need to call close()). Regards, Yuri From: Benjamin Richner<mailto:be...@bl...> Sent: den 30 juli 2017 15:54 To: asi...@li...<mailto:asi...@li...> Subject: Re: [asio-users] Segfault in ~io_service() Hi Yuri, Jeremy, Allen Thank you for your quick replies. I tried Yuri's advice and removed the io_service::stop() call and just deleted the io_service::work instead. So far, it hasn't segfaulted again. So Yuri seems to be correct - it is not safe to destroy io_service after calling io_service::stop(). It would be nice if that was added to the documentation if that is indeed the case. In my eyes, as an unknowing library user, the stop() call made sense. That also means I have to be careful to close all sockets and so on - if there are still pending operations, then deleting io_service::work does not let all threads return from run() and then my main thread is blocked on the thread join call forever. I'll probably keep a list of all the existing sockets somewhere and close them all when my program exits, just to be sure. That brings me to my last question: Does the library handle double close gracefully? I.e. calling socket.close(ec) twice in a row on the same socket. Or is that undefined behaviour? I think that would be good to know. I will probably have to save some state to avoid double close. Regards, Benjamin On 29.07.2017 20:58, Yuri Timenkov wrote: Hi Benjamin, io_service is the event loop which processes callbacks attached to async operations. Sockets themselves don’t require io_service (mostly), but handlers attached to async operations do because once I/O operation completes it posts handler to io_service which already destroyed in your case (sockets and io_service refer to each other by a plain reference, not a smart pointer). Even if you close socket, corresponding handlers for all pending operations will be executed with cancelled() error code (or similar). So safe way is to not call io_service::stop but rather destroy work object and tell all pending operations to cancel (e.g. close or shutdown sockets, cancel timers, etc). Only then when io_service::run gracefully returns it is safe to destroy it. Calling io_service::stop() immediately stops processing any callbacks and doesn’t wait for anything, you need to call io_service::run() again for safe shutdown. There is no way to tell asio to not do anything, because any pending operation is referenced from the kernel and the only way to cancel it is to close or shutdown the socket and wait until corresponding handler is executed. 
Regards, Yuri From: Benjamin Richner<mailto:be...@bl...> Sent: den 29 juli 2017 15:36 To: asi...@li...<mailto:asi...@li...> Subject: [asio-users] Segfault in ~io_service() Hi everyone, I am experiencing a segmentation fault when the io_service is being deleted (by means of going out of scope) and I am having trouble tracking down the source of the problem. Now, before I delete the io_service, I - delete the asio::io_service::work - call io_service.stop(); - let all my worker threads leave run() and return - join all my worker threads in the main thread (my only remaining thread running) Also worth pointing out, all my sockets are inside shared_ptr'd objects, as is customary with asio, and at the point in time when ~io_service() is called, none of these shared_ptr's should be around except those that were bound with std::bind and then passed into calls like asio::async_read. I do NOT - wait for completion of all my pending async calls (io_service.stop() stops whenever it does, and all run() calls return, but there could still be some pending work, but I don't want to execute it) - close all sockets First question: Does the library require me to do those two things before I call ~io_service()? Theoretically, in ~io_service(), all the remaining shared pointers should be cleaned up, and thus the sockets inside, and thus they should be automatically closed on their destructor call, right? Now, the crash I am experiencing is very finicky - I cannot reproduce it with a debugger, but I reproduced a stack trace by manual inspection and with the help of Dr. Mingw. I am using - Windows 10 - g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0 - asio (standalone, asio-1.10.6.zip) as linked on http://think-async.com/Asio/Download The following is the stack trace: [.\include\asio\detail\call_stack.hpp @ 108], static Value* top() return elem ? elem->value_ : 0; [.\include\asio\implhandler_alloc_hook.ipp @ 66], void asio_handler_deallocate(void* pointer, std::size_t size, ...) 
thread_info::deallocate(call_stack::top(), pointer, size); [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h) asio_handler_deallocate(p, s, asio::detail::addressof(h)); [.\include\asio\impl\read.hpp @ 479], template <typename AsyncReadStream, typename MutableBufferSequence, typename CompletionCondition, typename ReadHandler> inline void asio_handler_deallocate(void* pointer, std::size_t size, read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, ReadHandler>* this_handler) asio_handler_alloc_helpers::deallocate( [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h) asio_handler_deallocate(p, s, asio::detail::addressof(h)); [.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) ASIO_DEFINE_HANDLER_PTR(op), void reset() asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \ [.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void do_complete(io_service_impl* owner, operation* base, const asio::error_code& result_ec, std::size_t bytes_transferred) p.reset(); [.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy() func_(0, this, asio::error_code(), 0); [.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void win_iocp_io_service::shutdown_service() static_cast<win_iocp_operation*>(overlapped)->destroy(); [.\include\asio\detail\impl\service_registry.ipp @ 36], service_registry::~service_registry() service->shutdown_service(); [.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service() delete service_registry_; [...my own code] The variable 'elem' is not NULL, but it crashes on 'elem->value'. The crash seems to happen whenever there are pending asio::async_read calls that were not executed because I called io_service.stop(), but it is highly timing-dependent. I am still working on a minimal example that does not drag along my entire source code, but it's a hard to reproduce bug. Phew, that was a long post, thanks for making it here! I would appreciate your help greatly. Regards, Benjamin Richner ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ asio-users mailing list asi...@li...<mailto:asi...@li...> https://lists.sourceforge.net/lists/listinfo/asio-users _______________________________________________ Using Asio? List your project at http://think-async.com/Asio/WhoIsUsingAsio ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ asio-users mailing list asi...@li...<mailto:asi...@li...> https://lists.sourceforge.net/lists/listinfo/asio-users _______________________________________________ Using Asio? List your project at http://think-async.com/Asio/WhoIsUsingAsio |
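A two-line illustration of the non-throwing overloads recommended above; with the error_code forms, a repeated shutdown/close reports any problem through ec instead of throwing (whether the second close is a no-op or an error is version-dependent, so treat that detail as an assumption):

    #include <boost/asio.hpp>

    void shutdown_then_close(boost::asio::ip::tcp::socket& sock)
    {
        boost::system::error_code ec;
        sock.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); // aborts pending ops
        sock.close(ec);   // non-throwing overload
        sock.close(ec);   // safe to repeat: reported via ec, never an exception
    }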
From: Vinnie F. <vin...@gm...> - 2017-07-30 14:33:02
|
>>>On Sun, Jul 30, 2017 at 6:53 AM, Benjamin Richner <be...@bl...> wrote: > So Yuri seems to be correct - it is not safe to destroy io_service after > calling io_service::stop(). It would be nice if that was added to the > documentation if that is indeed the case. In my eyes, as an unknowing > library user, the stop() call made sense. The documentation is crystal clear about this, io_service::stop makes no claims that it cancels pending I/O: <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/io_service/stop.html > That also means I have to be careful to close all sockets You have these choices: * Call close on every basic_io_object derived class (socket is such class, but also consider timers, resolvers, and acceptors) * Call cancel on all I/O objects * Destroy all I/O objects > if there are still pending operations, then deleting io_service::work does not > let all threads return from run() and then my main thread is blocked on the > thread join call forever. That is correct > I'll probably keep a list of all the existing > sockets somewhere and close them all when my program exits, just to be sure. Just a heads up, this is more difficult than it sounds, because sockets can be created and destroyed as part of the normal process of establishing and releasing connections. I have not found a satisfactory solution to this problem, I'm very much interested in hearing from others about potential solutions. > That brings me to my last question: Does the library handle double close > gracefully? I.e. calling socket.close(ec) twice in a row on the same socket. > Or is that undefined behaviour? I think that would be good to know. I will > probably have to save some state to avoid double close. The documentation is again clear on this: "Note that, even if the function indicates an error, the underlying descriptor is closed." <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/basic_stream_socket/close/overload2.html> Somewhat unrelated, have you seen this? <https://github.com/boostorg/beast> Thanks |
From: Benjamin R. <be...@bl...> - 2017-07-30 13:53:54
|
Hi Yuri, Jeremy, Allen Thank you for your quick replies. I tried Yuri's advice and removed the io_service::stop() call and just deleted the io_service::work instead. So far, it hasn't segfaulted again. So Yuri seems to be correct - it is not safe to destroy io_service after calling io_service::stop(). It would be nice if that was added to the documentation if that is indeed the case. In my eyes, as an unknowing library user, the stop() call made sense. That also means I have to be careful to close all sockets and so on - if there are still pending operations, then deleting io_service::work does not let all threads return from run() and then my main thread is blocked on the thread join call forever. I'll probably keep a list of all the existing sockets somewhere and close them all when my program exits, just to be sure. That brings me to my last question: Does the library handle double close gracefully? I.e. calling socket.close(ec) twice in a row on the same socket. Or is that undefined behaviour? I think that would be good to know. I will probably have to save some state to avoid double close. Regards, Benjamin On 29.07.2017 20:58, Yuri Timenkov wrote: > > Hi Benjamin, > > io_service is the event loop which processes callbacks attached to > async operations. > > Sockets themselves don’t require io_service (mostly), but handlers > attached to async operations do because once I/O operation completes > it posts handler to io_service which already destroyed in your case > (sockets and io_service refer to each other by a plain reference, not > a smart pointer). Even if you close socket, corresponding handlers for > all pending operations will be executed with cancelled() error code > (or similar). > > So safe way is to not call io_service::stop but rather destroy work > object and tell all pending operations to cancel (e.g. close or > shutdown sockets, cancel timers, etc). Only then when io_service::run > gracefully returns it is safe to destroy it. > > Calling io_service::stop() immediately stops processing any callbacks > and doesn’t wait for anything, you need to call io_service::run() > again for safe shutdown. > > There is no way to tell asio to not do anything, because any pending > operation is referenced from the kernel and the only way to cancel it > is to close or shutdown the socket and wait until corresponding > handler is executed. > > Regards, > > Yuri > > *From: *Benjamin Richner <mailto:be...@bl...> > *Sent: *den 29 juli 2017 15:36 > *To: *asi...@li... > <mailto:asi...@li...> > *Subject: *[asio-users] Segfault in ~io_service() > > Hi everyone, > > I am experiencing a segmentation fault when the io_service is being > deleted (by means of going out of scope) and I am having trouble > tracking down the source of the problem. > > Now, before I delete the io_service, I > - delete the asio::io_service::work > - call io_service.stop(); > - let all my worker threads leave run() and return > - join all my worker threads in the main thread (my only remaining > thread running) > > Also worth pointing out, all my sockets are inside shared_ptr'd objects, > as is customary with asio, and at the point in time when ~io_service() > is called, none of these shared_ptr's should be around except those that > were bound with std::bind and then passed into calls like > asio::async_read. 
> > I do NOT > - wait for completion of all my pending async calls (io_service.stop() > stops whenever it does, and all run() calls return, but there could > still be some pending work, but I don't want to execute it) > - close all sockets > > First question: Does the library require me to do those two things > before I call ~io_service()? > Theoretically, in ~io_service(), all the remaining shared pointers > should be cleaned up, and thus the sockets inside, and thus they should > be automatically closed on their destructor call, right? > > Now, the crash I am experiencing is very finicky - I cannot reproduce it > with a debugger, but I reproduced a stack trace by manual inspection and > with the help of Dr. Mingw. > > I am using > - Windows 10 > - g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0 > - asio (standalone, asio-1.10.6.zip) as linked on > http://think-async.com/Asio/Download > > The following is the stack trace: > > [.\include\asio\detail\call_stack.hpp @ 108], static Value* top() > return elem ? elem->value_ : 0; > > [.\include\asio\implhandler_alloc_hook.ipp @ 66], void > asio_handler_deallocate(void* pointer, std::size_t size, ...) > thread_info::deallocate(call_stack::top(), pointer, size); > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template > <typename Handler> inline void deallocate(void* p, std::size_t s, > Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\impl\read.hpp @ 479], template <typename > AsyncReadStream, typename MutableBufferSequence, typename > CompletionCondition, typename ReadHandler> inline void > asio_handler_deallocate(void* pointer, std::size_t size, > read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, > ReadHandler>* this_handler) > asio_handler_alloc_helpers::deallocate( > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template > <typename Handler> inline void deallocate(void* p, std::size_t s, > Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) > ASIO_DEFINE_HANDLER_PTR(op), void reset() > asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \ > > [.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void > do_complete(io_service_impl* owner, operation* base, const > asio::error_code& result_ec, std::size_t bytes_transferred) > p.reset(); > > [.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy() > func_(0, this, asio::error_code(), 0); > > [.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void > win_iocp_io_service::shutdown_service() > static_cast<win_iocp_operation*>(overlapped)->destroy(); > > [.\include\asio\detail\impl\service_registry.ipp @ 36], > service_registry::~service_registry() > service->shutdown_service(); > > [.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service() > delete service_registry_; > > [...my own code] > > The variable 'elem' is not NULL, but it crashes on 'elem->value'. The > crash seems to happen whenever there are pending asio::async_read calls > that were not executed because I called io_service.stop(), but it is > highly timing-dependent. > > I am still working on a minimal example that does not drag along my > entire source code, but it's a hard to reproduce bug. > > Phew, that was a long post, thanks for making it here! I would > appreciate your help greatly. 
> > Regards, > Benjamin Richner > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
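A sketch of the order Benjamin settles on in the message above: drop the io_service::work guard, make sure no I/O object still has pending operations, let run() return, join the threads, and only then let the io_service be destroyed. Thread count and structure are illustrative; assumes the 1.64-era io_service::work API:

    #include <memory>
    #include <thread>
    #include <vector>
    #include <boost/asio.hpp>

    int main()
    {
        boost::asio::io_service ios;
        auto work = std::make_unique<boost::asio::io_service::work>(ios);

        std::vector<std::thread> pool;
        for (int i = 0; i != 2; ++i)
            pool.emplace_back([&ios] { ios.run(); });

        // ... create sockets, start async operations ...

        // Shutdown: do NOT call ios.stop(). Instead:
        // 1. close/cancel every socket, acceptor and timer so their pending
        //    operations complete (with errors) and stop re-arming themselves;
        // 2. drop the work guard so run() may return once the queue drains;
        work.reset();

        // 3. join; this blocks until every handler has actually executed.
        for (auto& t : pool)
            t.join();

        // 4. only now is it safe for ios to be destroyed (end of scope).
    }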
From: Allen <all...@gm...> - 2017-07-29 22:35:00
|
I think the key is to wait for all the threads that called io_service.run() to terminate before destroying the io_service. You can wait for a std:thead to terminate by calling std::thread.join() On Sat, Jul 29, 2017 at 9:35 AM, Benjamin Richner <be...@bl...> wrote: > Hi everyone, > > I am experiencing a segmentation fault when the io_service is being deleted > (by means of going out of scope) and I am having trouble tracking down the > source of the problem. > > Now, before I delete the io_service, I > - delete the asio::io_service::work > - call io_service.stop(); > - let all my worker threads leave run() and return > - join all my worker threads in the main thread (my only remaining thread > running) > > Also worth pointing out, all my sockets are inside shared_ptr'd objects, as > is customary with asio, and at the point in time when ~io_service() is > called, none of these shared_ptr's should be around except those that were > bound with std::bind and then passed into calls like asio::async_read. > > I do NOT > - wait for completion of all my pending async calls (io_service.stop() stops > whenever it does, and all run() calls return, but there could still be some > pending work, but I don't want to execute it) > - close all sockets > > First question: Does the library require me to do those two things before I > call ~io_service()? > Theoretically, in ~io_service(), all the remaining shared pointers should be > cleaned up, and thus the sockets inside, and thus they should be > automatically closed on their destructor call, right? > > Now, the crash I am experiencing is very finicky - I cannot reproduce it > with a debugger, but I reproduced a stack trace by manual inspection and > with the help of Dr. Mingw. > > I am using > - Windows 10 > - g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0 > - asio (standalone, asio-1.10.6.zip) as linked on > http://think-async.com/Asio/Download > > The following is the stack trace: > > [.\include\asio\detail\call_stack.hpp @ 108], static Value* top() > return elem ? elem->value_ : 0; > > [.\include\asio\implhandler_alloc_hook.ipp @ 66], void > asio_handler_deallocate(void* pointer, std::size_t size, ...) 
> thread_info::deallocate(call_stack::top(), pointer, size); > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename > Handler> inline void deallocate(void* p, std::size_t s, Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\impl\read.hpp @ 479], template <typename AsyncReadStream, > typename MutableBufferSequence, typename CompletionCondition, typename > ReadHandler> inline void asio_handler_deallocate(void* pointer, std::size_t > size, read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, > ReadHandler>* this_handler) > asio_handler_alloc_helpers::deallocate( > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename > Handler> inline void deallocate(void* p, std::size_t s, Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) > ASIO_DEFINE_HANDLER_PTR(op), void reset() > asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \ > > [.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void > do_complete(io_service_impl* owner, operation* base, const asio::error_code& > result_ec, std::size_t bytes_transferred) > p.reset(); > > [.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy() > func_(0, this, asio::error_code(), 0); > > [.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void > win_iocp_io_service::shutdown_service() > static_cast<win_iocp_operation*>(overlapped)->destroy(); > > [.\include\asio\detail\impl\service_registry.ipp @ 36], > service_registry::~service_registry() > service->shutdown_service(); > > [.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service() > delete service_registry_; > > [...my own code] > > The variable 'elem' is not NULL, but it crashes on 'elem->value'. The crash > seems to happen whenever there are pending asio::async_read calls that were > not executed because I called io_service.stop(), but it is highly > timing-dependent. > > I am still working on a minimal example that does not drag along my entire > source code, but it's a hard to reproduce bug. > > Phew, that was a long post, thanks for making it here! I would appreciate > your help greatly. > > Regards, > Benjamin Richner > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
From: Jeremy O. <je...@pl...> - 2017-07-29 22:13:22
|
Easiest thing to do is to compile with sanitizers or some other memory tracker to see where the fault happens. My guess is there's a bug in your code somewhere causing a double-free. On Sat, Jul 29, 2017 at 11:58 AM, Yuri Timenkov <yu...@ti...> wrote: > Hi Benjamin, > > > > io_service is the event loop which processes callbacks attached to async > operations. > > > > Sockets themselves don’t require io_service (mostly), but handlers attached > to async operations do because once I/O operation completes it posts handler > to io_service which already destroyed in your case (sockets and io_service > refer to each other by a plain reference, not a smart pointer). Even if you > close socket, corresponding handlers for all pending operations will be > executed with cancelled() error code (or similar). > > > > So safe way is to not call io_service::stop but rather destroy work object > and tell all pending operations to cancel (e.g. close or shutdown sockets, > cancel timers, etc). Only then when io_service::run gracefully returns it is > safe to destroy it. > > > > Calling io_service::stop() immediately stops processing any callbacks and > doesn’t wait for anything, you need to call io_service::run() again for safe > shutdown. > > > > There is no way to tell asio to not do anything, because any pending > operation is referenced from the kernel and the only way to cancel it is to > close or shutdown the socket and wait until corresponding handler is > executed. > > > > Regards, > > Yuri > > > > From: Benjamin Richner > Sent: den 29 juli 2017 15:36 > To: asi...@li... > Subject: [asio-users] Segfault in ~io_service() > > > > Hi everyone, > > I am experiencing a segmentation fault when the io_service is being > deleted (by means of going out of scope) and I am having trouble > tracking down the source of the problem. > > Now, before I delete the io_service, I > - delete the asio::io_service::work > - call io_service.stop(); > - let all my worker threads leave run() and return > - join all my worker threads in the main thread (my only remaining > thread running) > > Also worth pointing out, all my sockets are inside shared_ptr'd objects, > as is customary with asio, and at the point in time when ~io_service() > is called, none of these shared_ptr's should be around except those that > were bound with std::bind and then passed into calls like asio::async_read. > > I do NOT > - wait for completion of all my pending async calls (io_service.stop() > stops whenever it does, and all run() calls return, but there could > still be some pending work, but I don't want to execute it) > - close all sockets > > First question: Does the library require me to do those two things > before I call ~io_service()? > Theoretically, in ~io_service(), all the remaining shared pointers > should be cleaned up, and thus the sockets inside, and thus they should > be automatically closed on their destructor call, right? > > Now, the crash I am experiencing is very finicky - I cannot reproduce it > with a debugger, but I reproduced a stack trace by manual inspection and > with the help of Dr. Mingw. > > I am using > - Windows 10 > - g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0 > - asio (standalone, asio-1.10.6.zip) as linked on > http://think-async.com/Asio/Download > > The following is the stack trace: > > [.\include\asio\detail\call_stack.hpp @ 108], static Value* top() > return elem ? 
elem->value_ : 0; > > [.\include\asio\implhandler_alloc_hook.ipp @ 66], void > asio_handler_deallocate(void* pointer, std::size_t size, ...) > thread_info::deallocate(call_stack::top(), pointer, size); > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template > <typename Handler> inline void deallocate(void* p, std::size_t s, > Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\impl\read.hpp @ 479], template <typename > AsyncReadStream, typename MutableBufferSequence, typename > CompletionCondition, typename ReadHandler> inline void > asio_handler_deallocate(void* pointer, std::size_t size, > read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, > ReadHandler>* this_handler) > asio_handler_alloc_helpers::deallocate( > > [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template > <typename Handler> inline void deallocate(void* p, std::size_t s, > Handler& h) > asio_handler_deallocate(p, s, asio::detail::addressof(h)); > > [.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) > ASIO_DEFINE_HANDLER_PTR(op), void reset() > asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \ > > [.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void > do_complete(io_service_impl* owner, operation* base, const > asio::error_code& result_ec, std::size_t bytes_transferred) > p.reset(); > > [.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy() > func_(0, this, asio::error_code(), 0); > > [.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void > win_iocp_io_service::shutdown_service() > static_cast<win_iocp_operation*>(overlapped)->destroy(); > > [.\include\asio\detail\impl\service_registry.ipp @ 36], > service_registry::~service_registry() > service->shutdown_service(); > > [.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service() > delete service_registry_; > > [...my own code] > > The variable 'elem' is not NULL, but it crashes on 'elem->value'. The > crash seems to happen whenever there are pending asio::async_read calls > that were not executed because I called io_service.stop(), but it is > highly timing-dependent. > > I am still working on a minimal example that does not drag along my > entire source code, but it's a hard to reproduce bug. > > Phew, that was a long post, thanks for making it here! I would > appreciate your help greatly. > > Regards, > Benjamin Richner > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio > > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio > |
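Jeremy's suggestion is easy to try on toolchains that ship AddressSanitizer (GCC or Clang); the MinGW-w64 build mentioned later in this thread generally does not, in which case Dr. Memory or Application Verifier are the usual Windows alternatives. A typical sanitizer build is roughly the following (the source file name is just a placeholder):

  g++ -std=c++11 -g -O1 -fsanitize=address -fno-omit-frame-pointer server.cpp -o server

With -fsanitize=address, a double-free or use-after-free aborts immediately with a report showing the allocation stack, the first free and the faulting access, which usually pinpoints the offending handler.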
From: Yuri T. <yu...@ti...> - 2017-07-29 19:21:09
|
Hi Benjamin, io_service is the event loop which processes callbacks attached to async operations. Sockets themselves don’t require io_service (mostly), but handlers attached to async operations do because once I/O operation completes it posts handler to io_service which already destroyed in your case (sockets and io_service refer to each other by a plain reference, not a smart pointer). Even if you close socket, corresponding handlers for all pending operations will be executed with cancelled() error code (or similar). So safe way is to not call io_service::stop but rather destroy work object and tell all pending operations to cancel (e.g. close or shutdown sockets, cancel timers, etc). Only then when io_service::run gracefully returns it is safe to destroy it. Calling io_service::stop() immediately stops processing any callbacks and doesn’t wait for anything, you need to call io_service::run() again for safe shutdown. There is no way to tell asio to not do anything, because any pending operation is referenced from the kernel and the only way to cancel it is to close or shutdown the socket and wait until corresponding handler is executed. Regards, Yuri From: Benjamin Richner<mailto:be...@bl...> Sent: den 29 juli 2017 15:36 To: asi...@li...<mailto:asi...@li...> Subject: [asio-users] Segfault in ~io_service() Hi everyone, I am experiencing a segmentation fault when the io_service is being deleted (by means of going out of scope) and I am having trouble tracking down the source of the problem. Now, before I delete the io_service, I - delete the asio::io_service::work - call io_service.stop(); - let all my worker threads leave run() and return - join all my worker threads in the main thread (my only remaining thread running) Also worth pointing out, all my sockets are inside shared_ptr'd objects, as is customary with asio, and at the point in time when ~io_service() is called, none of these shared_ptr's should be around except those that were bound with std::bind and then passed into calls like asio::async_read. I do NOT - wait for completion of all my pending async calls (io_service.stop() stops whenever it does, and all run() calls return, but there could still be some pending work, but I don't want to execute it) - close all sockets First question: Does the library require me to do those two things before I call ~io_service()? Theoretically, in ~io_service(), all the remaining shared pointers should be cleaned up, and thus the sockets inside, and thus they should be automatically closed on their destructor call, right? Now, the crash I am experiencing is very finicky - I cannot reproduce it with a debugger, but I reproduced a stack trace by manual inspection and with the help of Dr. Mingw. I am using - Windows 10 - g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0 - asio (standalone, asio-1.10.6.zip) as linked on http://think-async.com/Asio/Download The following is the stack trace: [.\include\asio\detail\call_stack.hpp @ 108], static Value* top() return elem ? elem->value_ : 0; [.\include\asio\implhandler_alloc_hook.ipp @ 66], void asio_handler_deallocate(void* pointer, std::size_t size, ...) 
thread_info::deallocate(call_stack::top(), pointer, size); [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h) asio_handler_deallocate(p, s, asio::detail::addressof(h)); [.\include\asio\impl\read.hpp @ 479], template <typename AsyncReadStream, typename MutableBufferSequence, typename CompletionCondition, typename ReadHandler> inline void asio_handler_deallocate(void* pointer, std::size_t size, read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, ReadHandler>* this_handler) asio_handler_alloc_helpers::deallocate( [.\include\asio\detailhandler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h) asio_handler_deallocate(p, s, asio::detail::addressof(h)); [.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) ASIO_DEFINE_HANDLER_PTR(op), void reset() asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \ [.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void do_complete(io_service_impl* owner, operation* base, const asio::error_code& result_ec, std::size_t bytes_transferred) p.reset(); [.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy() func_(0, this, asio::error_code(), 0); [.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void win_iocp_io_service::shutdown_service() static_cast<win_iocp_operation*>(overlapped)->destroy(); [.\include\asio\detail\impl\service_registry.ipp @ 36], service_registry::~service_registry() service->shutdown_service(); [.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service() delete service_registry_; [...my own code] The variable 'elem' is not NULL, but it crashes on 'elem->value'. The crash seems to happen whenever there are pending asio::async_read calls that were not executed because I called io_service.stop(), but it is highly timing-dependent. I am still working on a minimal example that does not drag along my entire source code, but it's a hard to reproduce bug. Phew, that was a long post, thanks for making it here! I would appreciate your help greatly. Regards, Benjamin Richner ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ asio-users mailing list asi...@li... https://lists.sourceforge.net/lists/listinfo/asio-users _______________________________________________ Using Asio? List your project at http://think-async.com/Asio/WhoIsUsingAsio |
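To make that ordering concrete, here is a minimal self-contained sketch of the shutdown sequence Yuri describes, written against the standalone asio 1.10.x API (io_service, io_service::work). A timer stands in for the pending socket operations; in a real server the posted lambda would close or shutdown each open socket instead, so that every outstanding handler runs (with asio::error::operation_aborted) before run() returns:

#include <asio.hpp>
#include <chrono>
#include <memory>
#include <thread>
#include <vector>

int main()
{
    asio::io_service io_service;
    auto work = std::make_shared<asio::io_service::work>(io_service);

    asio::steady_timer timer(io_service, std::chrono::hours(1));
    timer.async_wait([](const asio::error_code&) {
        // Runs with asio::error::operation_aborted once the timer is cancelled.
    });

    std::vector<std::thread> pool;
    for (int i = 0; i < 2; ++i)
        pool.emplace_back([&] { io_service.run(); });

    // Shutdown: release the work object, cancel pending I/O from inside the
    // io_service, then let run() drain and return before the io_service dies.
    work.reset();
    io_service.post([&] { timer.cancel(); }); // for sockets: close()/shutdown()
    for (auto& t : pool)
        t.join();
    return 0; // safe: no handlers are pending when ~io_service() runs
}

The important difference from calling io_service::stop() is that nothing is abandoned: every handler either completes normally or completes with an error, so there is nothing left for shutdown_service() to destroy later.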
From: Benjamin R. <be...@bl...> - 2017-07-29 13:35:58
|
Hi everyone,

I am experiencing a segmentation fault when the io_service is being deleted (by means of going out of scope) and I am having trouble tracking down the source of the problem.

Now, before I delete the io_service, I
- delete the asio::io_service::work
- call io_service.stop();
- let all my worker threads leave run() and return
- join all my worker threads in the main thread (my only remaining thread running)

Also worth pointing out, all my sockets are inside shared_ptr'd objects, as is customary with asio, and at the point in time when ~io_service() is called, none of these shared_ptr's should be around except those that were bound with std::bind and then passed into calls like asio::async_read.

I do NOT
- wait for completion of all my pending async calls (io_service.stop() stops whenever it does, and all run() calls return, but there could still be some pending work, but I don't want to execute it)
- close all sockets

First question: Does the library require me to do those two things before I call ~io_service()? Theoretically, in ~io_service(), all the remaining shared pointers should be cleaned up, and thus the sockets inside, and thus they should be automatically closed on their destructor call, right?

Now, the crash I am experiencing is very finicky - I cannot reproduce it with a debugger, but I reproduced a stack trace by manual inspection and with the help of Dr. Mingw.

I am using
- Windows 10
- g++ (i686-win32-dwarf-rev1, Built by MinGW-W64 project) 6.3.0
- asio (standalone, asio-1.10.6.zip) as linked on http://think-async.com/Asio/Download

The following is the stack trace:

[.\include\asio\detail\call_stack.hpp @ 108], static Value* top()
    return elem ? elem->value_ : 0;

[.\include\asio\impl\handler_alloc_hook.ipp @ 66], void asio_handler_deallocate(void* pointer, std::size_t size, ...)
    thread_info::deallocate(call_stack::top(), pointer, size);

[.\include\asio\detail\handler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h)
    asio_handler_deallocate(p, s, asio::detail::addressof(h));

[.\include\asio\impl\read.hpp @ 479], template <typename AsyncReadStream, typename MutableBufferSequence, typename CompletionCondition, typename ReadHandler> inline void asio_handler_deallocate(void* pointer, std::size_t size, read_op<AsyncReadStream, MutableBufferSequence, CompletionCondition, ReadHandler>* this_handler)
    asio_handler_alloc_helpers::deallocate(

[.\include\asio\detail\handler_alloc_helpers.hpp @ 48], template <typename Handler> inline void deallocate(void* p, std::size_t s, Handler& h)
    asio_handler_deallocate(p, s, asio::detail::addressof(h));

[.\include\asio\detail\handler_alloc_helpers.hpp @ 73], (macro code) ASIO_DEFINE_HANDLER_PTR(op), void reset()
    asio_handler_alloc_helpers::deallocate(v, sizeof(op), *h); \

[.\include\asio\detail\win_iocp_socket_recv_op.hpp @ 89], static void do_complete(io_service_impl* owner, operation* base, const asio::error_code& result_ec, std::size_t bytes_transferred)
    p.reset();

[.\include\asio\detail\win_iocp_operation.hpp @ 50], void destroy()
    func_(0, this, asio::error_code(), 0);

[.\include\asio\detail\impl\win_iocp_io_service.ipp @ 125], void win_iocp_io_service::shutdown_service()
    static_cast<win_iocp_operation*>(overlapped)->destroy();

[.\include\asio\detail\impl\service_registry.ipp @ 36], service_registry::~service_registry()
    service->shutdown_service();

[.\include\asio\impl\io_service.ipp @ 52], io_service::~io_service()
    delete service_registry_;

[...my own code]

The variable 'elem' is not NULL, but it crashes on 'elem->value_'. The crash seems to happen whenever there are pending asio::async_read calls that were not executed because I called io_service.stop(), but it is highly timing-dependent.

I am still working on a minimal example that does not drag along my entire source code, but it's a hard-to-reproduce bug.

Phew, that was a long post, thanks for making it here! I would appreciate your help greatly.

Regards,
Benjamin Richner |
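For readers following the thread, the shared_ptr arrangement Benjamin describes usually looks something like the sketch below (the session type and member names are illustrative, not taken from his code). The point to notice is that every outstanding asio::async_read stores a copy of the shared_ptr inside its handler; those copies are released either when the handler runs, or, if the operation is abandoned, when shutdown_service() destroys the stored handlers during ~io_service():

#include <asio.hpp>
#include <array>
#include <memory>

struct session : std::enable_shared_from_this<session>
{
    explicit session(asio::io_service& io) : socket_(io) {}

    void start_read()
    {
        auto self = shared_from_this(); // the handler owns the session until it is invoked
        asio::async_read(socket_, asio::buffer(buf_),
            [self](const asio::error_code& ec, std::size_t /*bytes*/)
            {
                if (!ec)
                    self->start_read(); // re-arm; on error the last owner may go away here
            });
    }

    asio::ip::tcp::socket socket_;
    std::array<char, 1024> buf_;
};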
From: Thiago R. A. <thi...@gm...> - 2017-07-24 22:58:09
|
Hi all,

I'm trying to compile opendnp3 on Cygwin (Windows 10 x64), which relies on the standalone header-only variant of ASIO.

When I try to compile, it complains about W32_SOCKETS and says that I need to include -D__USE_W32_SOCKETS in the compiler options. When I add that to the CMakeLists.txt file, it goes a little bit further and then complains about not being able to convert 'long int*' to 'volatile int*' in winsock_init.ipp.

What is the status of Cygwin compatibility of the standalone version? Are there any "magic" pre-processor flags I can set to make it work?

Thanks,
Thiago Alves |
From: Robert B. <Rob...@di...> - 2017-07-23 19:29:26
|
Thanks Allen!! That works like a charm, so the server code is: struct server { tcp::acceptor acceptor_; const short port_; server(asio::io_context& io_context, short port) : acceptor_(io_context), port_(port) { start_accept(); } void start_accept() { tcp::endpoint endpoint(tcp::v4(), port_); acceptor_.open(endpoint.protocol()); acceptor_.bind(endpoint); acceptor_.listen(1); // accept 1 connection at a time acceptor_.async_accept([this](error_code ec, tcp::socket socket) { if (!ec) { make_shared<session>(move(socket), [&] { start_accept(); })->start(); acceptor_.close(); } else start_accept(); }); } }; with the lambda being called when session dies (client disconnects). Works perfectly!! Regards /R > -----Original Message----- > From: Allen [mailto:all...@gm...] > Sent: den 23 juli 2017 19:21 > To: asi...@li... > Subject: Re: [asio-users] Accept one connection only > > In that case, you probably have to close the acceptor as soon as one > connection is made, and then open it again when the connection is closed. In > other words, there is probably code in there that looks something like: > > boost::asio::ip::tcp::acceptor m_acceptor(m_io_service); > m_acceptor.open(endpoint.protocol()); > m_acceptor.bind(endpoint); > m_acceptor.listen(backlog); > m_acceptor.async_accept(m_socket, boost::bind(&Server::HandleAccept, > this, boost::asio::placeholders::error)); > > In your HandleAccept, you need to call: > > boost::system::error_code ec; > m_acceptor.close(ec); > > And then restart the whole thing (open, bind, listen, async_accept) after the > connection is closed. > > > On Sun, Jul 23, 2017 at 12:46 PM, Robert Bielik <Rob...@di...> > wrote: > > Tried that, i.e. only accepting the socket if zero connections are active, and > upon the disconnect call async_accept), but still I have no problem using > putty to connect to the server, with several instances. Granted, there is no > "connection" with the server, but I'd like putty to just fail directly with an > error. > > > > So that doesn't work, it seems... > > > > Regards > > /R > > > >> -----Original Message----- > >> From: Allen [mailto:all...@gm...] > >> Sent: den 23 juli 2017 18:36 > >> To: asi...@li... > >> Subject: Re: [asio-users] Accept one connection only > >> > >> that's what should happen if you call async_accept once, then don't > >> call it again until (a) after a connection is made and closed; or (b) > >> the async_accept handler is called with an error > >> > >> On Sun, Jul 23, 2017 at 12:24 PM, Robert Bielik > >> <Rob...@di...> > >> wrote: > >> > I'm using the TCP server example (with acceptor. async_accept), but > >> > I'd like > >> to modify it so that only a single connection should be possible. All > >> other connection attempts should result in an error. What need I do ? > >> > > >> > Regards > >> > /Robert > >> > > >> > > >> > ------------------------------------------------------------------- > >> > --- > >> > -------- Check out the vibrant tech community on one of the world's > >> > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot > >> > _______________________________________________ > >> > asio-users mailing list > >> > asi...@li... > >> > https://lists.sourceforge.net/lists/listinfo/asio-users > >> > _______________________________________________ > >> > Using Asio? 
List your project at > >> > http://think-async.com/Asio/WhoIsUsingAsio > >> > >> --------------------------------------------------------------------- > >> --------- Check out the vibrant tech community on one of the world's > >> most engaging tech sites, Slashdot.org! http://sdm.link/slashdot > >> _______________________________________________ > >> asio-users mailing list > >> asi...@li... > >> https://lists.sourceforge.net/lists/listinfo/asio-users > >> _______________________________________________ > >> Using Asio? List your project at > >> http://think-async.com/Asio/WhoIsUsingAsio > > > > ---------------------------------------------------------------------- > > -------- Check out the vibrant tech community on one of the world's > > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > _______________________________________________ > > asio-users mailing list > > asi...@li... > > https://lists.sourceforge.net/lists/listinfo/asio-users > > _______________________________________________ > > Using Asio? List your project at > > http://think-async.com/Asio/WhoIsUsingAsio > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most engaging > tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
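Robert's server relies on the session notifying it when the client goes away; a sketch of what such a session might look like is below. It assumes the same headers and using-declarations as the server snippet above (asio, tcp, error_code), and the class and member names are illustrative rather than taken from his code:

struct session : std::enable_shared_from_this<session>
{
    session(tcp::socket socket, std::function<void()> on_close)
        : socket_(std::move(socket)), on_close_(std::move(on_close)) {}

    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(asio::buffer(buf_),
            [this, self](error_code ec, std::size_t n)
            {
                if (ec) { on_close_(); return; } // disconnect: let the server re-open the acceptor
                // ... handle the n bytes received ...
                do_read();
            });
    }

    tcp::socket socket_;
    std::array<char, 1024> buf_;
    std::function<void()> on_close_;
};

One caveat: because the on_close callback is invoked from an I/O thread and touches the acceptor, this stays race-free only while a single thread runs the io_context; with multiple threads it is safer to post() the callback back onto the acceptor's strand.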
From: Allen <all...@gm...> - 2017-07-23 17:33:02
|
FYI, calling acceptor_.close() will throw an exception on error. In comparison, calling: boost::system::error_code ec; m_acceptor.close(ec); will return the error in ec instead of throwing an exception. On Sun, Jul 23, 2017 at 1:06 PM, Robert Bielik <Rob...@di...> wrote: > If, when accepting the first client, I call acceptor_.close() I get the behaviour I want, but after that I can't get the acceptor active again, i.e. even a single client connection fails... > > A bit like this example (https://stackoverflow.com/a/5260390/255635), though haven't gotten it to start accepting connections again... > > Regards > /R > >> -----Original Message----- >> From: Robert Bielik [mailto:Rob...@di...] >> Sent: den 23 juli 2017 18:47 >> To: asi...@li... >> Subject: Re: [asio-users] Accept one connection only >> >> Tried that, i.e. only accepting the socket if zero connections are active, and >> upon the disconnect call async_accept), but still I have no problem using >> putty to connect to the server, with several instances. Granted, there is no >> "connection" with the server, but I'd like putty to just fail directly with an >> error. >> >> So that doesn't work, it seems... >> >> Regards >> /R >> >> > -----Original Message----- >> > From: Allen [mailto:all...@gm...] >> > Sent: den 23 juli 2017 18:36 >> > To: asi...@li... >> > Subject: Re: [asio-users] Accept one connection only >> > >> > that's what should happen if you call async_accept once, then don't >> > call it again until (a) after a connection is made and closed; or (b) >> > the async_accept handler is called with an error >> > >> > On Sun, Jul 23, 2017 at 12:24 PM, Robert Bielik >> > <Rob...@di...> >> > wrote: >> > > I'm using the TCP server example (with acceptor. async_accept), but >> > > I'd like >> > to modify it so that only a single connection should be possible. All >> > other connection attempts should result in an error. What need I do ? >> > > >> > > Regards >> > > /Robert >> > > >> > > >> > > -------------------------------------------------------------------- >> > > -- >> > > -------- Check out the vibrant tech community on one of the world's >> > > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> > > _______________________________________________ >> > > asio-users mailing list >> > > asi...@li... >> > > https://lists.sourceforge.net/lists/listinfo/asio-users >> > > _______________________________________________ >> > > Using Asio? List your project at >> > > http://think-async.com/Asio/WhoIsUsingAsio >> > >> > ---------------------------------------------------------------------- >> > -------- Check out the vibrant tech community on one of the world's >> > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> > _______________________________________________ >> > asio-users mailing list >> > asi...@li... >> > https://lists.sourceforge.net/lists/listinfo/asio-users >> > _______________________________________________ >> > Using Asio? List your project at >> > http://think-async.com/Asio/WhoIsUsingAsio >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most engaging >> tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> asio-users mailing list >> asi...@li... >> https://lists.sourceforge.net/lists/listinfo/asio-users >> _______________________________________________ >> Using Asio? 
List your project at >> http://think-async.com/Asio/WhoIsUsingAsio > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
From: Allen <all...@gm...> - 2017-07-23 17:21:08
|
In that case, you probably have to close the acceptor as soon as one connection is made, and then open it again when the connection is closed. In other words, there is probably code in there that looks something like: boost::asio::ip::tcp::acceptor m_acceptor(m_io_service); m_acceptor.open(endpoint.protocol()); m_acceptor.bind(endpoint); m_acceptor.listen(backlog); m_acceptor.async_accept(m_socket, boost::bind(&Server::HandleAccept, this, boost::asio::placeholders::error)); In your HandleAccept, you need to call: boost::system::error_code ec; m_acceptor.close(ec); And then restart the whole thing (open, bind, listen, async_accept) after the connection is closed. On Sun, Jul 23, 2017 at 12:46 PM, Robert Bielik <Rob...@di...> wrote: > Tried that, i.e. only accepting the socket if zero connections are active, and upon the disconnect call async_accept), but still I have no problem using putty to connect to the server, with several instances. Granted, there is no "connection" with the server, but I'd like putty to just fail directly with an error. > > So that doesn't work, it seems... > > Regards > /R > >> -----Original Message----- >> From: Allen [mailto:all...@gm...] >> Sent: den 23 juli 2017 18:36 >> To: asi...@li... >> Subject: Re: [asio-users] Accept one connection only >> >> that's what should happen if you call async_accept once, then don't call it >> again until (a) after a connection is made and closed; or (b) the async_accept >> handler is called with an error >> >> On Sun, Jul 23, 2017 at 12:24 PM, Robert Bielik <Rob...@di...> >> wrote: >> > I'm using the TCP server example (with acceptor. async_accept), but I'd like >> to modify it so that only a single connection should be possible. All other >> connection attempts should result in an error. What need I do ? >> > >> > Regards >> > /Robert >> > >> > >> > ---------------------------------------------------------------------- >> > -------- Check out the vibrant tech community on one of the world's >> > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> > _______________________________________________ >> > asio-users mailing list >> > asi...@li... >> > https://lists.sourceforge.net/lists/listinfo/asio-users >> > _______________________________________________ >> > Using Asio? List your project at >> > http://think-async.com/Asio/WhoIsUsingAsio >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most engaging >> tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> asio-users mailing list >> asi...@li... >> https://lists.sourceforge.net/lists/listinfo/asio-users >> _______________________________________________ >> Using Asio? List your project at >> http://think-async.com/Asio/WhoIsUsingAsio > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
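Putting Allen's "restart the whole thing" step in one place, a re-arm function along these lines can be called from whatever code learns that the single connection has closed. The m_endpoint member is an assumption here (Allen's snippet built the endpoint locally); the other names mirror his example, and error checks are omitted for brevity:

void Server::RestartAccept()
{
    boost::system::error_code ec;
    m_acceptor.close(ec); // harmless if the acceptor is already closed

    m_acceptor.open(m_endpoint.protocol(), ec);
    m_acceptor.bind(m_endpoint, ec);
    m_acceptor.listen(1, ec); // backlog of 1 while only one client is served at a time
    m_acceptor.async_accept(m_socket,
        boost::bind(&Server::HandleAccept, this, boost::asio::placeholders::error));
}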
From: Robert B. <Rob...@di...> - 2017-07-23 17:06:57
|
If, when accepting the first client, I call acceptor_.close() I get the behaviour I want, but after that I can't get the acceptor active again, i.e. even a single client connection fails... A bit like this example (https://stackoverflow.com/a/5260390/255635), though haven't gotten it to start accepting connections again... Regards /R > -----Original Message----- > From: Robert Bielik [mailto:Rob...@di...] > Sent: den 23 juli 2017 18:47 > To: asi...@li... > Subject: Re: [asio-users] Accept one connection only > > Tried that, i.e. only accepting the socket if zero connections are active, and > upon the disconnect call async_accept), but still I have no problem using > putty to connect to the server, with several instances. Granted, there is no > "connection" with the server, but I'd like putty to just fail directly with an > error. > > So that doesn't work, it seems... > > Regards > /R > > > -----Original Message----- > > From: Allen [mailto:all...@gm...] > > Sent: den 23 juli 2017 18:36 > > To: asi...@li... > > Subject: Re: [asio-users] Accept one connection only > > > > that's what should happen if you call async_accept once, then don't > > call it again until (a) after a connection is made and closed; or (b) > > the async_accept handler is called with an error > > > > On Sun, Jul 23, 2017 at 12:24 PM, Robert Bielik > > <Rob...@di...> > > wrote: > > > I'm using the TCP server example (with acceptor. async_accept), but > > > I'd like > > to modify it so that only a single connection should be possible. All > > other connection attempts should result in an error. What need I do ? > > > > > > Regards > > > /Robert > > > > > > > > > -------------------------------------------------------------------- > > > -- > > > -------- Check out the vibrant tech community on one of the world's > > > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > > > asio-users mailing list > > > asi...@li... > > > https://lists.sourceforge.net/lists/listinfo/asio-users > > > _______________________________________________ > > > Using Asio? List your project at > > > http://think-async.com/Asio/WhoIsUsingAsio > > > > ---------------------------------------------------------------------- > > -------- Check out the vibrant tech community on one of the world's > > most engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > _______________________________________________ > > asio-users mailing list > > asi...@li... > > https://lists.sourceforge.net/lists/listinfo/asio-users > > _______________________________________________ > > Using Asio? List your project at > > http://think-async.com/Asio/WhoIsUsingAsio > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most engaging > tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |