asio-users Mailing List for asio C++ library (Page 6)
From: Vinnie F. <vin...@gm...> - 2017-08-15 14:27:44
On Tue, Aug 15, 2017 at 7:19 AM, Vinnie Falco <vin...@gm...> wrote:
> ... Sorry, that post has a small inaccuracy.
> io_service::strand()'s dispatch mechanism is intelligent: if the caller
> is already on the strand then it performs an efficient continuation
> instead of going through the dispatch mechanism. You can inspect that
> code here:

This is the correct link:
<https://github.com/boostorg/asio/blob/b002097359f246b7b1478775251dfb153ab3ff4b/include/boost/asio/detail/impl/strand_service.hpp#L58>

io_service::strand has overhead, no question. A popular alternative is to call io_service::run() from a single thread; then you don't need a strand at all. If you want to take advantage of multiple cores, use multiple instances of io_service, each with its own single thread calling run(). This greatly simplifies code, since there is no need to worry about concurrency. You just have to keep the objects balanced with respect to the number of connections they are handling at once.

Thanks
From: Vinnie F. <vin...@gm...> - 2017-08-15 14:19:10
On Mon, Aug 14, 2017 at 11:47 PM, Aaron Koolen-Bourke <aar...@gm...> wrote:
> If ws.async_read_some is called on ws, each part of that chain of streams is
> going to have to call to the next layer to perform the read, and each of
> those calls will have a completion handler. Each of those completion
> handlers will be called in the strand. If any of those streams do multiple
> operations to fulfil the async_read_some then it will be calling back into
> its composed operation within the strand. Have I got that right?

Nope. Only the first completion handler is invoked using the strand mechanism. I'm talking about the deepest stream in the chain, corresponding to the object which actually performs network I/O. This is always derived from asio::basic_io_object; an example would be asio::ip::tcp::socket. Subsequent completion handlers are simply called directly, since the composed operation already knows it is in the right context. For example:

<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/include/boost/beast/http/impl/write.ipp#L170>

In the code above, `h_(ec)` invokes the final handler. Note that there is no call to asio_handler_invoke; the handler is simply called directly. This is because the composed operation knows that, at the point when it makes the upcall (calls the final handler), it has already been dispatched to the proper context. That is the purpose of these lines:

<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/include/boost/beast/http/impl/write.ipp#L138>
<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/include/boost/beast/http/impl/write.ipp#L150>

io_service::post() puts the composed operation in the proper context (because the composed operation provides the proper asio_handler_invoke hooking mechanism on line 115).

io_service::strand()'s dispatch mechanism is intelligent: if the caller is already on the strand then it performs an efficient continuation instead of going through the dispatch mechanism. You can inspect that code here:

<https://github.com/boostorg/asio/blob/b002097359f246b7b1478775251dfb153ab3ff4b/include/boost/asio/detail/impl/strand_service.ipp#L94>

When you have nested streams that also have composed operations, all of the types are visible to the compiler. The compiler can "see through" the entire sequence of calls for invoking the completion handler and perform significant optimizations; usually it all collapses down to one function call. This is explained in N3747, page 7:

<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3747.pdf>

You really should try writing a small program, or using an existing one (a Beast example, maybe?), set a breakpoint inside a composed operation, and step through the series of calls before, during, and after the invocation of the final handler. That way you will gain a clear understanding of what the Asio implementation is doing.

Thanks
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-15 06:48:14
> No. Along the chain of initiating functions you have calls to
> std::move() which should be inexpensive (relative to the cost of an
> actual I/O). Depending on the initiating function it might be to a
> dynamically allocated piece of memory (for composed operations that
> have non-trivial state, such as some of Beast's operations). The strand
> queuing mechanism is not invoked until an intermediate completion
> handler needs to be called, or when the final handler is called.

Sorry if I wasn't clear. I meant that the functions (async_read_some) of each layer will result in a strand-based completion. Using your example:

    beast::websocket::stream<
        beast::buffered_read_stream<
            ssl::stream<beast::test::stream>>> ws{ios, ctx};

If ws.async_read_some is called on ws, each part of that chain of streams is going to have to call the next layer to perform the read, and each of those calls will have a completion handler. Each of those completion handlers will be called in the strand. If any of those streams do multiple operations to fulfil the async_read_some, then it will be calling back into its composed operation within the strand. Have I got that right?

> I hope I have provided some useful information!

Certainly. I might not get exactly what I want, but I think I can work with it for now.

Cheers
From: Vinnie F. <vin...@gm...> - 2017-08-15 05:12:06
On Mon, Aug 14, 2017 at 9:30 PM, Aaron Koolen-Bourke <aar...@gm...> wrote:
> Just for the record, I am using the Networking TS, where asio_handler_invoke is
> gone. However, I will likely end up at this point with a special executor,
> which seems to be the replacement.

The return value of get_executor() fulfills the same role as asio_handler_invoke().

> I have read that document (maybe I need to read it again) and sure, it's better
> than existing futures, but it doesn't explicitly go into detail about
> mechanisms for strands. The code uses various checks, a mutex, etc. to manage
> the FIFO nature. Understandable, but this would be for every node in the
> chain, would it not?

No. Along the chain of initiating functions you have calls to std::move() which should be inexpensive (relative to the cost of an actual I/O). Depending on the initiating function it might be to a dynamically allocated piece of memory (for composed operations that have non-trivial state, such as some of Beast's operations). The strand queuing mechanism is not invoked until an intermediate completion handler needs to be called, or when the final handler is called.

> Also I'm not sure if the Networking TS does handler allocs like
> asio_handler_allocate (I haven't looked into the code in depth yet)

The TS implementation performs an identical set of steps as the current Boost.Asio; it just spells asio_handler_allocate as get_associated_allocator(). Which is a great cosmetic improvement of course, but does not change the runtime behavior.

> We are typically sensitive to heap allocations but they can typically be
> solved with pools and various memory allocation schemes, so I'm not overly
> concerned just yet.

I think that's a good posture to take. Here's an example of an allocator which wraps the completion handler and does a pretty good job of using pre-allocated memory. It is transparent: just call wrap() around your completion handler at call sites, and use the allocator in any AllocatorAware containers, e.g. std::vector or your own classes:

<https://github.com/boostorg/beast/blob/develop/example/common/session_alloc.hpp#L174>

I will port it to Net-TS when Boost.Asio is updated this year; it's not hard to modify for Net-TS since it already meets the requirements of Allocator.

> Evidence is great, but each problem can be unique and one person's idea of
> fast is not necessarily another's. We build internet gateway software and
> this re-architecture is focusing quite heavily on scalability and latency,
> so we need to investigate and make educated choices.
>
> As usual, thanks for taking the time to answer my questions

I hope I have provided some useful information!

Thanks
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-15 04:31:39
> That mechanism already exists, it is called asio_handler_invoke and it
> is a "customization point"

Just for the record, I am using the Networking TS, where asio_handler_invoke is gone. However, I will likely end up at this point with a special executor, which seems to be the replacement.

> A full explanation of performance for completion notification
> of nested streams is given in "A Universal Model for Asynchronous
> Operations" written by the Asio author (Christopher Kohlhoff) in N3747
> <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3747.pdf>

I have read that document (maybe I need to read it again) and sure, it's better than existing futures, but it doesn't explicitly go into detail about mechanisms for strands. The code uses various checks, a mutex, etc. to manage the FIFO nature. Understandable, but this would be for every node in the chain, would it not?

> Have you done any performance measurements?

No, but when investigating library candidates and reworking a framework, those details are of interest; they can change how one approaches a problem. I am not keen on rewriting a framework twice because it's later discovered some detail is unacceptable.

> So what? Are you complaining about the need for heap allocations for
> an event that occurs only a few times for the lifetime of a
> connection? Have you measured the impact on performance? Are you aware
> that every time you call an asynchronous initiating function, Asio
> calls asio_handler_allocate to allocate memory to store the handler?
> How much does calling the allocation function a few more times for the
> lifetime of a connection impact performance measurably, if at all?

I wouldn't say I'm complaining, just investigating and raising concerns. Also I'm not sure if the Networking TS does handler allocs like asio_handler_allocate (I haven't looked into the code in depth yet). We are typically sensitive to heap allocations, but they can typically be solved with pools and various memory allocation schemes, so I'm not overly concerned just yet.

> It sounds to me like you 1. already have a design in mind, and 2. have
> made unfounded performance assumptions about other approaches. I would
> suggest a more evidence-based approach, and being open-minded to
> established practice which comes from available, working examples.

I have no fixed design yet, but the more I investigate the more solutions are presenting themselves. I've said before that part of our requirements may be solvable in a different place, outside of the transport layer. Evidence is great, but each problem can be unique, and one person's idea of fast is not necessarily another's. We build internet gateway software and this re-architecture is focusing quite heavily on scalability and latency, so we need to investigate and make educated choices.

As usual, thanks for taking the time to answer my questions.
From: Vinnie F. <vin...@gm...> - 2017-08-15 03:24:19
On Mon, Aug 14, 2017 at 8:08 PM, Aaron Koolen-Bourke <aar...@gm...> wrote:
> If SSL requires that you have to use it in a strand or else it breaks (and
> it's running asynchronously), why shouldn't it deal with that itself

"Separation of concerns"
<https://en.wikipedia.org/wiki/Separation_of_concerns>

SSL does not require that you use a strand; it only requires that you use an appropriate technique to prevent the stream object from being accessed in a way that breaks invariants. There are several ways to do this:

1. Only call io_service::run from a single thread
2. Use an explicit strand of type io_service::strand
3. Use a user-defined type and dispatching mechanism

For 3, this is accomplished with suitable overloads of asio_handler_invoke for the user-defined type.

Number 1 is popular, and if you create a separate io_service for each thread (distributing the streams evenly between the set of io_service objects) you can get high performance without the need for explicit strands.

My point is that your idea to push the responsibility of safe access onto ssl::stream prevents choices 1 and 3 above.

> (or have some mechanism for supporting it within the SSL stream).

That mechanism already exists; it is called asio_handler_invoke and it is a "customization point":

<http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/asio_handler_invoke.html>
<http://ericniebler.com/2014/10/21/customization-point-design-in-c11-and-beyond/>

> if you have a layer of streams you need to have the strand at the client
> call and suffer the performance overhead at every place in the chain just
> because one step in the chain requires it.

That is incorrect; the overhead is the same no matter the number of streams. The performance characteristic you described, of decreasing performance per wrapper, is more applicable to std::future-based completion notification mechanisms. Completion handlers were designed to be composable in precisely the manner I outlined in my 4-wrapper example. A full explanation of performance for completion notification of nested streams is given in "A Universal Model for Asynchronous Operations", written by the Asio author (Christopher Kohlhoff) in N3747:

<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3747.pdf>

> some thought about chaining and performance would have gone a long way.

Have you done any performance measurements?

> I'm not entirely sure of what the best solution is (type-traited system
> to choose wrappers for each party in the chain?) but certainly what it
> is now, for the use case I have, it does seem to be troublesome.

Why?

> Your example of 4 streams is nice but in reality the stream (in my case) is
> built up over time. I don't know the full chain right at connection start.
> Some layers may be added/inserted and some may be removed.

So? Use move-construction to assemble new streams from existing ones. I do this in the examples I linked. Here, an HTTP session is turned into a WebSocket session:

<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/advanced/server/advanced_server.cpp#L532>

SSL-capable version:
<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/advanced/server-flex/advanced_server_flex.cpp#L718>

To get around the boost::asio::ssl::stream limitation on move construction, I have created my own movable ssl::stream wrapper:

<https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/common/ssl_stream.hpp#L23>

The wrapper will not be necessary once Boost.Asio is updated.

> Even without that complexity, one would start with a socket, upon SSL
> detection create a new connection (more heap allocs and moving of members),
> then after CONNECT there might be another SSL for inspection, so another
> connection creation with another layered type.

So what? Are you complaining about the need for heap allocations for an event that occurs only a few times for the lifetime of a connection? Have you measured the impact on performance? Are you aware that every time you call an asynchronous initiating function, Asio calls asio_handler_allocate to allocate memory to store the handler? How much does calling the allocation function a few more times for the lifetime of a connection impact performance measurably, if at all?

> I realise there's many ways to solve this problem and a pure data pipeline
> (outside of the actual socket/stream) might be best for some of this,

It sounds to me like you 1. already have a design in mind, and 2. have made unfounded performance assumptions about other approaches. I would suggest a more evidence-based approach, and being open-minded to established practice which comes from available, working examples.

Thanks
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-15 03:09:31
> Why should the algorithm for implementing SSL over TCP/IP have to know
> about the concurrency model chosen by the application? The benefit of
> Asio's design is that it decouples that logic. You want to bring them
> back together? I don't see that as particularly desirable.

If SSL requires that you have to use it in a strand or else it breaks (and it's running asynchronously), why shouldn't it deal with that itself (or have some mechanism for supporting it within the SSL stream)? As it stands, if you have a layer of streams you need to have the strand at the client call and suffer the performance overhead at every place in the chain just because one step in the chain requires it. SSL streams support asynchronous operations and layering, so some thought about chaining and performance would have gone a long way. I'm not entirely sure what the best solution is (a type-traited system to choose wrappers for each party in the chain?) but certainly what it is now, for the use case I have, does seem to be troublesome.

Your example of 4 streams is nice, but in reality the stream (in my case) is built up over time. I don't know the full chain right at connection start. Some layers may be added/inserted and some may be removed. Even without that complexity, one would start with a socket; upon SSL detection create a new connection (more heap allocs and moving of members); then after CONNECT there might be another SSL for inspection, so another connection creation with another layered type.

I realise there are many ways to solve this problem, and a pure data pipeline (outside of the actual socket/stream) might be best for some of this, so don't think I'm asking for someone to write my software. I just saw that there was layering support in Asio for SSL and was really hoping it could provide a high-speed, flexible, and dynamic pipeline.

Thanks
From: Vinnie F. <vin...@gm...> - 2017-08-14 21:31:30
On Mon, Aug 14, 2017 at 1:58 PM, Aaron Koolen-Bourke <aar...@gm...> wrote:
> it's a shame that there isn't a solution built into ssl::stream somehow

Why should the algorithm for implementing SSL over TCP/IP have to know about the concurrency model chosen by the application? The benefit of Asio's design is that it decouples that logic. You want to bring them back together? I don't see that as particularly desirable.

> unsolvable...I imagine, is the concept of layered
> mutating data pipelines which is really my original goal.

Lots of applications do layering, but they use the CRTP approach I originally outlined. For example, here's a pipeline of 4 streams:

    boost::asio::io_service ios;
    boost::asio::ssl::context ctx;
    beast::websocket::stream<
        beast::buffered_read_stream<
            ssl::stream<beast::test::stream>>> ws{ios, ctx};

I layer streams all the time in the Beast unit tests; it works great.

This author is layering Apple's HomeKit on top of Beast HTTP, using the Asio SSL stream on a TCP/IP socket:
<https://github.com/djarek/gabia>

Another author is layering Bredis on tcp::socket or ssl::stream<tcp::socket>:
<https://github.com/basiliscos/cpp-bredis>

This approach will work for you as well.

Thanks
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-14 20:58:36
|
> > How would the ssl::stream know that there's an implicit strand? That
> > would be annoying, then ssl::stream would have to pay for strands even
> > for the case where there's only one thread calling io_service::run.

Well, I would have just used asio::crystal_ball :). So, fair enough, but it's a shame that there isn't a solution built into ssl::stream somehow (maybe it was looked at and deemed unsolvable within the ASIO scope), because SSL upgrade is just a tad common. Also unsolvable, I imagine, is the concept of layered mutating data pipelines, which is really my original goal.

Cheers

On Mon, 14 Aug 2017 at 18:16 Vinnie Falco <vin...@gm...> wrote:
> On Sun, Aug 13, 2017 at 10:15 PM, Aaron Koolen-Bourke
> <aar...@gm...> wrote:
> > Even though you say the read_some and write_some
> > operations each read and write, isn't it the job of the stream to ensure
> > they are stranded by wrapping them the appropriate executor?
>
> How would the ssl::stream know that there's an implicit strand? That
> would be annoying, then ssl::stream would have to pay for strands even
> for the case where there's only one thread calling io_service::run.
>
> > Also, looking at your example, your stranding is in the base class. I assume
> > that's for convenience
>
> Right!
>
> Thanks |
From: Vinnie F. <vin...@gm...> - 2017-08-14 06:16:46
|
On Sun, Aug 13, 2017 at 10:15 PM, Aaron Koolen-Bourke <aar...@gm...> wrote: > Even though you say the read_some and write_some > operations each read and write, isn't it the job of the stream to ensure > they are stranded by wrapping them the appropriate executor? How would the ssl::stream know that there's an implicit strand? That would be annoying, then ssl::stream would have to pay for strands even for the case where there's only one thread calling io_service::run. > Also, looking at your example, your stranding is in the base class. I assume > that's for convenience Right! Thanks |
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-14 05:16:18
|
Hi Vinnie. I've been thinking about the strands for ssl::streams and it's still bugging me. Even though you say the read_some and write_some operations each read and write, isn't it the job of the stream to ensure they are stranded by wrapping them in the appropriate executor? Then the final handoff can be to whatever completion handler the caller passed. The client code itself, if half-duplex, shouldn't really have to bother, I would have thought.

Also, looking at your example, your stranding is in the base class. I assume that's for convenience, as it looks like, as the example stands, you can't have stranded and non-stranded "do_read_header" functions that share code from the base, short of exposing some strand/executor getter in the implementations.

Cheers

On Mon, Aug 14, 2017 at 9:44 AM, Aaron Koolen-Bourke <aar...@gm...> wrote:
>
>> The composed operations for ssl::stream's async_read_some and
>> async_write_some functions perform both reading and writing, so a
>> strand is needed even when the application level protocol is
>> half-duplex.
>
> OK, gotcha, thanks.
>
> Regarding this statement:
>
>> It sounds like you need two separate SSL streams, each with their own
>> state. I don't see how changing the style of interface avoids that.
>
> Yes we do need two streams, one SSL working over the other. I was just
> expecting to have a ssl::stream<ssl::stream<tcp::socket>>. Is that what
> you meant or is there better way to achieve this with ASIO?
>
> The concrete interface wouldn't avoid two streams but it meant I could (at
> runtime) build a chain of streams/filters and not have to recreate their
> owners (connection objects) if I added another layer. I should say we have
> an SSL implementation in our legacy code but we were hoping to use ASIO's
> version in our layering as it's already async friendly and built for ASIO.
> I might have to end up having to build my own "filtering_stream" that
> passes data through the chain with one of those chains being our own
> homegrown SSL implementation, potentially more than once. As I said I was
> really quite keen on not having to reinvent that sort of thing when it's
> "almost" there with ASIO.
>
> Cheers |
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-13 21:44:48
|
> The composed operations for ssl::stream's async_read_some and > async_write_some functions perform both reading and writing, so a > strand is needed even when the application level protocol is > half-duplex. > OK, gotcha, thanks. Regarding this statement: > It sounds like you need two separate SSL streams, each with their own > state. I don't see how changing the style of interface avoids that. Yes we do need two streams, one SSL working over the other. I was just expecting to have a ssl::stream<ssl::stream<tcp::socket>>. Is that what you meant or is there better way to achieve this with ASIO? The concrete interface wouldn't avoid two streams but it meant I could (at runtime) build a chain of streams/filters and not have to recreate their owners (connection objects) if I added another layer. I should say we have an SSL implementation in our legacy code but we were hoping to use ASIO's version in our layering as it's already async friendly and built for ASIO. I might have to end up having to build my own "filtering_stream" that passes data through the chain with one of those chains being our own homegrown SSL implementation, potentially more than once. As I said I was really quite keen on not having to reinvent that sort of thing when it's "almost" there with ASIO. Cheers |
From: Vinnie F. <vin...@gm...> - 2017-08-11 22:01:19
|
On Fri, Aug 11, 2017 at 2:54 PM, Vinnie Falco <vin...@gm...> wrote: > The composed operations for ssl::stream's async_read_some and > async_write_some functions perform both reading and writing, so a > strand is needed even when the application level protocol is > half-duplex. This statement was not correctly worded, here is the edited version: The composed operations for ssl::stream's async_read_some and async_write_some functions each[*] perform both reading and writing, so a strand is needed even when the application level protocol is half-duplex. [*] added Thanks |
From: Vinnie F. <vin...@gm...> - 2017-08-11 21:54:43
|
On Fri, Aug 11, 2017 at 2:44 PM, Aaron Koolen-Bourke <aar...@gm...> wrote: > BTW: Small aside. Looking through your flex example briefly, I notice you > wrap everything in a strand, however HTTP being a half duplex protocol > doesn't really need that. What's your reason there? I might have missed > something in the code. The composed operations for ssl::stream's async_read_some and async_write_some functions perform both reading and writing, so a strand is needed even when the application level protocol is half-duplex. The reason I include the strand even in non-ssl servers is because users have an overwhelming tendency to copy and paste the examples. The strand is an extra layer of protection in case they change the code to access the stream objects in an unsafe fashion. Sophisticated users will know they don't need the strand, and unsophisticated users won't notice the minor performance penalty :) Thanks |
From: Andrey G. <and...@gm...> - 2017-08-11 21:47:52
|
> And if you type-erase the completion handler then you will break invariants. For example if you type-erase a strand-wrapped completion handler, it will no longer be dispatched to the strand. I've come up with this class for type-erasing asio streams once https://gist.github.com/andrusha97/98992a133f9bf186010c979913c052e3 Doesn't it solve this problem? On Fri, Aug 11, 2017 at 5:19 PM, Vinnie Falco <vin...@gm...> wrote: > On Fri, Aug 11, 2017 at 2:45 AM, Aaron Koolen-Bourke > <aar...@gm...> wrote: >> So in effect we'd be creating and destroying up to 3 session objects per >> client connection. We really wanted to avoid this if we can, especially if >> the session object starts to get heavyweight. > > It sounds like you need two separate SSL streams, each with their own > state. I don't see how changing the style of interface avoids that. > >> I feel that the solution is going to be custom wrappers around all the >> asio::socket and ssl::stream objects so we can treat them with a >> standardised interface > > asio::socket and ssl::stream already have "standardised interfaces" > called AsyncReadStream and AsyncWriteStream (plus their synchronous > equivalents): > > <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/AsyncReadStream.html> > <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/AsyncWriteStream.html> > >> unfortunately this would mean we'd need to have >> concrete versions of an async_read_some etc with concrete completion token >> (likely std::function) and buffer types. My boss is preferring this method >> for flexibility of adding whatever layer we want > > You are describing type-erasure of the underlying stream. I see only > downsides to this, especially with respect to performance, since > type-erasing buffer sequences usually means a dynamic allocation (e.g. > `std::vector<asio::const_buffer>`). And if you type-erase the > completion handler then you will break invariants. 
For example if you > type-erase a strand-wrapped completion handler, it will no longer be > dispatched to the strand. > >> I wanted to come up >> with some way to avoid it so we can don't have to have this custom stuff and >> concrete behaviour all through our software. > > The obvious solution is to write your algorithm as functions and > classes templated on the AsyncStream, and use only what the concepts > allow. For the parts which don't fit the concept (such as performing > the SSL shutdown handshake, or doing the TCP/IP shutdown on a plain > socket) I have provided an example using the Curiously Recurring > Template Pattern for how that may be achieved. > > I have gone down the road of type-erasing streams, completion > handlers, and buffers, and I strongly advise against it due to the > performance and maintenance problems it causes. > > Thanks > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio |
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-11 21:45:23
|
Hi Vinnie It sounds like you need two separate SSL streams, each with their own > state. I don't see how changing the style of interface avoids that. > Absolutely, that's what we need and potentially more filters in between. Where the CRTP becomes unwieldy is that in the session class (which is a ssl_session already) we would have to create another ssl_session with a template argument for AsyncReadStream being ssl::stream<AsyncReadStream> where AsyncReadStream is already ssl::stream<socket>. So there's two constructions of this object already. It is compounded if due to some business logic another layer needed to be place on top. Then the session would need recreating yet again. It seems rather heavyhanded for what is really meant to be effectively a data pipeline and nothing really relating to the logic of the session. > > asio::socket and ssl::stream already have "standardised interfaces" > called AsyncReadStream and AsyncWriteStream (plus their synchronous > equivalents): > > <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/ > AsyncReadStream.html> > <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/ > AsyncWriteStream.html> > Yes but they aren't enforced through inheritance. Although inheritance itself wouldn't help with templated virtual functions which can't happen. > You are describing type-erasure of the underlying stream. I see only > downsides to this, especially with respect to performance, since > type-erasing buffer sequences usually means a dynamic allocation (e.g. > `std::vector<asio::const_buffer>`). And if you type-erase the > completion handler then you will break invariants. For example if you > type-erase a strand-wrapped completion handler, it will no longer be > dispatched to the strand. > In a sense it's type-erasure but I would wrap each object of a stream (socket, ssl::stream) in another custom object so my wrappers could have a common base and use polymorphism to achieve what they want. 
I'm also not quite sure why there would be buffer performance issues; we would just fix ourselves to one buffer type (not ideal). It was not elegant and I was not happy with it anyway, but your point about completion tokens and strands is something I didn't think of. If we were going to use this effectively custom AsyncStream, we would have to have multiple APIs, one for stranded on an executor and one not. This would either lock in our stranding execution or combinatorially blow out the interface. This is sounding much worse than it was before; I think I need to go to ASIO confessional :)

> The obvious solution is to write your algorithm as functions and
> classes templated on the AsyncStream, and use only what the concepts
> allow. For the parts which don't fit the concept (such as performing
> the SSL shutdown handshake, or doing the TCP/IP shutdown on a plain
> socket) I have provided an example using the Curiously Recurring
> Template Pattern for how that may be achieved.

This was my initial thought too, but my boss was reluctant to have multiple heavyweight object constructions the more layers we put on. Pooling these becomes a little harder too, due to the disparate types that templatisation creates.

> I have gone down the road of type-erasing streams, completion
> handlers, and buffers, and I strongly advise against it due to the
> performance and maintenance problems it causes.

Yes, it's all getting messy and has caused me some sleepless nights thinking about this. I would prefer to use the canonical methodology of ASIO, but we do have certain requirements we need to meet. However, as said earlier, the completion handler problem is a big one and I don't think my initial idea would support that at all. Also, a common base in socket/ssl::stream wouldn't help either. For now I think I'll forge ahead with the CRTP method and relay these issues to my boss.

BTW, small aside: looking through your flex example briefly, I notice you wrap everything in a strand; however, HTTP being a half-duplex protocol doesn't really need that. What's your reason there? I might have missed something in the code.

> Thanks |
From: Vinnie F. <vin...@gm...> - 2017-08-11 14:19:31
|
On Fri, Aug 11, 2017 at 2:45 AM, Aaron Koolen-Bourke <aar...@gm...> wrote: > So in effect we'd be creating and destroying up to 3 session objects per > client connection. We really wanted to avoid this if we can, especially if > the session object starts to get heavyweight. It sounds like you need two separate SSL streams, each with their own state. I don't see how changing the style of interface avoids that. > I feel that the solution is going to be custom wrappers around all the > asio::socket and ssl::stream objects so we can treat them with a > standardised interface asio::socket and ssl::stream already have "standardised interfaces" called AsyncReadStream and AsyncWriteStream (plus their synchronous equivalents): <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/AsyncReadStream.html> <http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/reference/AsyncWriteStream.html> > unfortunately this would mean we'd need to have > concrete versions of an async_read_some etc with concrete completion token > (likely std::function) and buffer types. My boss is preferring this method > for flexibility of adding whatever layer we want You are describing type-erasure of the underlying stream. I see only downsides to this, especially with respect to performance, since type-erasing buffer sequences usually means a dynamic allocation (e.g. `std::vector<asio::const_buffer>`). And if you type-erase the completion handler then you will break invariants. For example if you type-erase a strand-wrapped completion handler, it will no longer be dispatched to the strand. > I wanted to come up > with some way to avoid it so we can don't have to have this custom stuff and > concrete behaviour all through our software. The obvious solution is to write your algorithm as functions and classes templated on the AsyncStream, and use only what the concepts allow. 
For the parts which don't fit the concept (such as performing the SSL shutdown handshake, or doing the TCP/IP shutdown on a plain socket) I have provided an example using the Curiously Recurring Template Pattern for how that may be achieved. I have gone down the road of type-erasing streams, completion handlers, and buffers, and I strongly advise against it due to the performance and maintenance problems it causes. Thanks |
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-11 09:46:07
|
Hi Vinnie. I saw your implementation in Beast, and for basic upgrade it would probably suffice, but we have a requirement that we need to potentially layer double SSL (or even our own stream layer) over the SSL stream. These two things aren't necessarily knowable at the same time either, so we would be recreating the session potentially multiple times and effectively "restarting" the session, each time using a "NextStream" template argument.

In detail, on the initial read we might detect SSL and upgrade then, because the client is connecting to us via SSL (we are a proxy). That's not too bad at that point, as little has happened on the connection. Once in SSL we might get a CONNECT request from the client, and then the tunnel attempts to perform SSL over that. Hence double SSL. This second SSL is likely us inspecting the content for filtering purposes. At this point we would then need to create another SSL session with some other templated "NextStream".

So in effect we'd be creating and destroying up to 3 session objects per client connection. We really wanted to avoid this if we can, especially if the session object starts to get heavyweight.

I feel that the solution is going to be custom wrappers around all the asio::socket and ssl::stream objects so we can treat them with a standardised interface; unfortunately this would mean we'd need to have concrete versions of an async_read_some etc. with concrete completion token (likely std::function) and buffer types. My boss prefers this method for the flexibility of adding whatever layer we want, but I wanted to come up with some way to avoid it so we don't have to have this custom stuff and concrete behaviour all through our software. 
Thanks On Fri, 11 Aug 2017 at 15:26 Vinnie Falco <vin...@gm...> wrote: > On Thu, Aug 10, 2017 at 8:03 PM, Aaron Koolen-Bourke > <aar...@gm...> wrote: > > Is there any recommended solution or do we have to end up with a solution > > involving concrete types or building our own SSL layering? > > I have solved this problem by using the "Curiously Recurring Template > Pattern." There is a complete working example of an HTTP server that > works with both plain and SSL connections using the same base class > logic. The code is well commented, you should have no trouble adapting > it: > > base class: > > < > https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L215 > > > > derived classes: > > < > https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L326 > > > > < > https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L374 > > > > This code also has the "async_detect_ssl" algorithm which lets the > same port work for both SSL and plain connections. This algorithm is > fully described in the Composed Operation Tutorial section of the > Beast documentation: > > < > http://www.boost.org/doc/libs/develop/libs/beast/doc/html/beast/using_io/example_detect_ssl.html > > > > The "Flex HTTP Server" example uses the detector to choose whether to > create a plain or SSL connection: > > < > https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L460 > > > > This code is all part of Boost.Beast which implements HTTP and > WebSocket on top of Boost.Asio in C++11: > > <https://github.com/boostorg/beast/> > > Hope this helps! 
> > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > asio-users mailing list > asi...@li... > https://lists.sourceforge.net/lists/listinfo/asio-users > _______________________________________________ > Using Asio? List your project at > http://think-async.com/Asio/WhoIsUsingAsio > |
From: Vinnie F. <vin...@gm...> - 2017-08-11 03:26:26
|
On Thu, Aug 10, 2017 at 8:03 PM, Aaron Koolen-Bourke <aar...@gm...> wrote: > Is there any recommended solution or do we have to end up with a solution > involving concrete types or building our own SSL layering? I have solved this problem by using the "Curiously Recurring Template Pattern." There is a complete working example of an HTTP server that works with both plain and SSL connections using the same base class logic. The code is well commented, you should have no trouble adapting it: base class: <https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L215> derived classes: <https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L326> <https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L374> This code also has the "async_detect_ssl" algorithm which lets the same port work for both SSL and plain connections. This algorithm is fully described in the Composed Operation Tutorial section of the Beast documentation: <http://www.boost.org/doc/libs/develop/libs/beast/doc/html/beast/using_io/example_detect_ssl.html> The "Flex HTTP Server" example uses the detector to choose whether to create a plain or SSL connection: <https://github.com/boostorg/beast/blob/d337339c028f247a91a9b2e771e7b31a085496aa/example/http/server/flex/http_server_flex.cpp#L460> This code is all part of Boost.Beast which implements HTTP and WebSocket on top of Boost.Asio in C++11: <https://github.com/boostorg/beast/> Hope this helps! |
From: Aaron Koolen-B. <aar...@gm...> - 2017-08-11 03:03:44
|
Hi all. I am on the master ASIO branch that reflects the Networking TS. I was looking to add SSL/TLS to our ASIO code and the obvious first choice is to use asio::ssl::stream. However I noticed that ssl::stream and tcp::socket share no common base. This makes it difficult to build an object that has a pointer to some "stream" object that dispatches according to SSL or not. Whilst a thin wrapper can be created to emulate this it obviously fails around the member function templates for async_read_some etc due to the lack of multiple dispatch in C++. We also need the ability to layer multiple streams (SSL, plain or our own filters) into this layered system so we can't guarantee a fixed set of types. Is there any recommended solution or do we have to end up with a solution involving concrete types or building our own SSL layering? Cheers |
From: Torsten R. <Torsten@Robitzki.de> - 2017-08-09 18:22:21
|
Hello,

I am currently writing a Bluetooth LE (GATT) client abstraction library. As I need that library in conjunction with Boost.Asio, I thought it would be "smart" to use a similar design as Asio does. I have asynchronous functions to which one passes callbacks, which are called indirectly by io_service::run_one(). Currently, I use just the io_service to dispatch callbacks from the underlying OS to the thread that calls io_service::run_one(), by posting the completion handler to the io_service (io_service::post()).

Now, when I invoke an asynchronous function and an IO request is posted to the underlying OS, there is nothing pending on the io_service and thus the io_service will stop (stopped() == true). So, when I want to provide a synchronous version of the asynchronous abstraction library function, it could look like this:

    peripheral connect(
        const scan_result& peripheral_identification,
        const service_characteristics_list_t& required_characteristics,
        std::chrono::milliseconds timeout,
        boost::asio::io_service& queue )
    {
        std::unique_ptr< peripheral > result;
        bool timed_out = false;

        async_connect(
            [ &result ]( const peripheral& p ){ result.reset( new peripheral( p ) ); },
            [ &timed_out ]( const connect_error& ){ timed_out = true; },
            peripheral_identification, required_characteristics, timeout, queue );

        boost::asio::io_service::work work( queue );

        while ( !result.get() && !timed_out )
            queue.run_one();

        if ( timed_out )
            throw connection_error( "Unable to establish connection (timeout)." );

        return *result;
    }

But I guess I should better handle this in the asynchronous version of the function and keep a work object alive as long as the underlying OS IO is pending. Is there some more idiomatic Asio-ish way to adapt a platform API to Asio?

Kind regards,
Torsten |
From: Vinnie F. <vin...@gm...> - 2017-08-02 21:21:50
|
On Wed, Aug 2, 2017 at 2:01 PM, Benjamin Richner <be...@bl...> wrote:
> I think he just strand-wrapped the async_write handler function
> because it's a composed operation and he does not want multiple write
> operations to interfere with each other.

Just because you have wrapped calls to async_write in a strand doesn't mean you can issue two of them concurrently and get defined behavior. If you want to have two separate writes, you need to implement a write queue yourself.

> concurrent queues can be implemented super fast with C++11's
> compare_exchange_weak operation

There's no need to resort to this level of optimization. The cost of just one trip to the kernel to perform the async_write is orders of magnitude greater than the savings of an optimized concurrent queue.

In the case of async_read_some you are better off using a relatively large buffer in order to get as much as you can in one call, rather than reading a small amount (for example, calling boost::asio::async_read for exactly 16 bytes of some packet header). In the case of using the large buffer you will have to perform an additional memcpy to get the data where you need it, but it's still much cheaper than calling async_read_some a second time. I used this principle to speed up asynchronous Beast websocket reads by a factor of 3.

> so you only use the strand to post/dispatch calls to socket member
> functions, but NOT to wrap any handlers. Correct me if I'm wrong.

When calling any socket member function, or when providing any completion handler, you MUST use either an implicit or explicit strand - no exceptions.

* implicit strand: only one thread calling io_service::run
* explicit strand: the type io_service::strand, or your own scheduling method

The documentation makes this crystal clear:

<http://www.boost.org/doc/libs/1_46_0/doc/html/boost_asio/overview/core/strands.html>

One approach used by advanced applications is to have one io_service instance per I/O thread. 
No strand is needed, as there is only one thread calling an instance of io_service::run. You will need to distribute the load accordingly, and possibly handle the case where a socket on one io_service wants to interact with a socket on another io_service. Thanks |
From: Allen <all...@gm...> - 2017-08-02 21:07:31
|
My only thought is that !socket_.is_open() might be an error condition if you didn't explicitly close the socket, so that's the way I treat it. I don't know whether that can happen, but I think it's good practice to have your code check for unexpected conditions.

On Wed, Aug 2, 2017 at 5:01 PM, Benjamin Richner <be...@bl...> wrote:
> After Vinnie's last answer, I went and watched Christopher Kohlhoff's talk
> on asio and, together with the replies here, it helped me understand it much
> better.
>
> Video: https://www.youtube.com/watch?v=D-lTwGJRx0o
> Slides:
> https://raw.githubusercontent.com/boostcon/2011_presentations/master/mon/thinking_asynchronously.pdf
>
> He would call
>     if (!socket_.is_open()) return;
> right at the first line of each handler (slide 63). That takes care of any
> stray handlers after close().
>
> The slides contain some goodies on 4 different ways to use asio: (1) single
> threaded, (2) use a thread for long-running tasks, (3) multiple io_services,
> one thread, (4) one io_service, multiple threads. IMO it makes sense to go
> for (4) right away; if you get multithreading right you have maximum
> flexibility and can make as many workers as you want, or just one.
>
> For my case, which is (4), he recommends having a strand for each socket and
> wrapping handler functions in that strand. Additionally, he would
> post/dispatch all socket member function calls into that strand too (slides
> 89, 90, 91). I think he strand-wrapped the async_write handler function
> because it's a composed operation and he does not want multiple write
> operations to interfere with each other. You could use a message queue for
> outgoing messages to make sure that doesn't happen and avoid the strand
> wrapping (concurrent queues can be implemented very efficiently with C++11's
> compare_exchange_weak operation), so you only use the strand to
> post/dispatch calls to socket member functions, but NOT to wrap any
> handlers. Correct me if I'm wrong.
>
> I'll start adjusting my code with this information.
>
> Cheers,
> Benjamin
>
> On 01.08.2017 18:19, Allen wrote:
>> In thinking about this further, the only two calls that could happen
>> concurrently on the acceptor are close and async_accept. It would
>> make sense to prevent these from happening concurrently, so I put a
>> spin lock on them.
>>
>> On Sun, Jul 30, 2017 at 4:10 PM, Allen <all...@gm...> wrote:
>>> I think it's OK to close the acceptor asynchronously. That triggers
>>> the async_accept to call its completion handler with an error
>>> condition, so either way, upon acceptor close or upon a new
>>> connection, the async_accept handler is going to get called. Note
>>> that the async_accept also adds work to the queue that prevents the
>>> run threads from returning, so the signals.async_wait is not needed as
>>> long as the io_service has a listening socket; however, the
>>> signals.async_wait is required for a server that only initiates
>>> outgoing connections, and works just as well for a server that has a
>>> listening socket.
>>>
>>> On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> wrote:
>>>> On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote:
>>>>> ok, looking closer again...
>>>>
>>>> I admit, this sounds very convincing :)
>>>>
>>>> One last point: what if the acceptor has a pending completion at the
>>>> time the SIGTERM handler is invoked?
>>>>
>>>> If it iterates the list before the completion is processed, then a new
>>>> connection will be established after you have already closed the
>>>> existing connections.
>>>>
>>>> Thanks
>>>>
>>>> _______________________________________________
>>>> asio-users mailing list
>>>> asi...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/asio-users
>>>> _______________________________________________
>>>> Using Asio? List your project at
>>>> http://think-async.com/Asio/WhoIsUsingAsio
From: Benjamin R. <be...@bl...> - 2017-08-02 21:01:55
After Vinnie's last answer, I went and watched Christopher Kohlhoff's talk on asio and, together with the replies here, it helped me understand it much better.

Video: https://www.youtube.com/watch?v=D-lTwGJRx0o
Slides: https://raw.githubusercontent.com/boostcon/2011_presentations/master/mon/thinking_asynchronously.pdf

He would call

    if (!socket_.is_open()) return;

right at the first line of each handler (slide 63). That takes care of any stray handlers after close().

The slides contain some goodies on 4 different ways to use asio: (1) single threaded, (2) use a thread for long-running tasks, (3) multiple io_services, one thread, (4) one io_service, multiple threads. IMO it makes sense to go for (4) right away; if you get multithreading right you have maximum flexibility and can make as many workers as you want, or just one.

For my case, which is (4), he recommends having a strand for each socket and wrapping handler functions in that strand. Additionally, he would post/dispatch all socket member function calls into that strand too (slides 89, 90, 91). I think he strand-wrapped the async_write handler function because it's a composed operation and he does not want multiple write operations to interfere with each other. You could use a message queue for outgoing messages to make sure that doesn't happen and avoid the strand wrapping (concurrent queues can be implemented very efficiently with C++11's compare_exchange_weak operation), so you only use the strand to post/dispatch calls to socket member functions, but NOT to wrap any handlers. Correct me if I'm wrong.

I'll start adjusting my code with this information.

Cheers,
Benjamin

On 01.08.2017 18:19, Allen wrote:
> In thinking about this further, the only two calls that could happen
> concurrently on the acceptor are close and async_accept. It would
> make sense to prevent these from happening concurrently, so I put a
> spin lock on them.
>
> On Sun, Jul 30, 2017 at 4:10 PM, Allen <all...@gm...> wrote:
>> I think it's OK to close the acceptor asynchronously. That triggers
>> the async_accept to call its completion handler with an error
>> condition, so either way, upon acceptor close or upon a new
>> connection, the async_accept handler is going to get called. Note
>> that the async_accept also adds work to the queue that prevents the
>> run threads from returning, so the signals.async_wait is not needed as
>> long as the io_service has a listening socket; however, the
>> signals.async_wait is required for a server that only initiates
>> outgoing connections, and works just as well for a server that has a
>> listening socket.
>>
>> On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> wrote:
>>> On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote:
>>>> ok, looking closer again...
>>>
>>> I admit, this sounds very convincing :)
>>>
>>> One last point: what if the acceptor has a pending completion at the
>>> time the SIGTERM handler is invoked?
>>>
>>> If it iterates the list before the completion is processed, then a new
>>> connection will be established after you have already closed the
>>> existing connections.
>>>
>>> Thanks
From: Allen <all...@gm...> - 2017-08-01 16:19:16
In thinking about this further, the only two calls that could happen concurrently on the acceptor are close and async_accept. It would make sense to prevent these from happening concurrently, so I put a spin lock on them.

On Sun, Jul 30, 2017 at 4:10 PM, Allen <all...@gm...> wrote:
> I think it's OK to close the acceptor asynchronously. That triggers
> the async_accept to call its completion handler with an error
> condition, so either way, upon acceptor close or upon a new
> connection, the async_accept handler is going to get called. Note
> that the async_accept also adds work to the queue that prevents the
> run threads from returning, so the signals.async_wait is not needed as
> long as the io_service has a listening socket; however, the
> signals.async_wait is required for a server that only initiates
> outgoing connections, and works just as well for a server that has a
> listening socket.
>
> On Sun, Jul 30, 2017 at 1:22 PM, Vinnie Falco <vin...@gm...> wrote:
>> On Sun, Jul 30, 2017 at 10:13 AM, Allen <all...@gm...> wrote:
>>> ok, looking closer again...
>>
>> I admit, this sounds very convincing :)
>>
>> One last point: what if the acceptor has a pending completion at the
>> time the SIGTERM handler is invoked?
>>
>> If it iterates the list before the completion is processed, then a new
>> connection will be established after you have already closed the
>> existing connections.
>>
>> Thanks