From: John P. F. <jf...@ov...> - 2009-01-13 21:00:09
|
Dean Michael Berris wrote: > Hi John, > > On Tue, Jan 13, 2009 at 2:02 PM, John P. Feltz <jf...@ov...> wrote: > >> Dean Michael Berris wrote: >> >>> Hi John, >>> >>> On Mon, Jan 12, 2009 at 11:50 PM, John P. Feltz <jf...@ov...> wrote: >>> >>> >>>> This would replace get() and other applicable base post/put/del/head's. >>>> >>>> >>>> >>> Are you sure you really want to make the API's more complicated than >>> it currently stands? >>> >>> >>> >> In this case: Yes, and this is only the tip of the ice-berg. >> > > I think we may be running into some needless rigidity or complexity > here. Let me address some of the issues you've raised below. > > >>> I like the idea of doing: >>> >>> using http::request; >>> using http::response; >>> typedef client<message_tag, 1, 1, policies<persistent, caching, >>> cookies> > http_client; >>> http_client client_; >>> request request_("http://boost.org"); >>> response response_ = client_.get(request_); >>> >>> And keeping the API simple. >>> >>> >> I do not want to confuse things by trying to equate the above exactly >> with basic_client in my branch. I'll start by saying that first -from >> implementation experience, there is no such thing as a 1.1. or 1.0 >> client. At best you have an old 1.0, a revised 1.0, and 1.1. The >> specifications by which this basic_client functions off of are the only >> feasible way to identify and delineate it's behavior. This is why in my >> branch this takes the form of policies which are based off of either one >> specification, or a certain combination. >> >> > > I agree that based on experience there aren't strictly 1.0 or 1.1 > clients -- but the original intent of the library was to keep it > simple: so simple in fact that much of the nuances of the HTTP > protocol are hidden from the user. The aim is not to treat the users > as "stupid" but to give them a consistent and simple API from which > they can write their more important application logic around. > > I'm not suggesting getting rid of this simple API. rfc based policies are crucial as a foundation for defining client behavior at certain level. I'm suggesting that level to be at the implementation. A wrapper can be provided with certain predefined options, be those polices or non-policies to do what was the original design goal for the client class. > I appreciate the idea of using policies, but the idea of the policies > as far as generic programming and design goes allows for well-defined > points of specialization. Let me try an example (untested): > > template < > class Tag, > class Policies, > unsigned version_major, > unsigned version_minor > > class basic_client : mpl:inherit_linearly<Policies>::type { > private: > typedef mpl:inherit_linearly<Policies>::type base; > // ... > basic_response<Tag> get(basic_request<Tag> const & request) { > shared_ptr<socket> socket_ = base::init_connection(request); > return base::perform_request(request, "GET"); > } > } > whatever implementation of init_connection is available to the > available policies. Of course this assumes that the policies used are > orthogonal. Design-wise it would be nice to enforce at compile-time > the concepts of the policies valid for the basic_client, but that's > for later. ;) > Of the necessary concerns, I see the need to provide one for supplying the resolved, the sockets, and logic governing both. These seem nonorthogonal, and so facading is more appropriate for right now. 
Additionally, I don't write code through the lens of a policy based strategy until I've seen what the implementation requires. > The lack of an existing HTTP Client that's "really 1.0" or "really > 1.1" is not an excuse to come up with one. ;-) > I'm sure users wishing to optimize requests to legacy servers or servers which do not follow a complete 1.1 spec will find that rather frustrating. Especially since it something so simple to design for (see branch). > So what I'm saying really is, we have a chance (since this is > basically from scratch anyway) to be able to do something that others > have avoided: writing a simple HTTP client which works when you say > it's really 1.0, or that it's really 1.1 -- in the simplest way > possible. The reason they're templates are that if you don't like how > they're implemented, then you can specialize them to your liking. :-) > This is a fine goal, however assuming that the simplest design will address what I believe to be all the complex concerns is flawed. Let us solve the complex issues _first_, then simplify. There is nothing wrong with the basic_clients API in principle, though I think this should address complex users needs first and so delegation of that class to single request processing + a wrapper incorporating certain policies with predefined behavior is the best course for now. >> Speaking of ice-bergs, while I do appreciate the original intentions of >> simplicity behind the client's API, due to expanding implementation >> concerns and overlooked error handling issues, this view might warrant >> changing. Consider the case where prior-cached connection fails: a >> connection which was retrieved from a supplier external to the >> client-with the supplier being possibly shared by other clients. This >> poses a problem for the user. If the connection was retrieved and used >> as part of a forwarded request several levels deep, the resulting error >> isn't going to be something easily identifiable or managed. While this >> is perhaps a case for encapsulating the client completely, it all >> depends on how oblivious we expect users of this basic_client to be. At >> the moment, I had planned in the next branch release that auto >> forwarding and dealing with persistent connections to be something >> removed from the basic_client. Instead a optional wrapper would perform >> the "driving" of a forwarded request and additionally encapsulate the >> connection/resolved cache. This would take the shape of your previous >> source example and I don't see this as a significant change. If this >> could be done at the basic_client level through a policy configuration >> than I would support that as well, however for _right now_, I don't see >> an easy to way to do that. >> > > Actually when it comes to error-handling efficiency (i.e. avoiding > exceptions which I think are perfectly reasonable to have), I would > have been happy with something like this: > > template <...> > class basic_client { > basic_response<Tag> get(basic_request<Tag> const & request); > tuple<basic_response<Tag>, error_code> get(basic_request<Tag> const > & request, no_throw_t(*)()); > } > > This way, if you call 'get' with the nothrow function pointer > argument, then you've got yourself covered -- and instead of a > throwing implementation, the client can return a pair containing the > (possibly empty) basic_response<Tag> and an (possibly > default-constructed) error_code. > I don't see where function pointers belong in a client which intends to remain simple. 
I'm also against a no-throw parameter because it is less explicit. If an exception is thrown then the user knows there's an issue and a valid response was not returned. If a response is returned either way, than it is easier for the user to miss. > About expecting the users to be oblivious, yes this is part of the > point -- I envisioned not making the user worry about recoverable > errors from within the client. There's even a way to make this happen > without making the implementation too complicated. I can say I'm > working on my own refactorings, but I'm doing it as part of Friendster > at the moment so I need to clear them first before releasing as open > source. > > It's more or less what I'd call a "just work" API -- and in case of > unrecoverable failures, throw/return an error. > > >>>> The deviations would be based off two criteria: >>>> -The specification(ie: rfc1945) by which the request_policy processes >>>> the request (it's coupled with policy) >>>> >>>> >>> The get/put/head/delete/post(...) functions don't have to be too >>> complicated. If it's already part of the policies chosen, we can have >>> a default deviation as part of the signature. At most we can provide >>> an overload to the existing API instead of replacing the simple API >>> we've already been presenting. >>> >>> The goal is really simplicity more than really sticking hard to standards. >>> >>> >>> >> A default is fine. >> >>>> -In cases that, while still allowing processing of a get/post() etc, >>>> would do something counter to what the user expects from the interface, >>>> such as a unmatched http version or persistence. >>>> >>>> >>>> >>> Actually, if you notice HTTP 1.1 is meant to be backwards compatible >>> to HTTP 1.0. At best, you just want to make the version information >>> available in the response and let the users deal with a different HTTP >>> version in the response rather than making the library needlessly >>> complicated in terms of API. >>> >>> >> If the user receives a version that is out of spec -in many cases they >> have a strong reason not to complete a request. This is important for >> both efficiency and compliance. >> > > Actually, there's nothing in the HTTP 1.0 spec that says a response > that's HTTP 1.x where x != 0 is an invalid response. There's also > nothing in the spec that says that HTTP 1.1 requests cannot be > completed when the HTTP 1.0 response is received. > These are cases that I'm not concerned with. > There are however some servers which will not accept certain HTTP > versions -- sometimes you can write an HTTP Server that will send an > HTTP error 505 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.6) > when it doesn't support the version you're trying to use as far as > HTTP is concerned. I don't know of any servers though which will not > honor HTTP 1.0 client requests and respond accordingly. > > BTW, if you notice, HTTP 1.1 enables some new things that were not > possible with HTTP 1.0 (or were technically unsupported) like > streaming encodings and persistent connections / request pipelining on > both the client and server side. The specification is clear about what > the client should and should not do when certain things happen in > these persistent connections and pipelined request scenarios. > > So I don't see how strict compliance cannot be possible with the > current API and with the current design of the client. 
> > >>> I can understand the need for the asynchronous client to be a little >>> bit more involved (like taking a reference to an io_service at >>> construction time and allowing the io_service object to be run >>> externally of the HTTP client) and taking functions that deal with raw >>> buffers or even functions that deal with already-crafted >>> http::response objects. However even in these situations, let's try to >>> be consistent with the theme of simplicity of an API. >>> >>> >> I have no comment regarding a-sync usage as I've not looked into that >> issue in depth, I've only tried to make existing policy functions as >> re-usable as possible for that case. >> > > Which I think is the wrong approach if you ask me. > > By existing I was referring to things like stateless items such as append_* in the branch. If these can't be used asynchronously I would be curious as to why. I've expended no other effort in regards to this. > The idea (and the process) of generic programming will usually lead > you from specific to generic. My best advice (?) would be to implement > the specific first, write the tests/specifications (if you're doing > Behavior Driven Development) and then refactor mercilessly in the end. > > The aim of the client is to be generic as well as compliant up to a > point. HTTP 1.0 doesn't support persistent connections, although > extensions were made with 'Connection: Keep-Alive' and the > 'Keep-Alive: ...' headers -- these can be supported, but they're going > to be different specializations of the basic_client. > > >>> I particularly don't like the idea that I need to set up deviations >>> when I've already chosen which HTTP version I want to use -- and that >>> deviations should complicate my usage of an HTTP client when I only >>> usually want to get the body of the response instead of sticking hard >>> to the standard. ;) >>> >>> >> A default is fine. >> >>> If you meant that these were to be overloads instead of replacements >>> (as exposed publicly through the basic_client<> specializations) then >>> I wouldn't have any objections to them. At this time, I need to see >>> the code closer to see what you intend to do with it. :) >>> >>> HTH >>> >>> >>> >> Derived overloads might work, though you run into cases of un-orthogonal >> policies (at least I have with this). That would also require >> specialization and/or sub-classing of a deviation/non-deviation rfc >> policies in my branch and I would prefer to keep the current set for now. >> >> > > I think if you really want to be able to decompose the client into > multiple orthogonal policies, you might want to look into introducing > more specific extension points in the code instead of being hampered > by earlier (internal) design decisions. ;) > > I am growing tired of discussing this and would prefer to just get on with implementing _something_ in the branch. After tomorrow I'll be going back to my day job and time for working on this will be limited and I don't want to spend that time debating design decisions in which there can be very little compromise. |
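As an illustration of the split John proposes above -- a bare client that performs exactly one request, with an optional wrapper that owns the connection cache and does the "driving" (redirect/forwarding logic) -- here is a minimal, untested sketch. Every name in it (single_request_client, client_facade, the string stand-ins for connections and responses) is invented for this example and does not come from the branch or from cpp-netlib:

#include <map>
#include <stdexcept>
#include <string>

// Stand-in for a stripped-down basic_client: it performs exactly one
// exchange over the connection it is handed and never redirects.
struct single_request_client {
    std::string get(const std::string& url, const std::string& connection) {
        // A real implementation would write the request on the connection
        // and parse the response; here we just fake a body.
        return "body of " + url + " fetched via " + connection;
    }
};

// Optional wrapper layering the "driving" concerns on top: it caches one
// connection per host and follows redirects up to a limit.
class client_facade {
public:
    explicit client_facade(unsigned max_redirects = 5)
        : max_redirects_(max_redirects) {}

    std::string get(std::string url) {
        for (unsigned hop = 0; hop <= max_redirects_; ++hop) {
            std::string& conn = connections_[host_of(url)];
            if (conn.empty()) conn = "connection-to-" + host_of(url);
            std::string response = core_.get(url, conn);
            if (!is_redirect(response)) return response;
            url = redirect_target(response);
        }
        throw std::runtime_error("too many redirects");
    }

private:
    static std::string host_of(const std::string& url) {
        std::string::size_type start = url.find("//");
        start = (start == std::string::npos) ? 0 : start + 2;
        return url.substr(start, url.find('/', start) - start);
    }
    // Stubs: a real wrapper would inspect the status line and Location header.
    static bool is_redirect(const std::string&) { return false; }
    static std::string redirect_target(const std::string& r) { return r; }

    single_request_client core_;
    std::map<std::string, std::string> connections_; // host -> cached connection
    unsigned max_redirects_;
};

int main() {
    client_facade client;
    return client.get("http://example.com/").empty() ? 1 : 0;
}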
From: Dean M. B. <mik...@gm...> - 2009-01-13 05:38:28
|
Hi John, On Tue, Jan 13, 2009 at 2:02 PM, John P. Feltz <jf...@ov...> wrote: > Dean Michael Berris wrote: >> Hi John, >> >> On Mon, Jan 12, 2009 at 11:50 PM, John P. Feltz <jf...@ov...> wrote: >> >>> This would replace get() and other applicable base post/put/del/head's. >>> >>> >> >> Are you sure you really want to make the API's more complicated than >> it currently stands? >> >> > In this case: Yes, and this is only the tip of the ice-berg. I think we may be running into some needless rigidity or complexity here. Let me address some of the issues you've raised below. >> I like the idea of doing: >> >> using http::request; >> using http::response; >> typedef client<message_tag, 1, 1, policies<persistent, caching, >> cookies> > http_client; >> http_client client_; >> request request_("http://boost.org"); >> response response_ = client_.get(request_); >> >> And keeping the API simple. >> > I do not want to confuse things by trying to equate the above exactly > with basic_client in my branch. I'll start by saying that first -from > implementation experience, there is no such thing as a 1.1. or 1.0 > client. At best you have an old 1.0, a revised 1.0, and 1.1. The > specifications by which this basic_client functions off of are the only > feasible way to identify and delineate it's behavior. This is why in my > branch this takes the form of policies which are based off of either one > specification, or a certain combination. > I agree that based on experience there aren't strictly 1.0 or 1.1 clients -- but the original intent of the library was to keep it simple: so simple in fact that much of the nuances of the HTTP protocol are hidden from the user. The aim is not to treat the users as "stupid" but to give them a consistent and simple API from which they can write their more important application logic around. I appreciate the idea of using policies, but the idea of the policies as far as generic programming and design goes allows for well-defined points of specialization. Let me try an example (untested): template < class Tag, class Policies, unsigned version_major, unsigned version_minor > class basic_client : mpl:inherit_linearly<Policies>::type { private: typedef mpl:inherit_linearly<Policies>::type base; // ... basic_response<Tag> get(basic_request<Tag> const & request) { shared_ptr<socket> socket_ = base::init_connection(request); return base::perform_request(request, "GET"); } } Here the example is contrived so that the idea simply is to defer whatever implementation of init_connection is available to the available policies. Of course this assumes that the policies used are orthogonal. Design-wise it would be nice to enforce at compile-time the concepts of the policies valid for the basic_client, but that's for later. ;) The lack of an existing HTTP Client that's "really 1.0" or "really 1.1" is not an excuse to come up with one. ;-) So what I'm saying really is, we have a chance (since this is basically from scratch anyway) to be able to do something that others have avoided: writing a simple HTTP client which works when you say it's really 1.0, or that it's really 1.1 -- in the simplest way possible. The reason they're templates are that if you don't like how they're implemented, then you can specialize them to your liking. :-) > Speaking of ice-bergs, while I do appreciate the original intentions of > simplicity behind the client's API, due to expanding implementation > concerns and overlooked error handling issues, this view might warrant > changing. 
Consider the case where prior-cached connection fails: a > connection which was retrieved from a supplier external to the > client-with the supplier being possibly shared by other clients. This > poses a problem for the user. If the connection was retrieved and used > as part of a forwarded request several levels deep, the resulting error > isn't going to be something easily identifiable or managed. While this > is perhaps a case for encapsulating the client completely, it all > depends on how oblivious we expect users of this basic_client to be. At > the moment, I had planned in the next branch release that auto > forwarding and dealing with persistent connections to be something > removed from the basic_client. Instead a optional wrapper would perform > the "driving" of a forwarded request and additionally encapsulate the > connection/resolved cache. This would take the shape of your previous > source example and I don't see this as a significant change. If this > could be done at the basic_client level through a policy configuration > than I would support that as well, however for _right now_, I don't see > an easy to way to do that. Actually when it comes to error-handling efficiency (i.e. avoiding exceptions which I think are perfectly reasonable to have), I would have been happy with something like this: template <...> class basic_client { basic_response<Tag> get(basic_request<Tag> const & request); tuple<basic_response<Tag>, error_code> get(basic_request<Tag> const & request, no_throw_t(*)()); } This way, if you call 'get' with the nothrow function pointer argument, then you've got yourself covered -- and instead of a throwing implementation, the client can return a pair containing the (possibly empty) basic_response<Tag> and an (possibly default-constructed) error_code. About expecting the users to be oblivious, yes this is part of the point -- I envisioned not making the user worry about recoverable errors from within the client. There's even a way to make this happen without making the implementation too complicated. I can say I'm working on my own refactorings, but I'm doing it as part of Friendster at the moment so I need to clear them first before releasing as open source. It's more or less what I'd call a "just work" API -- and in case of unrecoverable failures, throw/return an error. >>> The deviations would be based off two criteria: >>> -The specification(ie: rfc1945) by which the request_policy processes >>> the request (it's coupled with policy) >>> >> >> The get/put/head/delete/post(...) functions don't have to be too >> complicated. If it's already part of the policies chosen, we can have >> a default deviation as part of the signature. At most we can provide >> an overload to the existing API instead of replacing the simple API >> we've already been presenting. >> >> The goal is really simplicity more than really sticking hard to standards. >> >> > A default is fine. >>> -In cases that, while still allowing processing of a get/post() etc, >>> would do something counter to what the user expects from the interface, >>> such as a unmatched http version or persistence. >>> >>> >> >> Actually, if you notice HTTP 1.1 is meant to be backwards compatible >> to HTTP 1.0. At best, you just want to make the version information >> available in the response and let the users deal with a different HTTP >> version in the response rather than making the library needlessly >> complicated in terms of API. 
>> > If the user receives a version that is out of spec -in many cases they > have a strong reason not to complete a request. This is important for > both efficiency and compliance. Actually, there's nothing in the HTTP 1.0 spec that says a response that's HTTP 1.x where x != 0 is an invalid response. There's also nothing in the spec that says that HTTP 1.1 requests cannot be completed when the HTTP 1.0 response is received. There are however some servers which will not accept certain HTTP versions -- sometimes you can write an HTTP Server that will send an HTTP error 505 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.6) when it doesn't support the version you're trying to use as far as HTTP is concerned. I don't know of any servers though which will not honor HTTP 1.0 client requests and respond accordingly. BTW, if you notice, HTTP 1.1 enables some new things that were not possible with HTTP 1.0 (or were technically unsupported) like streaming encodings and persistent connections / request pipelining on both the client and server side. The specification is clear about what the client should and should not do when certain things happen in these persistent connections and pipelined request scenarios. So I don't see how strict compliance cannot be possible with the current API and with the current design of the client. >> I can understand the need for the asynchronous client to be a little >> bit more involved (like taking a reference to an io_service at >> construction time and allowing the io_service object to be run >> externally of the HTTP client) and taking functions that deal with raw >> buffers or even functions that deal with already-crafted >> http::response objects. However even in these situations, let's try to >> be consistent with the theme of simplicity of an API. >> > I have no comment regarding a-sync usage as I've not looked into that > issue in depth, I've only tried to make existing policy functions as > re-usable as possible for that case. Which I think is the wrong approach if you ask me. The idea (and the process) of generic programming will usually lead you from specific to generic. My best advice (?) would be to implement the specific first, write the tests/specifications (if you're doing Behavior Driven Development) and then refactor mercilessly in the end. The aim of the client is to be generic as well as compliant up to a point. HTTP 1.0 doesn't support persistent connections, although extensions were made with 'Connection: Keep-Alive' and the 'Keep-Alive: ...' headers -- these can be supported, but they're going to be different specializations of the basic_client. >> I particularly don't like the idea that I need to set up deviations >> when I've already chosen which HTTP version I want to use -- and that >> deviations should complicate my usage of an HTTP client when I only >> usually want to get the body of the response instead of sticking hard >> to the standard. ;) >> > A default is fine. >> If you meant that these were to be overloads instead of replacements >> (as exposed publicly through the basic_client<> specializations) then >> I wouldn't have any objections to them. At this time, I need to see >> the code closer to see what you intend to do with it. :) >> >> HTH >> >> > Derived overloads might work, though you run into cases of un-orthogonal > policies (at least I have with this). 
That would also require > specialization and/or sub-classing of a deviation/non-deviation rfc > policies in my branch and I would prefer to keep the current set for now. > I think if you really want to be able to decompose the client into multiple orthogonal policies, you might want to look into introducing more specific extension points in the code instead of being hampered by earlier (internal) design decisions. ;) -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
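The example quoted above is explicitly marked untested, and as written it would not compile: mpl:inherit_linearly needs a second colon, and mpl::inherit_linearly also wants an mpl::inherit node. Here is a self-contained sketch of the same policy-linearization idea, with invented placeholder policies (plain_connection_policy, simple_request_policy) standing in for whatever real connection and request policies would look like:

#include <boost/mpl/inherit.hpp>
#include <boost/mpl/inherit_linearly.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/mpl/vector.hpp>
#include <iostream>
#include <string>

// Two illustrative, orthogonal policies -- names invented for this sketch.
struct plain_connection_policy {
    std::string init_connection(const std::string& url) {
        return "plain connection to " + url;
    }
};

struct simple_request_policy {
    std::string perform_request(const std::string& url, const std::string& method) {
        return method + " " + url;
    }
};

// Linearize the policy sequence into a single base class and defer each
// operation to whichever policy provides it.
template <class Policies>
class basic_client
    : public boost::mpl::inherit_linearly<
          Policies,
          boost::mpl::inherit<boost::mpl::_1, boost::mpl::_2>
      >::type
{
public:
    std::string get(const std::string& url) {
        this->init_connection(url);               // from the connection policy
        return this->perform_request(url, "GET"); // from the request policy
    }
};

int main() {
    typedef boost::mpl::vector<plain_connection_policy, simple_request_policy> policies;
    basic_client<policies> client;
    std::cout << client.get("http://boost.org/") << std::endl;
}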
From: John P. F. <jf...@ov...> - 2009-01-13 04:59:14
|
Dean Michael Berris wrote: > Hi John, > > On Mon, Jan 12, 2009 at 11:50 PM, John P. Feltz <jf...@ov...> wrote: > >> This would replace get() and other applicable base post/put/del/head's. >> >> > > Are you sure you really want to make the API's more complicated than > it currently stands? > > In this case: Yes, and this is only the tip of the ice-berg. > I like the idea of doing: > > using http::request; > using http::response; > typedef client<message_tag, 1, 1, policies<persistent, caching, > cookies> > http_client; > http_client client_; > request request_("http://boost.org"); > response response_ = client_.get(request_); > > And keeping the API simple. > I do not want to confuse things by trying to equate the above exactly with basic_client in my branch. I'll start by saying that first -from implementation experience, there is no such thing as a 1.1. or 1.0 client. At best you have an old 1.0, a revised 1.0, and 1.1. The specifications by which this basic_client functions off of are the only feasible way to identify and delineate it's behavior. This is why in my branch this takes the form of policies which are based off of either one specification, or a certain combination. Speaking of ice-bergs, while I do appreciate the original intentions of simplicity behind the client's API, due to expanding implementation concerns and overlooked error handling issues, this view might warrant changing. Consider the case where prior-cached connection fails: a connection which was retrieved from a supplier external to the client-with the supplier being possibly shared by other clients. This poses a problem for the user. If the connection was retrieved and used as part of a forwarded request several levels deep, the resulting error isn't going to be something easily identifiable or managed. While this is perhaps a case for encapsulating the client completely, it all depends on how oblivious we expect users of this basic_client to be. At the moment, I had planned in the next branch release that auto forwarding and dealing with persistent connections to be something removed from the basic_client. Instead a optional wrapper would perform the "driving" of a forwarded request and additionally encapsulate the connection/resolved cache. This would take the shape of your previous source example and I don't see this as a significant change. If this could be done at the basic_client level through a policy configuration than I would support that as well, however for _right now_, I don't see an easy to way to do that. >> The deviations would be based off two criteria: >> -The specification(ie: rfc1945) by which the request_policy processes >> the request (it's coupled with policy) >> > > The get/put/head/delete/post(...) functions don't have to be too > complicated. If it's already part of the policies chosen, we can have > a default deviation as part of the signature. At most we can provide > an overload to the existing API instead of replacing the simple API > we've already been presenting. > > The goal is really simplicity more than really sticking hard to standards. > > A default is fine. >> -In cases that, while still allowing processing of a get/post() etc, >> would do something counter to what the user expects from the interface, >> such as a unmatched http version or persistence. >> >> > > Actually, if you notice HTTP 1.1 is meant to be backwards compatible > to HTTP 1.0. 
At best, you just want to make the version information > available in the response and let the users deal with a different HTTP > version in the response rather than making the library needlessly > complicated in terms of API. > If the user receives a version that is out of spec -in many cases they have a strong reason not to complete a request. This is important for both efficiency and compliance. > I can understand the need for the asynchronous client to be a little > bit more involved (like taking a reference to an io_service at > construction time and allowing the io_service object to be run > externally of the HTTP client) and taking functions that deal with raw > buffers or even functions that deal with already-crafted > http::response objects. However even in these situations, let's try to > be consistent with the theme of simplicity of an API. > I have no comment regarding a-sync usage as I've not looked into that issue in depth, I've only tried to make existing policy functions as re-usable as possible for that case. > I particularly don't like the idea that I need to set up deviations > when I've already chosen which HTTP version I want to use -- and that > deviations should complicate my usage of an HTTP client when I only > usually want to get the body of the response instead of sticking hard > to the standard. ;) > A default is fine. > If you meant that these were to be overloads instead of replacements > (as exposed publicly through the basic_client<> specializations) then > I wouldn't have any objections to them. At this time, I need to see > the code closer to see what you intend to do with it. :) > > HTH > > Derived overloads might work, though you run into cases of un-orthogonal policies (at least I have with this). That would also require specialization and/or sub-classing of a deviation/non-deviation rfc policies in my branch and I would prefer to keep the current set for now. John |
From: Dean M. B. <mik...@gm...> - 2009-01-12 23:40:03
|
Hi John, On Mon, Jan 12, 2009 at 11:50 PM, John P. Feltz <jf...@ov...> wrote: > This would replace get() and other applicable base post/put/del/head's. > Are you sure you really want to make the API's more complicated than it currently stands? I like the idea of doing: using http::request; using http::response; typedef client<message_tag, 1, 1, policies<persistent, caching, cookies> > http_client; http_client client_; request request_("http://boost.org"); response response_ = client_.get(request_); And keeping the API simple. > The deviations would be based off two criteria: > -The specification(ie: rfc1945) by which the request_policy processes > the request (it's coupled with policy) The get/put/head/delete/post(...) functions don't have to be too complicated. If it's already part of the policies chosen, we can have a default deviation as part of the signature. At most we can provide an overload to the existing API instead of replacing the simple API we've already been presenting. The goal is really simplicity more than really sticking hard to standards. > -In cases that, while still allowing processing of a get/post() etc, > would do something counter to what the user expects from the interface, > such as a unmatched http version or persistence. > Actually, if you notice HTTP 1.1 is meant to be backwards compatible to HTTP 1.0. At best, you just want to make the version information available in the response and let the users deal with a different HTTP version in the response rather than making the library needlessly complicated in terms of API. I can understand the need for the asynchronous client to be a little bit more involved (like taking a reference to an io_service at construction time and allowing the io_service object to be run externally of the HTTP client) and taking functions that deal with raw buffers or even functions that deal with already-crafted http::response objects. However even in these situations, let's try to be consistent with the theme of simplicity of an API. I particularly don't like the idea that I need to set up deviations when I've already chosen which HTTP version I want to use -- and that deviations should complicate my usage of an HTTP client when I only usually want to get the body of the response instead of sticking hard to the standard. ;) If you meant that these were to be overloads instead of replacements (as exposed publicly through the basic_client<> specializations) then I wouldn't have any objections to them. At this time, I need to see the code closer to see what you intend to do with it. :) HTH -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
From: John P. F. <jf...@ov...> - 2009-01-12 14:47:02
|
This would replace get() and other applicable base post/put/del/head's. The deviations would be based off two criteria: -The specification(ie: rfc1945) by which the request_policy processes the request (it's coupled with policy) -In cases that, while still allowing processing of a get/post() etc, would do something counter to what the user expects from the interface, such as a unmatched http version or persistence. John Glyn Matthews wrote: > Hello John, > > > Firstly, I have noticed a lot of activity in subversion, but I've been so > far unable to take a look at your work in depth. > > 2009/1/10 John P. Feltz <jf...@ov...> > > >> My solution is to differ that choice to the user on a per request basis. >> This would be used to express certain server deviations from a protocol >> standard: >> > <snip /> > >> This makes a throw depend on a deviation. In any case, errors up to that >> point are still pushed to the stack which, with a response fragment, >> would accompany the exception. >> >> > > I think this is good, it gives the client more information about server > deviations and gives the user more control on how to deal with them. How do > you decide what flags to use in the response_deviation struct? Are you > proposing this as an overload of the get method or a replacement? > > > Glyn > > |
From: Glyn M. <gly...@gm...> - 2009-01-11 18:38:25
|
Hello John, Firstly, I have noticed a lot of activity in subversion, but I've been so far unable to take a look at your work in depth. 2009/1/10 John P. Feltz <jf...@ov...> > > My solution is to differ that choice to the user on a per request basis. > This would be used to express certain server deviations from a protocol > standard: > > <snip /> > > > This makes a throw depend on a deviation. In any case, errors up to that > point are still pushed to the stack which, with a response fragment, > would accompany the exception. > I think this is good, it gives the client more information about server deviations and gives the user more control over how to deal with them. How do you decide what flags to use in the response_deviation struct? Are you proposing this as an overload of the get method or a replacement? Glyn |
From: John P. F. <jf...@ov...> - 2009-01-09 23:41:08
|
At the moment this is a basic_client::get() request from my branch, which is working off the trunk interface:

response const get(
    basic_request<tag> const & request_,
    const connection_type connection = persistent )
{
    return request_policy::sync_process(
        request_, "GET", true, follow_redirect_,
        connection, *connection_supplier_);
};

Concerning that, I'm not satisfied with the way errors are handled (or lack thereof). Some errors can be coped with and still complete the request; others can't. The question is how to handle the former. My solution is to defer that choice to the user on a per-request basis. This would be used to express certain server deviations from a protocol standard, e.g.:

// rfc1945_extended is a request_policy based off the HTTP 1.0 spec
struct rfc1945_extended::response_deviation {
    // e.g. the request would still continue even if the server didn't
    // acknowledge persistence or non-persistence, or provided a
    // Connection: header not in accordance with the specification
    bool allow_incorrect_persistence;
    bool allow_version_mismatch;
    bool allow_missing_whatever_header;
    ...
};
...
response const get(const response_deviation&, const std::stack<boost::error_code>& ...)

This makes a throw depend on a deviation. In any case, errors up to that point are still pushed to the stack which, with a response fragment, would accompany the exception. Dean has also personally suggested an interface similar to this to toggle throwing on/off, which I have modified for purposes of this discussion:

std::stack<boost::system::error_code> errors;
http::response response;
http::rfc1945_extended::response_deviation deviation(..);
tie(response, errors) = client.get(request, deviation, http::nothrow);

Questions and comments welcome. John |
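For readers following along, here is a compilable toy version of the two flavours under discussion: a throwing get() plus a non-throwing overload selected by a nothrow tag that hands back the response together with the accumulated error stack. All of the types below (request, response, response_deviation, client) are simplified stand-ins written for this sketch, not the branch's actual classes, and a real build would link against Boost.System:

#include <boost/system/error_code.hpp>
#include <boost/tuple/tuple.hpp>
#include <stack>
#include <stdexcept>
#include <string>

struct request  { std::string url; };
struct response { std::string body; };

// Which deviations from the spec the caller is willing to tolerate.
struct response_deviation {
    bool allow_incorrect_persistence;
    bool allow_version_mismatch;
    response_deviation()
        : allow_incorrect_persistence(false), allow_version_mismatch(false) {}
};

struct nothrow_t {};
const nothrow_t nothrow = nothrow_t();

class client {
public:
    // Throwing flavour: untolerated deviations become an exception.
    response get(const request& req, const response_deviation& dev) {
        std::stack<boost::system::error_code> errors;
        response r = perform(req, dev, errors);
        if (!errors.empty())
            throw std::runtime_error("request to " + req.url + " failed");
        return r;
    }

    // Non-throwing flavour: return the (possibly partial) response plus the
    // errors pushed while processing it, and let the caller decide.
    boost::tuple<response, std::stack<boost::system::error_code> >
    get(const request& req, const response_deviation& dev, nothrow_t) {
        std::stack<boost::system::error_code> errors;
        response r = perform(req, dev, errors);
        return boost::make_tuple(r, errors);
    }

private:
    response perform(const request& req, const response_deviation&,
                     std::stack<boost::system::error_code>&) {
        response r;
        r.body = "body of " + req.url; // placeholder for the real exchange
        return r;
    }
};

int main() {
    client c;
    request req;
    req.url = "http://boost.org/";
    response r;
    std::stack<boost::system::error_code> errors;
    boost::tie(r, errors) = c.get(req, response_deviation(), nothrow);
    return r.body.empty() ? 1 : 0;
}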
From: Glyn M. <gly...@gm...> - 2008-12-20 19:15:00
|
2008/12/20 John P. Feltz <jf...@ov...> > >> Respectfully request inclusion in project in order to deposit http > >> client changes/additions for review. > >> > >> Thanks, > >> John > > Yes, john_feltz. This is a refactoring of the synchronous client which > adds 1.1, in addition to support for 1.1 and 1.0 persistence. It might > warrant a branch as it has not been fully unit tested nor approved. > OK, you now have commit access to the subversion repository. Go ahead and make a new branch and check in your changes. Glyn |
From: John P. F. <jf...@ov...> - 2008-12-20 14:18:36
|
Glyn Matthews wrote: > Hi John, > > > Do you have a sourceforge user account? I'll need this in order to add you > as a member. > > Also, can you describe a little about what you've done with the client? > > Thanks, > Glyn > > > > 2008/12/18 John P. Feltz <jf...@ov...> > > >> Respectfully request inclusion in project in order to deposit http >> client changes/additions for review. >> >> Thanks, >> John >> >> >> Yes, john_feltz. This is a refactoring of the synchronous client which adds 1.1, in addition to support for 1.1 and 1.0 persistence. It might warrant a branch as it has not been fully unit tested nor approved. John |
From: Glyn M. <gly...@gm...> - 2008-12-19 10:51:41
|
Hi John, Do you have a sourceforge user account? I'll need this in order to add you as a member. Also, can you describe a little about what you've done with the client? Thanks, Glyn 2008/12/18 John P. Feltz <jf...@ov...> > Respectfully request inclusion in project in order to deposit http > client changes/additions for review. > > Thanks, > John |
From: John P. F. <jf...@ov...> - 2008-12-18 18:20:27
|
Respectfully request inclusion in project in order to deposit http client changes/additions for review. Thanks, John |
From: Glyn M. <gly...@gm...> - 2008-12-10 07:58:08
|
Hi, 2008/12/10 Dean Michael Berris <mik...@gm...> > Thanks for this Cheng! > > Glyn, can we commit this to trunk? This looks fine, yes. Thanks, G |
From: Dean M. B. <mik...@gm...> - 2008-12-10 02:06:38
|
Thanks for this Cheng! Glyn, can we commit this to trunk?

---------- Forwarded message ----------
From: 连城 <rhy...@gm...>
Date: Tue, Dec 9, 2008 at 8:27 PM
Subject: Failed to compile SVN trunk code under cygwin
To: "Dean Michael C. Berris" <cpp...@li...>

Hi, all

I just checked out the trunk code and played around. I found the trunk code cannot be compiled under Cygwin with Boost 1.36. Here I attached a working patch to trunk/libs/network/test/Jamfile.v2, wish this may help :-)

Regards
Cheng

Index: libs/network/test/Jamfile.v2
===================================================================
--- libs/network/test/Jamfile.v2 (revision 116)
+++ libs/network/test/Jamfile.v2 (working copy)
@@ -4,6 +4,14 @@
 # (See accompanying file LICENSE_1_0.txt or copy at
 # http://www.boost.org/LICENSE_1_0.txt)

+import os ;
+
+if [ os.name ] = CYGWIN
+{
+  lib ws2_32 ;
+  lib mswsock ;
+}
+
 project network_test
   : requirements
     <include>../../../
@@ -14,6 +22,10 @@
     <source>/boost//thread
     <source>/boost//filesystem
     <toolset>gcc:<linkflags>-lpthread
+    <os>cygwin,<toolset>gcc:<define>_WIN32_WINNT=0x0501
+    <os>cygwin,<toolset>gcc:<define>__USE_W32_SOCKETS
+    <os>cygwin,<toolset>gcc:<library>ws2_32
+    <os>cygwin,<toolset>gcc:<library>mswsock
     <toolset>msvc:<define>BOOST_ASIO_NO_WIN32_LEAN_AND_MEAN
     <toolset>msvc:<define>WIN32_LEAN_AND_MEAN
     <toolset>msvc:<define>_SCL_SECURE_NO_WARNINGS

-- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
From: Dean M. B. <mik...@gm...> - 2008-11-04 15:15:21
|
On Tue, Nov 4, 2008 at 11:08 PM, Rodrigo Madera <rod...@gm...> wrote: >> Are you talking about Boost.Array? Because the last time I checked, >> Boost.Array is statically-sized and is not as flexible as an >> std::string. IMO, using something from the STL is "better" because you >> don't need to rely on Boost being there to use the library. I won't >> even be surprised if it was possible to package the header-only Boost >> libs with cpp-netlib as a "standalone" download (with the help perhaps >> of bcp). Of course if we're using Boost.Asio we rely on Boost.System >> -- but other than that I'm confident there shouldn't be a lot of other >> libraries cpp-netlib would rely on (of course aside from the STL). > > I'm curious, why std::string and not std::vector<char>? > Because std::string and std::vector<char> are effectively the same. Aside from that std::string even has nice substring capabilities and is just very easy to use. Imagine trying to optimize copying of std::vector<char> around -- when std::string has the capability to do copy-on-write. In the next version of the standard though I think will invalidate most of the implementations of std::string with concurrency guarantees, but even then std::string and std::vector<char> will essentially stay mostly equivalent (and std::string makes more semantic sense if you're dealing with a string of characters). -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
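A small self-contained check of the claim that std::string copes with binary payloads: as long as you construct and append through the pointer/length overloads, embedded NUL bytes and high-bit bytes are preserved like any other byte. Nothing here is cpp-netlib code, it is plain standard-library usage:

#include <cassert>
#include <string>

int main() {
    // Four raw bytes, including an embedded NUL and a byte above 0x7f.
    const char raw[] = { '\x89', 'P', '\0', '\xff' };

    // The pointer/length constructor copies all four bytes; the NUL is not
    // treated as a terminator.
    std::string blob(raw, sizeof(raw));
    assert(blob.size() == 4);
    assert(blob[2] == '\0');

    // Appending more binary data works the same way.
    blob.append(raw, sizeof(raw));
    assert(blob.size() == 8);
    return 0;
}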
From: Rodrigo M. <rod...@gm...> - 2008-11-04 15:09:03
|
> > Are you talking about Boost.Array? Because the last time I checked, > Boost.Array is statically-sized and is not as flexible as an > std::string. IMO, using something from the STL is "better" because you > don't need to rely on Boost being there to use the library. I won't > even be surprised if it was possible to package the header-only Boost > libs with cpp-netlib as a "standalone" download (with the help perhaps > of bcp). Of course if we're using Boost.Asio we rely on Boost.System > -- but other than that I'm confident there shouldn't be a lot of other > libraries cpp-netlib would rely on (of course aside from the STL). > I'm curious, why std::string and not std::vector<char>? Thanks, Rodrigo |
From: Dean M. B. <mik...@gm...> - 2008-11-04 12:42:53
|
On Tue, Nov 4, 2008 at 8:12 PM, Rodrigo Madera <rod...@gm...> wrote: >> What "buffer" class are you talking about? > > The class boost::asio::buffer was made for this. > And we use boost::asio::buffer's internally in the protocol implementations. The messages merely represent data -- treat them as data objects. If you check the HTTP client code, you will see that we use the correct buffer types there provided by Asio. Transporting whatever was gotten from the wire from the buffer to the user of the client is what the message is supposed to be -- nothing more, nothing less. > I don't believe that this code can make it into Boost using strings for > binary data. > I don't think so, considering that _you can customize the storage in the message class to whatever you like depending on the protocol you're implementing_ I believe the current design is definitely more superior than using unweildy buffers in the message representation. Think of it this way: Request Message -> Client -> Client performs actions Response Message <- Client The type of the request message is dependent on what client you're using. You can't use an http::request object to instruct an SNMP client -- this will fail at compile time *and is the intended design of the library*. So if you're implementing your own protocol, you define the type of the message you're going to use as a request and/or as a response object instance. This means you -- as the implementor -- define precisely what types to use within the message object specialization that you need. If you think you need fixed-size buffers (or sometimes bit-fields work too) then that's entirely up to you. For instance, you can even change the signature of your specialization of the basic_message class to reflect the signatures that you need. For example, if you're thinking of using tuples of booleans to set flags, then you can do that yourself: template <> struct basic_message<my_protocol_tag> { void header(tuple<bool, bool, bool> flags); tuple<bool, bool, bool> header(); }; This makes your specialization of the message class even more specific to your protocol. That means, you may choose to use the default (that uses strings) or you can write your own specialization depending on the tag you use to distinguish your protocol from the other protocols. > Anyways, it's your way of seeing things. Actually, it's not just my way of seeing things. The design decision is: 1. Since it's the most common use case for most higher level protocols like HTTP, SMTP, XMPP, IRC, then it's the most convenient thing to use -- and therefore what the default implementation uses. 2. The design of the whole infrastructure is meant to be extensible through the use of templates so in case any other protocol has different requirements, as long as they rely on the message abstraction and the message system, I'm confident it should be easy to implement whatever specializations are required to get the desired functionality on top of the existing design. > > I'll post my code later and we can discus approaches later. > It would be interesting to see your approach in code and see why you think using std::string's (which are perfectly fine on their own by the way able to hold binary data) is a bad thing. > Thank you for your kind time, No problem. Looking forward to your code to see what exactly the problem with using std::strings in binary protocols are. -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
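To make the tag/specialization point concrete, here is a stripped-down, standalone illustration of the pattern Dean describes: the primary basic_message keeps string members, while a protocol-specific tag swaps in a completely different interface. The tags and members are invented for this sketch and are not the library's real definitions:

#include <boost/tuple/tuple.hpp>
#include <iostream>
#include <string>

// Tags selecting a message representation; both are made up for the sketch.
struct text_protocol_tag {};
struct my_protocol_tag {};

// Primary template: the string-backed layout used as the common default.
template <class Tag>
struct basic_message {
    std::string destination;
    std::string body;
};

// Per-protocol specialization: the interface itself changes, here to a
// header made of three boolean flags as in the example above.
template <>
struct basic_message<my_protocol_tag> {
    void header(const boost::tuple<bool, bool, bool>& flags) { flags_ = flags; }
    boost::tuple<bool, bool, bool> header() const { return flags_; }
private:
    boost::tuple<bool, bool, bool> flags_;
};

int main() {
    basic_message<text_protocol_tag> text;
    text.destination = "http://boost.org/";
    text.body = "GET /";

    basic_message<my_protocol_tag> binary;
    binary.header(boost::make_tuple(true, false, true));

    std::cout << text.destination << " "
              << boost::tuples::get<0>(binary.header()) << std::endl;
}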
From: Rodrigo M. <rod...@gm...> - 2008-11-04 12:12:30
|
> > What "buffer" class are you talking about? > The class boost::asio::buffer was made for this. I don't believe that this code can make it into Boost using strings for binary data. Anyways, it's your way of seeing things. I'll post my code later and we can discus approaches later. Thank you for your kind time, Rodrigo |
From: Dean M. B. <mik...@gm...> - 2008-11-04 09:30:54
|
On Tue, Nov 4, 2008 at 6:19 AM, Rodrigo Madera <rod...@gm...> wrote: > > So why not make this a buffer (or array) directly? Because std::string works fine. > Why have you choosen this string specialization path? Because it's the simplest thing that could possibly work. Besides, if we used anything other than an std::string the implementation would be way more complicated than necessary. -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
From: Dean M. B. <mik...@gm...> - 2008-11-04 09:28:31
|
On Tue, Nov 4, 2008 at 12:03 AM, Rodrigo Madera <rod...@gm...> wrote: > > On Mon, Nov 3, 2008 at 3:32 AM, Dean Michael Berris <mik...@gm...> > wrote: >> >> >> BTW, strings can contain binary data just fine (as long as you store >> the data (7-bit chars) as is). > > So are you saying that storing a binary block of data (say an image) into a > string is okay? Yes, it's okay. > Boost has a buffer class made for this. If you want integration with boost > you need to use boost. > What "buffer" class are you talking about? Are you talking about Boost.Array? Because the last time I checked, Boost.Array is statically-sized and is not as flexible as an std::string. IMO, using something from the STL is "better" because you don't need to rely on Boost being there to use the library. I won't even be surprised if it was possible to package the header-only Boost libs with cpp-netlib as a "standalone" download (with the help perhaps of bcp). Of course if we're using Boost.Asio we rely on Boost.System -- but other than that I'm confident there shouldn't be a lot of other libraries cpp-netlib would rely on (of course aside from the STL). > It makes no sense at all to put strings into every message. > Why doesn't it make sense? A message would consist of bytes -- the easiest way to deal with bytes is to use std::string. If you needed a different representation, you can always specialize the basic_message<> to your tag and use your own storage representation. The documentation shows you how to do that IIRC. -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
From: Glyn M. <gly...@gm...> - 2008-11-04 08:41:11
|
Rodrigo, 2008/11/3 Rodrigo Madera <rod...@gm...> > > So why not make this a buffer (or array) directly? > Why have you choosen this string specialization path? > Because that's what we felt was the most common usage scenario, so this was accepted as the default implementation. Glyn |
From: Rodrigo M. <rod...@gm...> - 2008-11-03 22:19:15
|
> > So are you saying that storing a binary block of data (say an image) into a >> string is okay? >> Boost has a buffer class made for this. If you want integration with boost >> you need to use boost. >> >> It makes no sense at all to put strings into every message. >> > > It's possible to specialize the message to support different containers. > Something like this: > > namespace boost { namespace network { > template <> > struct string<binary_parser_tag> { > typedef boost::array<T, 1024> type; > }; > }} > > Or whatever structure suits your needs. See > <boost/network/message/traits/string.hpp> > > So why not make this a buffer (or array) directly? Why have you choosen this string specialization path? Thank you, Rodrigo |
From: Glyn M. <gly...@gm...> - 2008-11-03 19:47:04
|
Hi Rodrigo, 2008/11/3 Rodrigo Madera <rod...@gm...> > > So are you saying that storing a binary block of data (say an image) into a > string is okay? > Boost has a buffer class made for this. If you want integration with boost > you need to use boost. > > It makes no sense at all to put strings into every message. > It's possible to specialize the message to support different containers. Something like this: namespace boost { namespace network { template <> struct string<binary_parser_tag> { typedef boost::array<T, 1024> type; }; }} Or whatever structure suits your needs. See <boost/network/message/traits/string.hpp> Glyn |
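As a standalone illustration of the trait Glyn sketches above: the quoted snippet leaves T unbound, so below it is pinned to char purely as an example, and the toy namespace and message type are a re-creation for this post rather than the library's actual header:

#include <boost/array.hpp>
#include <string>

namespace sketch {

// Tags: binary_parser_tag mirrors the name used above; default_tag is invented.
struct default_tag {};
struct binary_parser_tag {};

// Primary trait: text-oriented protocols keep std::string storage.
template <class Tag>
struct string {
    typedef std::string type;
};

// Binary protocols swap in a fixed-size buffer instead of a string.
template <>
struct string<binary_parser_tag> {
    typedef boost::array<char, 1024> type;
};

// A message then picks its storage through the trait.
template <class Tag>
struct toy_message {
    typename string<Tag>::type body;
};

} // namespace sketch

int main() {
    sketch::toy_message<sketch::default_tag> text;        // body is std::string
    sketch::toy_message<sketch::binary_parser_tag> blob;  // body is boost::array<char, 1024>
    text.body = "hello";
    blob.body[0] = '\0';
    return 0;
}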
From: Rodrigo M. <rod...@gm...> - 2008-11-03 19:29:10
|
On Mon, Nov 3, 2008 at 3:32 AM, Dean Michael Berris <mik...@gm...>wrote: > Hi Rodrigo, > > On Mon, Nov 3, 2008 at 8:14 AM, Rodrigo Madera <rod...@gm...> > wrote: > > > > Basically I see three std::string elements in the message which already > make > > it strange. > > Why is this strange? > > > Is there a reason for them that I don't realize? > > std::string is the easiest container to use that allows both string > semantics and guarantee of contiguous storage (at least as far as I > understand). Any other container would be too unwieldy to use > especially for algorithms that deal with strings of characters. > > BTW, strings can contain binary data just fine (as long as you store > the data (7-bit chars) as is). > So are you saying that storing a binary block of data (say an image) into a string is okay? Boost has a buffer class made for this. If you want integration with boost you need to use boost. It makes no sense at all to put strings into every message. Regards, Rodrigo |
From: Dean M. B. <mik...@gm...> - 2008-11-03 05:33:13
|
Hi Rodrigo, On Mon, Nov 3, 2008 at 8:14 AM, Rodrigo Madera <rod...@gm...> wrote: > > Basically I see three std::string elements in the message which already make > it strange. Why is this strange? > Is there a reason for them that I don't realize? std::string is the easiest container to use that allows both string semantics and guarantee of contiguous storage (at least as far as I understand). Any other container would be too unwieldy to use especially for algorithms that deal with strings of characters. BTW, strings can contain binary data just fine (as long as you store the data (7-bit chars) as is). > Also, what is top-posting and overquoting? Top-posting is what you do (putting your reply at the top of the message), and overquoting is not snipping unnecessary contents of email messages accordingly. [snipped unnecessary email contents to avoid overquoting] -- Dean Michael C. Berris Software Engineer, Friendster, Inc. |
From: Rodrigo M. <rod...@gm...> - 2008-11-03 00:14:14
|
Hey there Dean, Basically I see three std::string elements in the message which already make it strange. Is there a reason for them that I don't realize? Also, what is top-posting and overquoting? Thanks, Rodrigo On Sun, Nov 2, 2008 at 5:36 PM, Dean Michael Berris <mik...@gm...> wrote: > Hi Rodrigo, > > Sorry it took a while for me to respond. Please see in-lined below. > > (BTW, next time please avoid overquoting and top-posting.) > > On Sun, Nov 2, 2008 at 6:06 AM, Rodrigo Madera <rod...@gm...> > wrote: > > Thanks again. > > However the current architecture is not generic enough. It feels like > HTTP > > was the only protocol thought about during it's design. > > That's odd, the message abstraction is very generic -- it doesn't > assume anything about the protocols that will be using it. > > The structure of a message is really very flexible, and using tags you > can even completely revamp the way a basic_message<> will look like > (and how the interface would be) for your context/protocol. > > It would be nice to know why you think the architecture is not generic > enough when being as generic as possible is what the whole library was > designed to be. > > > I'll post my progress here. > > Please do, this will be very interesting to us. > > Thanks and have a good day! > > -- > Dean Michael C. Berris > Software Engineer, Friendster, Inc. |