From: Jeroen H. <vex...@gm...> - 2009-12-22 19:13:19
Hi,

Firstly, thank you Dean for merging my current fork. This has, however, left me confused about GitHub: can I still use my current fork to continue development (which I suspect will be broken more often, as bigger changes are coming), or is it easier to create a new fork? If a new fork is easier, which branch should I fork?

Jeroen Habraken
From: Dean M. B. <mik...@gm...> - 2009-12-21 11:03:12
On Sat, Dec 19, 2009 at 10:26 AM, Dean Michael Berris <mik...@gm...> wrote:
>
> Sweet! Thanks for doing this, it's very much appreciated.
>
> Looking forward to your progress soon! :)
>

Okay, now I've pushed the HTTP server and client refactoring to support HTTPS connections. However, the tests are broken because the Python server for some reason throws up on POST via HTTPS. The GET/HEAD requests are just fine with the HTTPS server, but the POST requests through CGI for some reason seem to break. I don't know enough Python kung fu to fix this, so I'm putting it out in the wild. Branch 0.5-devel as of now has the Python test server which is broken.

One option is to write an HTTPS server that doesn't use the CGI scripts -- maybe via WSGI so that we can also handle POST/PUT/DELETE/etc.; I'll leave that up to those with enough Python kung fu to figure out. ;)

The test for HTTPS on the localhost interface is in libs/network/test/https_localhost_test.cpp -- this is almost identical to the http_localhost_tests, but we can change these up so that we can see tests that pass on the HTTPS requests. If anybody is willing to change up the tests to go in line with a non-throwing-up Python HTTPS server and tests, you're very much welcome.

Have a great day everyone, and I'm looking forward to suggestions and ideas. I'll move on to implementing chunked encoding support on the HTTP/1.1 front. :D

-- 
Dean Michael Berris
blog.cplusplus-soup.com | twitter.com/mikhailberis
linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com
From: Dean M. B. <mik...@gm...> - 2009-12-19 02:26:58
On Sat, Dec 19, 2009 at 7:44 AM, Jeroen Habraken <vex...@gm...> wrote: > On Sat, Dec 19, 2009 at 00:28, Dean Michael Berris >> >> BTW, are you already happy with the implementation you have at github? >> Because if you are, please feel free to submit a pull request so that >> I can merge it to master -- and rebase my 0.5-devel branch too. >> > > The current planning is that I fix the user and password parsing Real > Soon Now, along with some test cases, which can then be merged. That > should at least get us to a point where we are somewhat following the > RFC2396, thus falsely marked valid URI's, but no falsely marked > invalid URI's I hope. Next is reading the newer RFC again -and again > :)-, setting up a lot more test cases, and fixing the generic URI > implementation to follow that. > Sweet! Thanks for doing this, it's very much appreciated. Looking forward to your progress soon! :) -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Jeroen H. <vex...@gm...> - 2009-12-18 23:44:57
On Sat, Dec 19, 2009 at 00:28, Dean Michael Berris <mik...@gm...> wrote: > On Sat, Dec 19, 2009 at 12:15 AM, Jeroen Habraken <vex...@gm...> wrote: >> >> This is indeed a bit complex, and it was meant as something to play >> with, so by all means, please do. >> > > Alright, thanks very much! I'll do it within the day or in the next > few days. Please expect it in the 0.5-devel branch. > > BTW, are you already happy with the implementation you have at github? > Because if you are, please feel free to submit a pull request so that > I can merge it to master -- and rebase my 0.5-devel branch too. > The current planning is that I fix the user and password parsing Real Soon Now, along with some test cases, which can then be merged. That should at least get us to a point where we are somewhat following the RFC2396, thus falsely marked valid URI's, but no falsely marked invalid URI's I hope. Next is reading the newer RFC again -and again :)-, setting up a lot more test cases, and fixing the generic URI implementation to follow that. > Thanks and I hope to see the pull request soon! > > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com > Jeroen |
From: Dean M. B. <mik...@gm...> - 2009-12-18 23:31:23
Hi Jeroen! On Sat, Dec 19, 2009 at 12:33 AM, Jeroen Habraken <vex...@gm...> wrote: > > On Fri, Dec 18, 2009 at 08:07, Dean Michael Berris > <mik...@gm...> wrote: >> Hi Everyone, >> [snip] > > This is some very nice progress, Thanks! > there is one thing that comes to mind > though considering chunked transfers. These can get very large, and > storing them in a string could lead to a lot of reallocations of that > string I believe. Yeah, unless you use an ostringstream which manages memory better than the normal string. > Maybe a rope, > http://en.wikipedia.org/wiki/Rope_(computer_science), is an option as > a data structure to store the chunks. > Yup, but unfortunately they aren't part of the standard C++ library IIRC. > I'll have a proper look at the interface you propose tonight if I have time. Thanks, feedback is very much appreciated! -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Dean M. B. <mik...@gm...> - 2009-12-18 23:28:55
On Sat, Dec 19, 2009 at 12:15 AM, Jeroen Habraken <vex...@gm...> wrote: > > This is indeed a bit complex, and it was meant as something to play > with, so by all means, please do. > Alright, thanks very much! I'll do it within the day or in the next few days. Please expect it in the 0.5-devel branch. BTW, are you already happy with the implementation you have at github? Because if you are, please feel free to submit a pull request so that I can merge it to master -- and rebase my 0.5-devel branch too. Thanks and I hope to see the pull request soon! -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Jeroen H. <vex...@gm...> - 2009-12-18 16:33:54
Hi Dean, On Fri, Dec 18, 2009 at 08:07, Dean Michael Berris <mik...@gm...> wrote: > Hi Everyone, > > So 0.5-devel now has the beginnings of an implementation that will > enable HTTPS connections; the downside to this is that the library now > depends on OpenSSL -- I think we can easily make this a compile-time > switchable dependency by doing preprocessor checks and whatnot, but > I've yet to put those compile-time switches in. > > One of the other crucial things I'm finding right now is the > requirement to handle 'Transfer-Encoding: chunked' on the HTTP/1.1 > client side. This is high up on my TODO list right now and I'm looking > for some ideas and pointers on how others think this should be done. > > Because we don't have a MIME library yet implemented, I'm thinking of > returning the whole body as a single string thunk in the meantime. > > Here are some "random" thoughts I'm surfacing to get feedback from everyone: > > * I'm looking at exposing a stream interface for > basic_response<tags::http_keepalive_8bit_udp_resolve> which holds a > shared_ptr<> to the connection object associated with the call to > get/post, and allows a synchronous buffered pull > * The asynchronous HTTP/1.1 client will support an asynchronous > function callback which would handle individual chunks or a buffered > chunk in a streaming fashion -- I still haven't settled on the > interface to that. > * The future-aware version of the client will return a > future<basic_response<Tag> > which will be built asynchronously on an > active http client. > * Because of chunked encoding, there should be a plugin system that > allows for handling of different types of encoded content; gzipped > data, base64 encoded, utf-8, etc. > > Right now the simple implementation would be to just get the chunks as > they come, build a thunk of string, and then return that. Hopefully > after 0.5 we should be able to implement most of the other important > things that an HTTP/1.1 client should be able to do. 
> > Another thing that has to be there is support for proxies and cookies, > but those can be auxiliary to the HTTP client. A cookie registry can > be implemented outside of the HTTP client and will be able to add the > appropriate cookies to HTTP requests for a certain domain, hopefully > through the directives interface we already expose and support. > > In the next few days I'll finish up on some implementation details and > the HTTPS support over HTTP/1.1 and HTTP/1.0. Testing help and > thoughts would be very much appreciated. > > BTW, I've written up the Wiki page for the tags that I intend to > implement: http://wiki.github.com/mikhailberis/cpp-netlib/tags . Help > and feedback would be very much appreciated. > > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com > This is some very nice progress, there is one thing that comes to mind though considering chunked transfers. These can get very large, and storing them in a string could lead to a lot of reallocations of that string I believe. Maybe a rope, http://en.wikipedia.org/wiki/Rope_(computer_science), is an option as a data structure to store the chunks. I'll have a proper look at the interface you propose tonight if I have time. Jeroen Habraken |
From: Jeroen H. <vex...@gm...> - 2009-12-18 16:16:03
On Fri, Dec 18, 2009 at 00:40, Dean Michael Berris <mik...@gm...> wrote: > On Fri, Dec 18, 2009 at 7:26 AM, Jeroen Habraken <vex...@gm...> wrote: >> On Thu, Dec 17, 2009 at 02:44, Dean Michael Berris >>> >>> Anybody have any ideas how I can easily write tests for HTTPS? >>> >> >> Hi Dean, >> >> I've attached the simplest HTTPS server I could come up with in >> Python, hard-coded to listen on 127.0.0.1:8443, as well as a >> certificate pair generated as described at >> http://panoptic.com/wiki/aolserver/How_to_generate_self-signed_SSL_certificates, >> valid for the next 65536 days. Notice that it needs pyopenssl to >> function. >> > > Nice! Thanks for these. :) > > If you don't mind though, can you branch off 0.5-devel and integrate > this there? While I can integrate that from here, it seems a lot > "cleaner" if you submit it through github. If that's too complex, then > I'll play with this one and put appropriate credits where it matters. > > Thanks again! > > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com > Hi Dean, This is indeed a bit complex, and it was meant as something to play with, so by all means, please do. Jeroen Habraken |
From: Dean M. B. <mik...@gm...> - 2009-12-18 07:07:37
Hi Everyone,

So 0.5-devel now has the beginnings of an implementation that will enable HTTPS connections; the downside to this is that the library now depends on OpenSSL -- I think we can easily make this a compile-time switchable dependency by doing preprocessor checks and whatnot, but I've yet to put those compile-time switches in.

One of the other crucial things I'm finding right now is the requirement to handle 'Transfer-Encoding: chunked' on the HTTP/1.1 client side. This is high up on my TODO list right now and I'm looking for some ideas and pointers on how others think this should be done.

Because we don't have a MIME library yet implemented, I'm thinking of returning the whole body as a single string thunk in the meantime.

Here are some "random" thoughts I'm surfacing to get feedback from everyone:

* I'm looking at exposing a stream interface for basic_response<tags::http_keepalive_8bit_udp_resolve> which holds a shared_ptr<> to the connection object associated with the call to get/post, and allows a synchronous buffered pull.
* The asynchronous HTTP/1.1 client will support an asynchronous function callback which would handle individual chunks or a buffered chunk in a streaming fashion -- I still haven't settled on the interface to that.
* The future-aware version of the client will return a future<basic_response<Tag> > which will be built asynchronously on an active http client.
* Because of chunked encoding, there should be a plugin system that allows for handling of different types of encoded content; gzipped data, base64 encoded, utf-8, etc.

Right now the simple implementation would be to just get the chunks as they come, build a thunk of string, and then return that. Hopefully after 0.5 we should be able to implement most of the other important things that an HTTP/1.1 client should be able to do.

Another thing that has to be there is support for proxies and cookies, but those can be auxiliary to the HTTP client. A cookie registry can be implemented outside of the HTTP client and will be able to add the appropriate cookies to HTTP requests for a certain domain, hopefully through the directives interface we already expose and support.

In the next few days I'll finish up on some implementation details and the HTTPS support over HTTP/1.1 and HTTP/1.0. Testing help and thoughts would be very much appreciated.

BTW, I've written up the Wiki page for the tags that I intend to implement: http://wiki.github.com/mikhailberis/cpp-netlib/tags . Help and feedback would be very much appreciated.

-- 
Dean Michael Berris
blog.cplusplus-soup.com | twitter.com/mikhailberis
linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com
From: Dean M. B. <mik...@gm...> - 2009-12-17 23:41:17
On Fri, Dec 18, 2009 at 7:26 AM, Jeroen Habraken <vex...@gm...> wrote: > On Thu, Dec 17, 2009 at 02:44, Dean Michael Berris >> >> Anybody have any ideas how I can easily write tests for HTTPS? >> > > Hi Dean, > > I've attached the simplest HTTPS server I could come up with in > Python, hard-coded to listen on 127.0.0.1:8443, as well as a > certificate pair generated as described at > http://panoptic.com/wiki/aolserver/How_to_generate_self-signed_SSL_certificates, > valid for the next 65536 days. Notice that it needs pyopenssl to > function. > Nice! Thanks for these. :) If you don't mind though, can you branch off 0.5-devel and integrate this there? While I can integrate that from here, it seems a lot "cleaner" if you submit it through github. If that's too complex, then I'll play with this one and put appropriate credits where it matters. Thanks again! -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Jeroen H. <vex...@gm...> - 2009-12-17 23:27:00
On Thu, Dec 17, 2009 at 02:44, Dean Michael Berris <mik...@gm...> wrote: > Hi Everyone, > > I'm currently working on getting HTTPS somewhat working with very > simple queries, and right now I'm looking at a test server set up > locally. I'm not sure how to do this with Python if it's even > feasible, nor do I know any public HTTPS servers that I can make a > test HTTPS call against (with the issue of validation, etc. of > certificates and whatnot). > > Anybody have any ideas how I can easily write tests for HTTPS? > > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com > Hi Dean, I've attached the simplest HTTPS server I could come up with in Python, hard-coded to listen on 127.0.0.1:8443, as well as a certificate pair generated as described at http://panoptic.com/wiki/aolserver/How_to_generate_self-signed_SSL_certificates, valid for the next 65536 days. Notice that it needs pyopenssl to function. Jeroen |
From: Dean M. B. <mik...@gm...> - 2009-12-17 01:53:24
Hi Everyone, I'm currently working on getting HTTPS somewhat working with very simple queries, and right now I'm looking at a test server set up locally. I'm not sure how to do this with Python if it's even feasible, nor do I know any public HTTPS servers that I can make a test HTTPS call against (with the issue of validation, etc. of certificates and whatnot). Anybody have any ideas how I can easily write tests for HTTPS? -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Dean M. B. <mik...@gm...> - 2009-12-15 23:38:46
Hi Jeroen! On Wed, Dec 16, 2009 at 7:31 AM, Jeroen Habraken <vex...@gm...> wrote: > Hi, > > A quick update on the URI parsing. I've forked the project and just > switched to implementing RFC3986 > <http://www.ietf.org/rfc/rfc3986.txt>, which obsoletes the older > RFC2396, and started to use the naming of sub-rules given there. The > parsing of absolute URI paths now works according to the RFC I > believe, the user_info is to be tackled next. Whenever possible I'll > try to keep the fork in a state where it compiles -at least on my > machine, as far as I can test-. > Sweet, thanks very much for doing this! > I'm also under the impression the new RFC is more explicit when it > comes to general URI's, thus some work now done explicitly for the > HTTP URI implementation might be moved over. Time for some sleep now > though. > Sounds good. Thanks again! -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Jeroen H. <vex...@gm...> - 2009-12-15 23:32:08
Hi, A quick update on the URI parsing. I've forked the project and just switched to implementing RFC3986 <http://www.ietf.org/rfc/rfc3986.txt>, which obsoletes the older RFC2396, and started to use the naming of sub-rules given there. The parsing of absolute URI paths now works according to the RFC I believe, the user_info is to be tackled next. Whenever possible I'll try to keep the fork in a state where it compiles -at least on my machine, as far as I can test-. I'm also under the impression the new RFC is more explicit when it comes to general URI's, thus some work now done explicitly for the HTTP URI implementation might be moved over. Time for some sleep now though. Jeroen |
From: Dean M. B. <mik...@gm...> - 2009-12-15 00:52:46
On Tue, Dec 15, 2009 at 2:42 AM, Dean Michael Berris <mik...@gm...> wrote:
> On Tue, Dec 15, 2009 at 12:52 AM, Glyn Matthews <gly...@gm...> wrote:
>>
>> IMO, we shouldn't be afraid to break backwards compatibility at this stage
>> in development if the improvements are really valid.
>>
>
> Indeed. :)
>

And this is done on 0.5-devel -- I'll just get some sleep and when I wake I'll work on adding HTTPS support on both the client and the server.

http://github.com/mikhailberis/cpp-netlib/commit/ec5996d260a9a506a59ecf4539dd4693a089b09a is the commit that breaks backwards compatibility.

The recommended usage for the HTTP client now looks like:

    http::client client_;
    http::client::request request_("http://foo.bar/");
    http::client::response response_ = client_.get(request_);

On the server, it looks like this:

    struct foo {
        void log(...) {}
        void operator()(
            basic_request<tags::http_server> const & request_,
            basic_response<tags::http_server> & response_
        ) {
            // ...
        }
    };

    http::server<foo> foo_server;

Have a good day everyone, and I hope this helps!

-- 
Dean Michael Berris
blog.cplusplus-soup.com | twitter.com/mikhailberis
linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com
From: Dean M. B. <mik...@gm...> - 2009-12-15 00:34:51
On Tue, Dec 15, 2009 at 3:26 AM, Jeroen Habraken <vex...@gm...> wrote: > On Mon, Dec 14, 2009 at 17:45, Dean Michael Berris > <mik...@gm...> wrote: >> >> I'd say a function that interprets the query part doesn't have to be >> part of the uri interface. I'd accept a function which would parse out >> the the .query() part of the uri into a >> list<pair<string_type,string_type> > that looks like this: >> >> list<pair<string,string> > query_list_ = query_list(uri); >> >> This should be available via ADL and only for http uri's. The >> simplicity of the HttpUri concept should be preserved. Also, I'd >> imagine the query_list(...) function to dispatch to a specific >> implementation based on the tag associated with the uri. This is so >> that we can change the result type based on the tag associated with >> the uri too, so for example instead of a list<pair<string,string> > we >> can return a generator function, an input iterator, a >> multimap<string,string>, or a tuple. > > This is indeed a better option, it keeps the URI parser relatively > simpler and prevents unnecessary work, whilst we still provide this > functionality, consider it on my TODO list. > Cool. :) >> >> I don't understand what the concern is. Is there a problem you're >> seeing with the current approach? > > I should have been more clear here, normally when URI decoding a > string quite a few things can go wrong, imagine incorrect characters > '%2S', or it simply being too short, '%2'. A URI will not be parsed as > valid if such cases occur, making URI decoding relatively trivial as > you won't have to deal with such cases. We should provide a fully > functional uri_decode function though, capable of handling any input. > There actually turns out to be a fine implementation in the Boost.Asio > examples already, it's in > libs/asio/example/http/server/request_handler.cpp as url_decode. 
>

Right, but the reason the query part of the parser was very lax is that it should be dealt with externally, through different functions. Although the validity check as far as RFC strictness is concerned is good, decoding the URI elements should be secondary to the role of the parser, which is supposed to just identify which part of the URI is what.

I have been looking for a way of easily encoding and decoding URI's with Boost.Spirit's Qi and Karma libraries. Maybe one day when I find the need for that functionality I'll write it -- or if you want to take on that implementation, then it should be something you can take on as well. :D

>> Also, please modify the patch to include your copyright information at
>> the top of the files you modify. :)
>>
>> Thanks again!
>
> You're most welcome. I've just created a github account, and think
> forking the master and working from there is the easiest option.
>

Indeed. :)

Have a great day and week ahead!

-- 
Dean Michael Berris
blog.cplusplus-soup.com | twitter.com/mikhailberis
linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com
From: Jeroen H. <vex...@gm...> - 2009-12-14 19:27:28
On Mon, Dec 14, 2009 at 17:45, Dean Michael Berris <mik...@gm...> wrote: > Hi Jeroen, > > On Mon, Dec 14, 2009 at 9:12 PM, Jeroen Habraken <vex...@gm...> wrote: >> On Mon, Dec 14, 2009 at 13:03, Dean Michael Berris >> <mik...@gm...> wrote: >>> >>> Cool, please either fork the library on Github or send a git patch >>> later on. I will be freezing the Subversion repository tomorrow. >>> >>> Have a good day! >>> >> >> I've decided to roll an initial patch, please find it attached. It >> fixes the following: >> - stricter RFC compliant parsing of the scheme, in the generic URI >> - It converts the scheme to lower case, as it states the following in >> the RFC, "For resiliency, programs interpreting URI should treat upper >> case letters as equivalent to lower case in scheme names" >> - I've changes the parser of the port to use ushort_ and uint16_t, the >> RFC specifies the port as *digit, but I think it should be limited to >> the valid network ports, thus 0 <= port <= 2**16 >> - The query and fragment are now parsed conform to the RFC I believe, > > Thanks very much for this! > >> I'd like to change this later to parse the query into a >> std::list<std::pair<string_type, string_type> > >> > > I'd say a function that interprets the query part doesn't have to be > part of the uri interface. I'd accept a function which would parse out > the the .query() part of the uri into a > list<pair<string_type,string_type> > that looks like this: > > list<pair<string,string> > query_list_ = query_list(uri); > > This should be available via ADL and only for http uri's. The > simplicity of the HttpUri concept should be preserved. Also, I'd > imagine the query_list(...) function to dispatch to a specific > implementation based on the tag associated with the uri. 
This is so > that we can change the result type based on the tag associated with > the uri too, so for example instead of a list<pair<string,string> > we > can return a generator function, an input iterator, a > multimap<string,string>, or a tuple. This is indeed a better option, it keeps the URI parser relatively simpler and prevents unnecessary work, whilst we still provide this functionality, consider it on my TODO list. >> Note that the way the current parser works, it guarantees that if the >> URI is valid, the URI decoding can do with a lot less checks, I don't >> know whether this is a good idea though. >> > > I don't understand what the concern is. Is there a problem you're > seeing with the current approach? I should have been more clear here, normally when URI decoding a string quite a few things can go wrong, imagine incorrect characters '%2S', or it simply being too short, '%2'. A URI will not be parsed as valid if such cases occur, making URI decoding relatively trivial as you won't have to deal with such cases. We should provide a fully functional uri_decode function though, capable of handling any input. There actually turns out to be a fine implementation in the Boost.Asio examples already, it's in libs/asio/example/http/server/request_handler.cpp as url_decode. > Also, please modify the patch to include your copyright information at > the top of the files you modify. :) > > Thanks again! You're most welcome. I've just created a github account, and think forking the master and working from there is the easiest option. Jeroen 'VeXocide' Habraken > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Dean M. B. <mik...@gm...> - 2009-12-14 18:43:16
On Tue, Dec 15, 2009 at 12:52 AM, Glyn Matthews <gly...@gm...> wrote: > > 2009/12/14 Dean Michael Berris <mik...@gm...> >> >> >> Hmmm... That's odd. GCC 4.4 doesn't complain with the use case. I >> think this is still valid if it's not a POD because it allows for >> "static" initialization. I remember std::string can be statically >> initialized (as in, during compile time) which is why this works. > > I'm not a language lawyer but as I understand it PODs have a trivial default > constructor, which doesn't apply to `std::string`. But as you explain, > maybe you don't need it to be a POD to work. Right. Not a language lawyer here too -- I trust what the compiler and the tests say. Until I get my own copy of the C++ standard, I'll say "if the tests say it's OK, then it should be OK". :) >> >> > I think nested classes could be a better approach, they will use the >> > same >> > tags as the server class anyway. >> > >> >> I agree. However that would break code that's already using >> http::request for the client -- unless i typedef http::request to be >> by default http::basic_client<...>::request which is just ugly. That >> said, I think we can still afford to break backwards compatibility >> because, well, we're header only -- and breaking changes will cause >> users to actively upgrade their usage. <insert evil laughter here> :D >> > > IMO, we shouldn't be afraid to break backwards compatibility at this stage > in development if the improvements are really valid. > Indeed. :) > Regards, Thanks! :D -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
From: Glyn M. <gly...@gm...> - 2009-12-14 16:52:52
Hi, 2009/12/14 Dean Michael Berris <mik...@gm...> > On Mon, Dec 14, 2009 at 9:16 PM, Glyn Matthews <gly...@gm...> > wrote: > > > > 2009/12/14 Dean Michael Berris <mik...@gm...> > >> > >> > >> Right. The request_header is definitely just a struct (or a POD). :) > > > > I think it's not a POD because it contains members that are of type > > `std::string`. > > > > Hmmm... That's odd. GCC 4.4 doesn't complain with the use case. I > think this is still valid if it's not a POD because it allows for > "static" initialization. I remember std::string can be statically > initialized (as in, during compile time) which is why this works. > I'm not a language lawyer but as I understand it PODs have a trivial default constructor, which doesn't apply to `std::string`. But as you explain, maybe you don't need it to be a POD to work. > > > I think nested classes could be a better approach, they will use the same > > tags as the server class anyway. > > > > I agree. However that would break code that's already using > http::request for the client -- unless i typedef http::request to be > by default http::basic_client<...>::request which is just ugly. That > said, I think we can still afford to break backwards compatibility > because, well, we're header only -- and breaking changes will cause > users to actively upgrade their usage. <insert evil laughter here> :D > > IMO, we shouldn't be afraid to break backwards compatibility at this stage in development if the improvements are really valid. Regards, G |
From: Dean M. B. <mik...@gm...> - 2009-12-14 16:45:35
Hi Jeroen, On Mon, Dec 14, 2009 at 9:12 PM, Jeroen Habraken <vex...@gm...> wrote: > On Mon, Dec 14, 2009 at 13:03, Dean Michael Berris > <mik...@gm...> wrote: >> >> Cool, please either fork the library on Github or send a git patch >> later on. I will be freezing the Subversion repository tomorrow. >> >> Have a good day! >> > > I've decided to roll an initial patch, please find it attached. It > fixes the following: > - stricter RFC compliant parsing of the scheme, in the generic URI > - It converts the scheme to lower case, as it states the following in > the RFC, "For resiliency, programs interpreting URI should treat upper > case letters as equivalent to lower case in scheme names" > - I've changes the parser of the port to use ushort_ and uint16_t, the > RFC specifies the port as *digit, but I think it should be limited to > the valid network ports, thus 0 <= port <= 2**16 > - The query and fragment are now parsed conform to the RFC I believe, Thanks very much for this! > I'd like to change this later to parse the query into a > std::list<std::pair<string_type, string_type> > > I'd say a function that interprets the query part doesn't have to be part of the uri interface. I'd accept a function which would parse out the the .query() part of the uri into a list<pair<string_type,string_type> > that looks like this: list<pair<string,string> > query_list_ = query_list(uri); This should be available via ADL and only for http uri's. The simplicity of the HttpUri concept should be preserved. Also, I'd imagine the query_list(...) function to dispatch to a specific implementation based on the tag associated with the uri. This is so that we can change the result type based on the tag associated with the uri too, so for example instead of a list<pair<string,string> > we can return a generator function, an input iterator, a multimap<string,string>, or a tuple. 
> Note that the way the current parser works, it guarantees that if the > URI is valid, the URI decoding can do with a lot less checks, I don't > know whether this is a good idea though. > I don't understand what the concern is. Is there a problem you're seeing with the current approach? Also, please modify the patch to include your copyright information at the top of the files you modify. :) Thanks again! -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
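A minimal sketch of the `query_list(...)` idea discussed above. The function name comes from the thread, but the plain-string interface is an assumption for illustration: the real version would take a `basic_uri<Tag>` and dispatch on the tag via ADL, while this sketch just splits a raw query component (e.g. `"a=1&b=2"`) into key/value pairs.

```cpp
#include <list>
#include <string>
#include <utility>

// Sketch only: splits the .query() part of a URI into key/value pairs.
// Takes the query component as a plain string instead of a uri object.
std::list<std::pair<std::string, std::string> >
query_list(std::string const & query) {
    std::list<std::pair<std::string, std::string> > result;
    std::string::size_type pos = 0;
    while (pos < query.size()) {
        // each parameter runs up to the next '&' (or the end of the string)
        std::string::size_type amp = query.find('&', pos);
        if (amp == std::string::npos) amp = query.size();
        std::string::size_type eq = query.find('=', pos);
        if (eq != std::string::npos && eq < amp) {
            // "key=value" form
            result.push_back(std::make_pair(
                query.substr(pos, eq - pos),
                query.substr(eq + 1, amp - eq - 1)));
        } else {
            // bare key with no value
            result.push_back(std::make_pair(query.substr(pos, amp - pos),
                                            std::string()));
        }
        pos = amp + 1;
    }
    return result;
}
```

Swapping the result type per tag (multimap, generator, input iterator) would then be a matter of specializing the dispatch target, as Dean suggests.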
From: Dean M. B. <mik...@gm...> - 2009-12-14 16:34:50
|
Hi Everyone, just now I've pushed some more refactoring steps to move out the resolving and connection management, dispatched based on the tags. I might still shuffle things around a little to make the organization of the policies more palatable. Right now it resembles a linear inheritance graph instead of a flat mixin-style policy usage structure. Let me work on it a little more to get a cleaner decoupling of the policies. Questions, comments, suggestions, and patches would be most welcome. HTH -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
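For readers following along, the "flat mixin-style" structure mentioned above can be sketched roughly as follows. All names here are hypothetical, not cpp-netlib's actual policies: the point is only that the client inherits each policy directly as an independent slice, rather than chaining them into a linear inheritance graph.

```cpp
#include <string>

// Each policy is a self-contained mixin contributing one capability.
struct resolver_policy {
    std::string resolve() const { return "resolved"; }
};

struct connection_policy_mixin {
    std::string connect() const { return "connected"; }
};

// Hypothetical tag type; the real library selects policies from the tag.
struct default_tag {};

// Flat composition: the client derives from each policy side by side,
// instead of resolver_policy deriving from connection_policy_mixin, etc.
template <class Tag>
struct basic_client : resolver_policy, connection_policy_mixin {};
```

With a linear graph, each policy would have to know its base; with the flat form they stay decoupled, which is the cleaner structure the refactoring is aiming for.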
From: Dean M. B. <mik...@gm...> - 2009-12-14 16:34:04
|
On Mon, Dec 14, 2009 at 9:16 PM, Glyn Matthews <gly...@gm...> wrote: > > 2009/12/14 Dean Michael Berris <mik...@gm...> >> >> >> Right. The request_header is definitely just a struct (or a POD). :) > > I think it's not a POD because it contains members that are of type > `std::string`. > Hmmm... That's odd. GCC 4.4 doesn't complain with the use case. I think this is still valid even if it's not a POD, because it allows for "static" initialization. I remember std::string can be statically initialized (as in, during compile time) which is why this works. >> >> I'm still thinking about moving the request/response types as nested >> types to the http::basic_client<...> and http::server<...> instead of >> namespace-level types, or merging them to work for both the client and >> the server. At the worst case I would make different specializations >> based on the tag and type-defining them as different types >> server_request/server_response at the namespace level. I find it a >> little ugly in the C++ world of namespaces, but if you have other >> ideas about making the naming convention more consistent I'm all ears. >> :D >> > > Why have http::basic_client and not http::basic_server? > I was still thinking about that. I think I would implement an http::basic_server if I can do something like tag dispatching based on features to be supported by the basic_server specialization. > I think nested classes could be a better approach, they will use the same > tags as the server class anyway. > I agree. However that would break code that's already using http::request for the client -- unless I typedef http::request to be by default http::basic_client<...>::request, which is just ugly. That said, I think we can still afford to break backwards compatibility because, well, we're header only -- and breaking changes will cause users to actively upgrade their usage. 
<insert evil laughter here> :D -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
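The backwards-compatibility typedef Dean calls "ugly" would look roughly like this. All names are illustrative, not the library's actual declarations: `request` is nested inside `basic_client` so it shares the client's tag, and a namespace-scope alias keeps existing `http::request` users compiling.

```cpp
#include <string>

namespace http {
    // hypothetical default tag
    struct default_tag {};

    template <class Tag>
    struct basic_client {
        // request nested inside the client, tied to the client's tag
        struct request {
            std::string uri;
        };
    };

    // the "ugly" part: aliasing the nested type back to namespace scope
    // so code written against http::request keeps working
    typedef basic_client<default_tag>::request request;
}
```

Whether the alias is worth keeping, or whether a clean break is better, is exactly the trade-off being debated in the thread.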
From: Glyn M. <gly...@gm...> - 2009-12-14 13:16:54
|
Dean, 2009/12/14 Dean Michael Berris <mik...@gm...> > > > > `hello_world.cpp` can be repeated in `examples/server/hello_world.cpp` > > (where it can therefore be documented in quickbook). I think that's a > > really nice example, an HTTP server in C++ in less than 40 lines ;) > Though > > the `request_header` type is either a struct or it's using a C++0x > > initializer list. > > > > > Right. The request_header is definitely just a struct (or a POD). :) > I think it's not a POD because it contains members that are of type `std::string`. > > I'm still thinking about moving the request/response types as nested > types to the http::basic_client<...> and http::server<...> instead of > namespace-level types, or merging them to work for both the client and > the server. At the worst case I would make different specializations > based on the tag and type-defining them as different types > server_request/server_response at the namespace level. I find it a > little ugly in the C++ world of namespaces, but if you have other > ideas about making the naming convention more consistent I'm all ears. > :D > > Why have http::basic_client and not http::basic_server? I think nested classes could be a better approach, they will use the same tags as the server class anyway. Have a great day everyone, and Glyn please look forward to a Wiki page > on tags and descriptions soon. > Thanks, G |
From: Jeroen H. <vex...@gm...> - 2009-12-14 13:12:52
|
On Mon, Dec 14, 2009 at 13:03, Dean Michael Berris <mik...@gm...> wrote: > Hi Jeroen, > > On Mon, Dec 14, 2009 at 5:42 PM, Jeroen Habraken <vex...@gm...> wrote: >> On Mon, Dec 14, 2009 at 10:20, Glyn Matthews <gly...@gm...> wrote: > [snip] >> >> I'm currently working on URI, and the HTTP part in specific, trying to >> make it more strict, RFC compliant. The query and fragments should be >> working now, the path is still a bit of a pain. I'll keep you up to >> date, and expect a patch sometime soon :) >> > > Cool, please either fork the library on Github or send a git patch > later on. I will be freezing the Subversion repository tomorrow. > > Have a good day! > > -- > Dean Michael Berris > blog.cplusplus-soup.com | twitter.com/mikhailberis > linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com > Hi, I've decided to roll an initial patch, please find it attached. 
It fixes the following: - stricter RFC-compliant parsing of the scheme, in the generic URI - It converts the scheme to lower case, as the RFC states, "For resiliency, programs interpreting URI should treat upper case letters as equivalent to lower case in scheme names" - I've changed the parser of the port to use ushort_ and uint16_t; the RFC specifies the port as *digit, but I think it should be limited to the valid network ports, thus 0 <= port < 2**16 - The query and fragment are now parsed conforming to the RFC, I believe. I'd like to change this later to parse the query into a std::list<std::pair<string_type, string_type> > Note that the way the current parser works, it guarantees that if the URI is valid, URI decoding can be done with far fewer checks; I don't know whether this is a good idea though. Jeroen |
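The patch uses Spirit's `ushort_` for the port; the sketch below shows the same constraint in plain C++ for readers without the patch in front of them (the function name is hypothetical). The RFC grammar allows `*DIGIT`, but parsing into a `uint16_t` restricts the accepted values to real network ports, 0 <= port < 2**16.

```cpp
#include <cctype>
#include <cstdint>
#include <string>

// Accepts only all-digit input whose numeric value fits a uint16_t,
// mirroring what parsing with ushort_ into uint16_t enforces.
bool parse_port(std::string const & input, std::uint16_t & port) {
    if (input.empty()) return false;
    unsigned long value = 0;
    for (std::string::size_type i = 0; i < input.size(); ++i) {
        if (!std::isdigit(static_cast<unsigned char>(input[i]))) return false;
        value = value * 10 + static_cast<unsigned long>(input[i] - '0');
        if (value > 0xFFFF) return false; // outside the valid port range
    }
    port = static_cast<std::uint16_t>(value);
    return true;
}
```

So `"8080"` parses, while `"70000"` and `"80a"` are rejected at parse time rather than discovered later during URI decoding.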
From: Dean M. B. <mik...@gm...> - 2009-12-14 12:38:35
|
Hi Glyn, On Mon, Dec 14, 2009 at 5:20 PM, Glyn Matthews <gly...@gm...> wrote: > Hi Dean, > > 2009/12/14 Dean Michael Berris <mik...@gm...> >> >> Next I'll be working on the connection management policy that >> dispatches based on the HTTP versions and the tag parameter. The >> connection management policy will look like the following: >> >> HTTP Version | Tag | Behavior >> HTTP 1.1 | http_default_8bit_*_resolver | All requests default to >> 'Connection: close", one request one connection. >> HTTP 1.1 | http_keepalive_8bit_*_resolver | For a given host, >> connections are persistent and re-usable, unless server sends >> 'Connection: close' to a response. >> HTTP 1.1 | http_futures_8bit_*_resolver | Client becomes active >> object, results are future<basic_response<Tag> >, one request one >> connection. >> HTTP 1.1 | http_futures_pooled_8bit_*_resolver | Client becomes >> active object, results are future<basic_response<Tag> >, connections >> are pooled 2 per host. >> HTTP 1.1 | http_async_8bit_*_resolver | Client becomes active >> object, requests will have a function object parameter which handles >> streaming data, one connection per request. >> HTTP 1.1 | http_async_pooled_8bit_*_resolver | Client becomes >> active, requests will have a function object parameter which handles >> streaming data, connections are pooled 2 per host. >> > > This is difficult to read in my e-mail program. In any case, I think this > belongs on the wiki. It would be useful to provide a catalogue of all tags > on the wiki and eventually in the quickbook docs. > Indeed. I should really use the Wiki more. I'll do this on the Trac Wiki in the meantime. I was writing this half asleep and now that I think about it I should have used an outline instead. :D >> >> Questions, comments, suggestions, and contributions would be very much >> welcome. >> > > `hello_world.cpp` can be repeated in `examples/server/hello_world.cpp` > (where it can therefore be documented in quickbook). 
I think that's a > really nice example, an HTTP server in C++ in less than 40 lines ;) Though > the `request_header` type is either a struct or it's using a C++0x > initializer list. > Right. The request_header is definitely just a struct (or a POD). :) I'm still thinking about moving the request/response types as nested types to the http::basic_client<...> and http::server<...> instead of namespace-level types, or merging them to work for both the client and the server. At the worst case I would make different specializations based on the tag and type-defining them as different types server_request/server_response at the namespace level. I find it a little ugly in the C++ world of namespaces, but if you have other ideas about making the naming convention more consistent I'm all ears. :D Have a great day everyone, and Glyn please look forward to a Wiki page on tags and descriptions soon. -- Dean Michael Berris blog.cplusplus-soup.com | twitter.com/mikhailberis linkedin.com/in/mikhailberis | facebook.com/dean.berris | deanberris.com |
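The tag-driven behavior table from earlier in the thread can be sketched as a template specialization, which is presumably what the connection management policy dispatch looks like in spirit. Tag names below mimic the naming scheme from the table but are illustrative, not cpp-netlib's actual tags.

```cpp
#include <string>

// Hypothetical tag types standing in for the *_resolver tags in the table.
struct http_default_8bit_tcp_resolver {};
struct http_keepalive_8bit_tcp_resolver {};

// Primary template: by default every request closes its connection
// (one request, one connection).
template <class Tag>
struct connection_policy {
    static std::string connection_header() { return "Connection: close"; }
};

// Specialization selected purely by the tag: keepalive connections are
// persistent and reusable per host unless the server closes them.
template <>
struct connection_policy<http_keepalive_8bit_tcp_resolver> {
    static std::string connection_header() { return "Connection: keep-alive"; }
};
```

The futures and async variants from the table would follow the same pattern, with the specialization also changing the response type (e.g. to `future<basic_response<Tag> >`).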