Messages per month in this archive:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2001 |     |     |     |     |     |     |     |     |     | 23  | 34  | 36  |
| 2002 | 6   | 1   | 12  |     | 3   | 3   | 1   |     |     |     | 1   |     |
| 2003 |     | 6   | 1   |     |     | 1   | 2   |     | 1   |     |     |     |
| 2004 |     |     | 10  |     | 1   |     |     |     |     |     |     |     |
| 2005 | 2   | 3   |     | 9   | 17  | 14  | 13  | 1   | 1   |     |     | 5   |
| 2006 |     | 1   | 1   |     |     | 4   |     |     | 1   | 16  | 5   |     |
| 2007 | 2   |     |     | 3   |     |     |     |     |     |     | 4   |     |
| 2008 | 14  | 5   | 7   | 3   |     |     | 3   |     |     | 1   |     |     |
| 2009 |     | 6   | 9   | 2   | 1   |     |     | 17  | 2   | 1   | 4   |     |
| 2010 |     | 3   | 21  |     |     |     |     |     |     |     | 3   | 1   |
| 2011 |     |     |     | 1   | 1   | 5   | 23  | 7   |     |     |     | 9   |
| 2012 | 7   | 9   | 2   | 2   | 5   | 1   | 1   |     | 9   |     | 3   | 2   |
| 2013 | 4   |     | 4   | 1   | 1   | 4   | 4   | 6   | 15  | 7   | 3   | 2   |
| 2014 | 1   |     | 7   | 2   | 8   |     |     | 4   | 1   | 4   | 2   | 2   |
| 2015 | 6   | 1   |     | 2   | 6   | 6   |     |     |     | 1   | 7   |     |
| 2016 |     |     | 4   |     |     | 1   |     | 2   |     |     |     |     |
| 2017 |     | 3   |     |     |     |     | 2   |     |     |     |     |     |
From: Patrick M D. <pa...@wa...> - 2001-12-16 18:23:05

On Wed, 12 Dec 2001, Jerome Vouillon wrote:

> On Tue, Dec 11, 2001 at 11:36:19PM +0100, Gerd Stolpmann wrote:
> > Maybe it is possible that both libraries can cooperate? My idea is that
> > one library as a whole is a thread under the control of the other library.
> > Maybe we need real (Posix or bytecode) threads to do it, but this would
> > not hurt if only one thread was active at a time.
>
> It should not be too hard to rewrite the Lwt_unix library on top of
> Equeue.

I fear that I lack the technical understanding to help with this effort,
but it would be a great benefit for the project to see support for both of
these programming approaches.

Patrick
From: Markus M. <ma...@oe...> - 2001-12-16 18:21:04

Gerd Stolpmann wrote on Sunday, 16 December 2001:

> because ocamlnet depends on PCRE, I have tried to compile PCRE (4.8-1) as
> DLL. It works as follows:

The newest version of PCRE for OCaml (4.10-1) uses an upgraded version of
Phil Hazel's PCRE library. I am not sure whether this can make a
difference.

> This make will fail (it cannot make pcretest). Look into pcre-C/.libs,
> and check if there is libpcre.so.0.0.1. If so, the part of the build we
> need has succeeded.

Everything's fine up to here.

> Now:
>
>     cp .libs/libpcre.so.0.0.1 ../pcre-OCaml/dllpcre.so
>     cp .libs/libpcre.a ../pcre-OCaml
>     cd ../pcre-OCaml
>     ocamlc pcre.mli
>     ocamlc pcre.ml
>     ocamlopt pcre.ml

Shouldn't this be:

    ocamlc -c pcre.ml
    ocamlopt -c pcre.ml

> ocamlmklib -o pcre pcre.cma
> ocamlmklib -o pcre pcre.cmx

This doesn't seem to work: pcre.cma would obviously be overwritten during
reading. I am not sure which mode of installation users would prefer most.
Maybe it would be better to have two separate shared libraries, one for
the C library and another for the stubs? I have just taken my first look
at the documentation of the new ocamlmklib tool, and it would probably
work more smoothly this way.

If anybody manages to improve the build process such that a shared PCRE
library is built automatically (if possible) in a portable way, I'd be
glad to add this to the PCRE distribution... :)

Regards,
Markus Mottl

--
Markus Mottl                          ma...@oe...
Austrian Research Institute for Artificial Intelligence
http://www.oefai.at/~markus
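[Editor's note: putting Markus's two corrections together — compiling with `-c` so the compilers emit object files instead of trying to link, and passing those object files to a single `ocamlmklib` invocation — the OCaml side of the build might look like the sketch below. This is untested against the actual 4.8-1 tree; the single `ocamlmklib` call emitting both `pcre.cma` and `pcre.cmxa` is an assumption based on the tool's documented behavior of accepting `.cmo` and `.cmx` inputs.]

```sh
# Sketch only: assumes pcre.mli/pcre.ml and the C objects from the
# previous message are already in place in the current directory.
ocamlc   -c pcre.mli            # produces pcre.cmi
ocamlc   -c pcre.ml             # produces pcre.cmo
ocamlopt -c pcre.ml             # produces pcre.cmx and pcre.o
# One invocation builds both the bytecode and native archives:
ocamlmklib -o pcre pcre.cmo pcre.cmx
```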
From: Patrick M D. <pa...@wa...> - 2001-12-16 18:19:03

On Wed, 12 Dec 2001, Gerd Stolpmann wrote:

> Guillaume asked me whether netchannels will support datagrams. This is
> not only a question of supporting UDP-style protocols, but every
> message-oriented protocol.
>
> I am thinking about a parameterized class:
>
>     class ['a] out_obj_message_service :
>       object
>         method send : 'a -> unit
>         method flush : unit -> unit
>         method close_out : unit -> unit
>       end
>
>     class ['a] in_obj_message_service :
>       object
>         method receive : unit -> 'a
>           (* raises End_of_file if no more messages are available *)
>         method close_in : unit -> unit
>       end

This seems like a good idea. Is your intention that receive will be
blocking or non-blocking in the case that no message is currently
available? The End_of_file condition is appropriate for when the service
has actually closed. If the behavior is to be non-blocking, then we will
need some additional machinery to simplify the job of writing clients.

> For UDP, we could use 'a = string * sockaddr (the packet and
> the address of the peer).
>
> There are many stream-oriented protocols that send messages
> sequentially over the connection. For these, 'a is the decoded
> message.
>
> What do you think about this idea?

It would be very useful to standardize these concepts. Another one to
consider is an API for state models. This would allow user code to execute
in response to a transition, entering/exiting particular states or
substates, etc.

Patrick
From: Patrick M D. <pa...@wa...> - 2001-12-16 17:55:07

Yes, a patch is a good idea, but so is a new release soon: the jserv code
would be of use to many people. You recently sent an e-mail with a list of
changes to netstring for improved netchannel support. I would suggest that
we finish those items for a 0.92 release; I'll be working on some of them
today. After that, I would like to see 0.93 focus on standardized support
for I/O multiplexing.

On Sun, 16 Dec 2001, Gerd Stolpmann wrote:

> Hi list,
>
> there seems to be a small problem with the released ocamlnet-0.91
> tarball and O'Caml 3.04. It can be solved by removing a label from a
> function application. I have prepared a patch (attached to this mail).
>
> I am going to simply release this patch instead of making a new tarball.
> Opinions?
>
> Gerd
From: Gerd S. <in...@ge...> - 2001-12-16 17:06:19

Hi list,

because ocamlnet depends on PCRE, I have tried to compile PCRE (4.8-1) as
a DLL. It works as follows:

- Add the following lines to pcre-C/Makefile:

      pcre_intf.o: pcre.h
              $(LTCOMPILE) $(top_srcdir)/pcre_intf.c

  Change the lines defining OBJ and LOBJ:

      OBJ = maketables.o get.o study.o pcre.o pcre_intf.o
      LOBJ = maketables.lo get.lo study.lo pcre.lo pcre_intf.lo

- Execute the commands:

      cd pcre-C
      ./configure --enable-shared
      cp ../pcre-OCaml/pcre_intf.c .
      make CFLAGS+=-I`ocamlc -where` CFLAGS+='-I.'

  This make will fail (it cannot make pcretest). Look into pcre-C/.libs,
  and check if there is libpcre.so.0.0.1. If so, the part of the build we
  need has succeeded.

Now:

    cp .libs/libpcre.so.0.0.1 ../pcre-OCaml/dllpcre.so
    cp .libs/libpcre.a ../pcre-OCaml
    cd ../pcre-OCaml
    ocamlc pcre.mli
    ocamlc pcre.ml
    ocamlopt pcre.ml
    ocamlmklib -o pcre pcre.cma
    ocamlmklib -o pcre pcre.cmx

Now install pcre.cmi, pcre.cma, pcre.cmxa, pcre.a, dllpcre.so, and
libpcre.a, and make sure that the directory containing dllpcre.so is
listed in <stdlib>/ld.conf.

If you have findlib, do it this way. The META file is as follows:

    requires = ""
    description = "Perl compatible regular expressions"
    version = "4.8-1"
    archive(byte) = "pcre.cma"
    archive(native) = "pcre.cmxa"

Install with:

    ocamlfind install pcre *.cmi *.mli *.cma *.cmxa *.a dll* META

findlib >= 0.6.1 will update ld.conf automatically.

Gerd

--
----------------------------------------------------------------------------
Gerd Stolpmann      Telefon: +49 6151 997705 (privat)
Viktoriastr. 45
64293 Darmstadt     EMail: ge...@ge...
Germany
----------------------------------------------------------------------------
From: Gerd S. <in...@ge...> - 2001-12-16 16:12:31

Hi list,

there seems to be a small problem with the released ocamlnet-0.91 tarball
and O'Caml 3.04. It can be solved by removing a label from a function
application. I have prepared a patch (attached to this mail).

I am going to simply release this patch instead of making a new tarball.
Opinions?

Gerd
From: Patrick M D. <pa...@wa...> - 2001-12-12 22:50:52

On Wed, 12 Dec 2001, Gerd Stolpmann wrote:

> On 2001.12.12 03:49 Patrick M Doane wrote:
> > Adding another method to the out_obj_channel type is one more that
> > must be implemented for all satisfying implementations. This is not an
> > objection to adding the method, but something to keep in mind.
>
> We could have a super class with convenience functions:
>
>     class virtual out_obj_common_mixin =
>       object(self)
>         method virtual output : string -> int -> int -> unit
>         method output_netbuffer b =
>           self # output (Netbuffer.unsafe_buffer b) 0 (Netbuffer.length b)
>       end
>
> and then inherit from this class. Convenience functions are important
> enough.

This is a good plan. We should consider moving other derived methods
there as well.

> > Do you see value in a Netchannel object that more closely models a
> > pipe for multi-threaded applications?
>
> The intention of in_out_pair is to help scanners (in conjunction with
> Netstream), i.e. you form a netstream on top of an in_out_pair. You
> can put data into the in_out_pair and check if it is already enough
> to recognize the next token. If it is not enough, the netstream will
> try to enlarge the lookahead window, and if that is not possible
> because the in_out_pair is empty, the Refill exception will be raised,
> indicating that it is not yet possible to get another token.

Yes, this makes sense. I have thought about providing similar behavior in
the Pop module when retrieving messages from the server. With that
protocol, messages are mostly unencoded and marked with a single '.'
followed by an empty line to end the file. Any line beginning with a '.'
in the message is preceded by another '.'. In this case, though, the
proper behavior when reading data from the "pipe" is to block (at least
while the protocol is written in a blocking style).

> The details of this technique are quite tricky, but such a
> feature is frequently needed.

I would imagine a line-based transformer would also be a fairly frequent
concept. Do you see it as layering on top of the in_out_pair or as a
separate concept?

> I think we do not need a close pipe object, because we can use
> real pipes.

Is it possible to set up such pipes on top of sockets for Windows? I know
there are some difficulties there, but I am not aware of the details.

> > > - Neturl:
> > >
> > >   * Update to newer RFC (if possible?)
> >
> > We had some earlier discussions about this and, if I recall, had some
> > slightly different expectations. Since the newer RFC describes URIs
> > instead of URLs, would you like to keep the older module as-is and
> > make a newer Neturi module?
>
> At the moment I would fix the old module such that it behaves closely
> to the newer RFC, and add comments to it where it differs.
>
> The main difference would be the handling of partial URLs/URIs.
> For URLs, you must parse partial URLs with the parser that is also
> adequate for the full URL, and you can only apply partial URLs relative
> to full URLs of the same syntax. For example, if you have the partial
> URL /x/y and have parsed it with HTTP syntax, you cannot apply this
> URL to an FTP URL. In contrast to this, partial URIs are universal
> in this respect, i.e. you can apply them to any kind of full URI.
>
> This is hard to change in Neturl, but I am considering adding some
> functions that make life easier (e.g. making it possible to change
> the syntax of partial URLs as desired by the caller).

Okay, this approach seems reasonable. It should be less confusing than
what I suggested.

> > I suggest that as functions become deprecated, we move them into a
> > submodule called Deprecated. After a reasonable amount of time,
> > functions are retired from the Deprecated module. This should allow
> > the API to evolve cleanly over time with only minor disturbance for
> > current users.
>
> So users must only open XY.Deprecated to get the old functions.
> This would be ok.

That's what I had in mind. Sounds good then.

Patrick
From: Jerome V. <vou...@pp...> - 2001-12-12 13:38:10

On Tue, Dec 11, 2001 at 11:36:19PM +0100, Gerd Stolpmann wrote:

> - Is it true that it emulates continuations by ordinary closures? If so,
>   I fear that the stack grows infinitely.

Lwt.bind is tail-recursive when the input thread is not blocked:

    let rec bind x f =
      match x.state with
        Return v -> f v
      | ...

Resuming a thread is not tail-recursive, though. I don't think this is a
problem, because threads are usually resumed from the event loop.

> - How do you cope with exceptions that affect more than one thread?
>   The typical example is HTTP in pipeline mode: Reading and writing the
>   HTTP stream are different threads of execution, but nevertheless
>   coupled to a certain degree. For example, the reading thread may
>   signal the writing thread that it can now continue to write data (the
>   "100 CONTINUE" message). And if the reading thread gets an EOF, the
>   writing thread must immediately stop, too.

There is no support for this at the moment. I think this can be dealt
with by introducing a notion of "group" in Lwt_unix. A class "group"
would contain a method for each blocking system call and a method "abort"
which takes an exception and puts the group into an abort state. When in
this state, all the threads waiting for the completion of a blocking
method are resumed and receive the exception.

    class group ... =
      object
        method read : Unix.file_descr -> string -> int -> int -> int Lwt.t
        method write : Unix.file_descr -> string -> int -> int -> int Lwt.t
        ...
        method abort : exn -> unit
      end

> On 2001.12.10 21:22 Patrick M Doane wrote:
> > There are also licensing issues with Lwt as it is
> > packaged with Unison (and therefore GPL).

This is not an issue. I can license it under the same license as
Ocamlnet.

> Maybe it is possible that both libraries can cooperate? My idea is that
> one library as a whole is a thread under the control of the other
> library. Maybe we need real (Posix or bytecode) threads to do it, but
> this would not hurt if only one thread was active at a time.

It should not be too hard to rewrite the Lwt_unix library on top of
Equeue.

--
Jérôme
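[Editor's note: the `Return v -> f v` fast path quoted above can be fleshed out into a tiny self-contained model of this style of cooperative thread. Everything below — the `Sleep` constructor holding a waiter list, `task`, `wakeup` — is an illustrative reconstruction, not the actual Lwt source.]

```ocaml
(* A thread is either already completed (Return) or still sleeping,
   with a list of callbacks to run when it completes. *)
type 'a state = Return of 'a | Sleep of ('a -> unit) list ref
type 'a t = { mutable state : 'a state }

let return v = { state = Return v }
let task () = { state = Sleep (ref []) }

(* Complete a sleeping thread and resume everything waiting on it.
   This resumption path is the part that is not tail-recursive. *)
let wakeup t v =
  match t.state with
  | Sleep waiters -> t.state <- Return v; List.iter (fun k -> k v) !waiters
  | Return _ -> invalid_arg "wakeup: thread already completed"

let on_success t k =
  match t.state with
  | Return v -> k v
  | Sleep waiters -> waiters := k :: !waiters

let bind x f =
  match x.state with
  | Return v -> f v        (* completed thread: a plain tail call *)
  | Sleep _ ->
      let res = task () in
      on_success x (fun v -> on_success (f v) (wakeup res));
      res
```

When the input thread has already returned, `bind` is just `f v`, so chaining completed threads does not grow the stack; only `wakeup` (resumption from the event loop) builds stack frames, which matches Jerome's remark.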
From: Gerd S. <in...@ge...> - 2001-12-12 13:05:48

Hi list,

Guillaume asked me whether netchannels will support datagrams. This is
not only a question of supporting UDP-style protocols, but every
message-oriented protocol.

I am thinking about a parameterized class:

    class ['a] out_obj_message_service :
      object
        method send : 'a -> unit
        method flush : unit -> unit
        method close_out : unit -> unit
      end

    class ['a] in_obj_message_service :
      object
        method receive : unit -> 'a
          (* raises End_of_file if no more messages are available *)
        method close_in : unit -> unit
      end

For UDP, we could use 'a = string * sockaddr (the packet and the address
of the peer).

There are many stream-oriented protocols that send messages sequentially
over the connection. For these, 'a is the decoded message.

What do you think about this idea?

Gerd
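[Editor's note: to make the proposal concrete, here is a hypothetical in-process implementation of both interfaces at once, backed by a `Queue.t`. It is only a sketch, not part of ocamlnet: `receive` raises `End_of_file` whenever the queue is momentarily empty, which conflates "no data yet" with "closed" — exactly the blocking vs. non-blocking question raised elsewhere in this thread.]

```ocaml
(* Hypothetical sketch of the proposed message-service interface,
   implemented over an in-memory queue. Names are illustrative. *)
class ['a] queue_message_service =
  object
    val q : 'a Queue.t = Queue.create ()
    val mutable closed = false

    (* out_obj_message_service part *)
    method send (msg : 'a) =
      if closed then failwith "send: service closed";
      Queue.add msg q
    method flush () = ()            (* nothing buffered beyond the queue *)
    method close_out () = closed <- true

    (* in_obj_message_service part *)
    method receive () : 'a =
      if Queue.is_empty q then raise End_of_file  (* or block; see thread *)
      else Queue.take q
    method close_in () = closed <- true
  end
```

For the UDP case, instantiating `'a` as `string * Unix.sockaddr` gives exactly the packet-plus-peer-address pairing Gerd describes.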
From: Gerd S. <in...@ge...> - 2001-12-12 12:36:18

On 2001.12.12 03:49 Patrick M Doane wrote:

> Hi Gerd,
>
> If you're okay with some task delegation, I'll certainly help with
> these.

Nothing against this.

> Some specific comments below, as well as some initial tasks I could do.
>
> On Wed, 12 Dec 2001, Gerd Stolpmann wrote:
>
> > - Netchannels:
> >
> >   * Support for Netbuffer (new method output_netbuffer)
>
> Given that the Netbuffer API allows direct access, this is purely a
> convenience function:
>
>     let output_netbuffer ch b =
>       ch # output (Netbuffer.unsafe_buffer b) 0 (Netbuffer.length b)
>
> Adding another method to the out_obj_channel type is one more that
> must be implemented for all satisfying implementations. This is not an
> objection to adding the method, but something to keep in mind.

We could have a super class with convenience functions:

    class virtual out_obj_common_mixin =
      object(self)
        method virtual output : string -> int -> int -> unit
        method output_netbuffer b =
          self # output (Netbuffer.unsafe_buffer b) 0 (Netbuffer.length b)
      end

and then inherit from this class. Convenience functions are important
enough. Unfortunately, we cannot add printf (lack of polymorphic
methods).

> >   * Netchannels.in_out_pair: a buffer based on Netbuffer.
> >     You can write into it and read from it. Reading from it
> >     when no data is available, but the write side is not yet
> >     closed ==> special exception Refill. (Something like EAGAIN.)
>
> This is essentially an in-process pipe, except that the buffer size is
> unbounded and read operations will not block.
>
> Do you see value in a Netchannel object that more closely models a pipe
> for multi-threaded applications?

The intention of in_out_pair is to help scanners (in conjunction with
Netstream), i.e. you form a netstream on top of an in_out_pair. You can
put data into the in_out_pair and check if it is already enough to
recognize the next token. If it is not enough, the netstream will try to
enlarge the lookahead window, and if that is not possible because the
in_out_pair is empty, the Refill exception will be raised, indicating
that it is not yet possible to get another token. I.e. it is useful in
loops like:

    let process data =
      (* Add the incoming data to the buffer: *)
      in_out_pair # output data;
      (* Scan as many tokens as possible: *)
      try
        while true do
          let token = ... (* Call a scanner that reads from in_out_pair *)
          in
          ... (* do something with the token *)
        done
      with
        Refill -> ()
          (* in_out_pair contains the beginning of a token, but not yet
             a complete token. Do nothing now. When [process] is called
             again with new data, we will check again if there is a new
             token. *)

The details of this technique are quite tricky, but such a feature is
frequently needed.

I think we do not need a close pipe object, because we can use real
pipes.

> > - Neturl:
> >
> >   * Update to newer RFC (if possible?)
>
> We had some earlier discussions about this and, if I recall, had some
> slightly different expectations. Since the newer RFC describes URIs
> instead of URLs, would you like to keep the older module as-is and make
> a newer Neturi module?

At the moment I would fix the old module such that it behaves closely to
the newer RFC, and add comments to it where it differs.

The main difference would be the handling of partial URLs/URIs. For URLs,
you must parse partial URLs with the parser that is also adequate for the
full URL, and you can only apply partial URLs relative to full URLs of
the same syntax. For example, if you have the partial URL /x/y and have
parsed it with HTTP syntax, you cannot apply this URL to an FTP URL. In
contrast to this, partial URIs are universal in this respect, i.e. you
can apply them to any kind of full URI.

This is hard to change in Neturl, but I am considering adding some
functions that make life easier (e.g. making it possible to change the
syntax of partial URLs as desired by the caller).

> I suggest that as functions become deprecated, we move them into a
> submodule called Deprecated. After a reasonable amount of time,
> functions are retired from the Deprecated module. This should allow the
> API to evolve cleanly over time with only minor disturbance for current
> users.

So users must only open XY.Deprecated to get the old functions. This
would be ok.

Gerd
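[Editor's note: the Refill loop sketched above can be made self-contained with stand-ins: a `Buffer.t` playing the role of in_out_pair, and newline-terminated tokens playing the role of a real scanner. All names here are illustrative; ocamlnet's actual in_out_pair and netstream machinery differ.]

```ocaml
(* Sketch of the Refill pattern with a plain Buffer.t as the pipe. *)
exception Refill

let buf = Buffer.create 64
let tokens = ref []

(* Try to extract one newline-terminated token; raise Refill if the
   buffer holds only the beginning of a token. *)
let next_token () =
  let s = Buffer.contents buf in
  match String.index_opt s '\n' with
  | None -> raise Refill
  | Some i ->
      let tok = String.sub s 0 i in
      Buffer.clear buf;
      Buffer.add_string buf (String.sub s (i + 1) (String.length s - i - 1));
      tok

(* The [process] loop from the message: append incoming data, then
   scan as many complete tokens as possible. *)
let process data =
  Buffer.add_string buf data;
  try
    while true do
      tokens := next_token () :: !tokens
    done
  with Refill -> ()
```

Calling `process` with partial chunks leaves incomplete token prefixes buffered until later calls complete them, which is the behavior the netstream/in_out_pair combination is meant to provide.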
From: Patrick M D. <pa...@wa...> - 2001-12-12 02:49:56

Hi Gerd,

If you're okay with some task delegation, I'll certainly help with these.
Some specific comments are below, as well as some initial tasks I could
do.

On Wed, 12 Dec 2001, Gerd Stolpmann wrote:

> - Netchannels:
>
>   * Support for Netbuffer (new method output_netbuffer)

Given that the Netbuffer API allows direct access, this is purely a
convenience function:

    let output_netbuffer ch b =
      ch # output (Netbuffer.unsafe_buffer b) 0 (Netbuffer.length b)

Adding another method to the out_obj_channel type is one more that must
be implemented for all satisfying implementations. This is not an
objection to adding the method, but something to keep in mind.

>   * Netchannels.in_out_pair: a buffer based on Netbuffer.
>     You can write into it and read from it. Reading from it
>     when no data is available, but the write side is not yet
>     closed ==> special exception Refill. (Something like EAGAIN.)

This is essentially an in-process pipe, except that the buffer size is
unbounded and read operations will not block.

Do you see value in a Netchannel object that more closely models a pipe
for multi-threaded applications?

>   * class input_descr: based on Unix.read
>
>   * class output_descr: based on Unix.write
>
>   * class socket_descr: is both input and output. close_in does
>     shutdown RECV, close_out does shutdown SEND, closing both sides
>     closes the descriptor.

Good - I was just thinking about these the other day. I can implement
them.

> - Neturl:
>
>   * Update to newer RFC (if possible?)

We had some earlier discussions about this and, if I recall, had some
slightly different expectations. Since the newer RFC describes URIs
instead of URLs, would you like to keep the older module as-is and make a
newer Neturi module?

> - Nethtml:
>
>   * parse: reads from an arbitrary netchannel
>   * output: writes into an arbitrary netchannel
>
>   * The old parse_* and write functions will become deprecated
>
>   * Support for encodings other than ISO-8859-1

I'm not familiar with character encoding issues, but could make these
other changes.

I suggest that as functions become deprecated, we move them into a
submodule called Deprecated. After a reasonable amount of time, functions
are retired from the Deprecated module. This should allow the API to
evolve cleanly over time with only minor disturbance for current users.

Patrick
From: Gerd S. <in...@ge...> - 2001-12-12 00:50:59

Hi list,

here is my list of open issues for the netstring subproject. The general
theme is "use netchannels everywhere".

Gerd

----------------------------------------------------------------------------

- Netstream:

  * Read from in_obj_channel, not only from string and in_channel
  * Write into an enhanced in_obj_channel with a look-ahead facility:

        create_stream : in_obj_channel -> t

    --> The netstream reads automatically from the in_obj_channel

- Netbuffer:

  * Shrink: Is it possible to use Obj.truncate? Currently, a new block
    is allocated.
  * add_inplace f: Calls n = f bufstring bufpos freebytes such that f
    can add in place. Example: add_inplace (Pervasives.input file)

- Netchannels:

  * Support for Netbuffer (new method output_netbuffer)
  * Netchannels.in_out_pair: a buffer based on Netbuffer. You can write
    into it and read from it. Reading from it when no data is available,
    but the write side is not yet closed ==> special exception Refill.
    (Something like EAGAIN.)
  * class input_descr: based on Unix.read
  * class output_descr: based on Unix.write
  * class socket_descr: is both input and output. close_in does shutdown
    RECV, close_out does shutdown SEND, closing both sides closes the
    descriptor.

- Netencoding:

  * All encodings can be called on netchannels. For example:

        encode : Netstream.t -> Netchannels.in_obj_channel

    If I call

        let ch = encode stream

    ch#input will return the encoded contents of the passed stream. It
    must be possible to nest [encode] and [decode] functions. We need a
    netstream as input because [decode] usually needs to look ahead.
    Should work with Netchannels.in_out_pair such that one can
    encode/decode chunk by chunk.

- Netconversion:

  * There should be a Netchannels interface for [recode]:

        recode_in_channel : in_enc:encoding -> out_enc:encoding ->
                            in_obj_channel -> in_obj_channel
        recode_out_channel : in_enc:encoding -> out_enc:encoding ->
                             out_obj_channel -> out_obj_channel

- Cgi:

  * Move dest_form_encoded_parameters* to Netencoding_cgi. This is a new
    module that handles encodings for CGI. (It is not possible to add
    these functions to Netencoding because of dependencies.) Implement
    dest_form_encoded_parameters_from_netstream such that it can cope
    with encoded parts.
  * The rest can go to netcgi
  * Of course, we keep Cgi for the moment, but the whole module is
    deprecated.

- Neturl:

  * Update to newer RFC (if possible?)

- Nethtml:

  * parse: reads from an arbitrary netchannel
  * output: writes into an arbitrary netchannel
  * The old parse_* and write functions will become deprecated
  * Support for encodings other than ISO-8859-1
From: Gerd S. <in...@ge...> - 2001-12-11 22:37:07

On 2001.12.10 21:22 Patrick M Doane wrote:

> Hi Guillaume,
>
> Netchannels should be very easy to use, as they work just like regular
> channels but have an object-oriented interface. This will be very
> useful for easily supporting functionality like SSL in the future.
>
> I have not worked with Netstream and Netbuffer directly, and these were
> also implemented before the Netchannel abstraction. It seems to me that
> some of this functionality could be implemented by Netchannels, but
> perhaps Gerd could comment more directly on their use.

Netbuffer is a module similar to Buffer but with some extensions that are
often needed for stream-oriented network protocols:

- It is possible to delete a range of bytes from the buffer. However,
  this does not reduce the amount of allocated memory.
- If it is really necessary, it is also possible to physically shrink a
  buffer, deallocating unused memory (only a limited set of bucket sizes
  is used).
- Netbuffers use an optimized memory allocation scheme that reduces the
  load on the garbage collector.
- You can access the underlying string directly, bypassing the safe
  interface. The intention is to avoid copies of this string if one only
  compares this string with a pattern.

For example, let us assume a network protocol where the string "ABC" is
used as a delimiter between messages. A possible implementation to find
the first message:

- Create a Netbuffer with a fixed size N, N >= 3 (e.g. N=4096)
- Do Unix.read, fill the Netbuffer
- Check if there is "ABC". If yes: exit
- Delete the first L-2 bytes of the Netbuffer (L=buffer length)
  (So if the last bytes of the buffer are "AB", this beginning of the
  delimiter remains in the buffer.)
- Unix.read again

The point is that you don't need memory allocations inside this loop. You
can simply see Netbuffer as our own version of Buffer.

Netstream is an abstraction on top of Netbuffer. A "window" can be moved
along a stream of incoming bytes; the window focuses on a certain range
of bytes in the stream. The window can be increased to arbitrary length.
This can also be used to split the stream up into several messages that
are delimited by a certain byte sequence. The "window" is the beginning
of the message, and it is increased until the delimiter is within the
window.

The Netstream module will eventually be changed such that it also accepts
Netchannels as input. However, I am not sure about the output interface.
Perhaps it will be an out_obj_channel with extensions to access the
window, which is interpreted as a "look-ahead" window. Anyway, future
changes of interfaces will be "conservative", keeping the old interface
for a certain (but of course limited) period of time.

> We have one important design decision to make soon that may have a
> larger impact - how to do multiplexed I/O operations without threads?
> There seems to be agreement that we need to architect the protocols to
> work efficiently without requiring threads.
>
> There are two general approaches to abstracting around Unix.select:
>
> - Gerd has developed the Equeue package for maintaining event queues.
>   This is also somewhat like a Python library called Medusa.
>
> - Jerome has implemented a light-weight thread (Lwt) package for Unison
>   that makes the transition from a blocking to a non-blocking program
>   almost trivial.

I must admit that I don't understand Jerome's library fully. The examples
look nice, but I have a number of questions:

- Is it true that it emulates continuations by ordinary closures? If so,
  I fear that the stack grows infinitely.

- How do you cope with exceptions that affect more than one thread?
  The typical example is HTTP in pipeline mode: Reading and writing the
  HTTP stream are different threads of execution, but nevertheless
  coupled to a certain degree. For example, the reading thread may signal
  the writing thread that it can now continue to write data (the "100
  CONTINUE" message). And if the reading thread gets an EOF, the writing
  thread must immediately stop, too.

> I have used both of these libraries but not extensively enough to know
> which would be a better approach. I think I currently favor the Lwt
> approach, but I'm also fairly comfortable working with monads from some
> Haskell programming. There are also licensing issues with Lwt as it is
> packaged with Unison (and therefore GPL).
>
> Since Jerome mentioned that Lwt will be made a library at some point, I
> expect that this will be available with LGPL as with other code he has
> released. This should be sufficient for our needs, although a non-LGPL
> solution would certainly be nicer.
>
> Any other comments on an overall approach that we should take? I
> believe this is the last major plumbing effort that needs to be
> resolved.

It would be nice to have only one such library. However, we might want to
have both approaches, because there are good examples for both.

Maybe it is possible that both libraries can cooperate? My idea is that
one library as a whole is a thread under the control of the other
library. Maybe we need real (Posix or bytecode) threads to do it, but
this would not hurt if only one thread was active at a time.

Gerd
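[Editor's note: the "ABC"-delimiter example above can be sketched as follows, with a list of strings standing in for successive `Unix.read` results and a plain `Buffer.t` in place of Netbuffer. This version does allocate inside the loop and omits the delete-first-L-2-bytes trick, so it only illustrates the scanning logic, not Netbuffer's allocation-free optimization.]

```ocaml
(* Return Some position of the first occurrence of "ABC", else None. *)
let find_abc s =
  let n = String.length s in
  let rec loop i =
    if i + 3 > n then None
    else if String.sub s i 3 = "ABC" then Some i
    else loop (i + 1)
  in
  loop 0

(* Feed chunks (stand-ins for Unix.read results) until the delimiter
   appears; return the first complete message, or None if the input
   ends without a delimiter. *)
let first_message chunks =
  let buf = Buffer.create 16 in
  let rec feed = function
    | [] -> None
    | c :: rest ->
        Buffer.add_string buf c;
        (match find_abc (Buffer.contents buf) with
         | Some i -> Some (String.sub (Buffer.contents buf) 0 i)
         | None -> feed rest)
  in
  feed chunks
```

Note how the delimiter may arrive split across chunk boundaries; keeping the last two bytes of the buffer, as in Gerd's L-2 rule, is what lets the real fixed-size-buffer version detect it without rescanning everything.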
From: Gerd S. <in...@ge...> - 2001-12-11 12:42:41

On 2001.12.10 18:40 Chris Tilt wrote:

> Gerd,
>
> Thanks for your great developments here. I was able to build and
> configure all of your work; however, since I am new to the project, I
> have a simple question.
>
> What URL on a local server will engage the test/jserv cgi? If perhaps
> you could share an example URL, such as
> http://localhost/cgi-bin/jserv.jsp

The ApJServMount directive in the httpd.conf file determines where the
jserv instance becomes visible. For example,

    ApJServMount /servlets /root

mounts the zone "root" at /servlets (full URL:
http://localhost/servlets/XXX). The zone is just another parameter of the
servlet, and it is perfectly ok to have only the root zone.

Once mounted, the servlet gets ALL requests under that mount point. It is
not possible to share /cgi-bin and /servlets, but I think it is possible
to use mod_rewrite to rewrite /cgi-bin URLs to /servlets URLs within
httpd (so it's invisible to the user). I have not yet tried it, though.

> Even for the standard Ocaml CGI scripts, I do not know by what name to
> invoke them.

(The following explanations are for Apache:)

That depends on your server configuration. You can either declare a
/cgi-bin directory, e.g. (httpd.conf):

    ScriptAlias /cgi-bin/ "/usr/local/httpd/cgi-bin/"
    <Directory /usr/local/httpd/cgi-bin/>
        Options +ExecCGI
        # maybe you need also:
        Order Allow,Deny
        Allow from all
    </Directory>

and put your CGI as an executable into this directory, e.g.
/usr/local/httpd/cgi-bin/test.cgi is visible as
http://localhost/cgi-bin/test.cgi.

Another possibility is to bind the suffix .cgi to CGI execution
(httpd.conf):

    AddHandler cgi-script .cgi

Now the CGIs can occur everywhere in the document tree. At least for
development, this way is better because you can configure it to have the
CGIs in your home directory (httpd.conf):

    UserDir public_html
    <Directory /home/*/public_html>
        Options +FollowSymLinks +ExecCGI +Indexes
        AllowOverride None
        Order Allow,Deny
        Allow from all
    </Directory>

The CGIs must be in /home/USER/public_html/ and are visible under
http://localhost/~USER/

Gerd
From: Patrick M D. <pa...@wa...> - 2001-12-10 20:23:10
|
Hi Guillaume,

Netchannels should be very easy to use as they work just like regular
channels but have an object-oriented interface. This will be very useful
to easily support functionality like SSL in the future.

I have not worked with Netstream and Netbuffer directly, and these were
also implemented before the Netchannel abstraction. It seems to me that
some of this functionality could be implemented by Netchannels but
perhaps Gerd could comment more directly on their use.

We have one important design decision to make soon that may have a larger
impact - how to multiplex I/O operations without threads? There seems to
be agreement that we need to architect the protocols to work efficiently
without requiring threads. There are two general approaches to
abstracting around Unix.select:

 - Gerd has developed the Equeue package for maintaining event queues.
   This is also somewhat like a Python library called Medusa.

 - Jerome has implemented a light-weight thread (Lwt) package for Unison
   that makes the transition from a blocking to a non-blocking program
   almost trivial.

I have used both of these libraries but not extensively enough to know
which would be a better approach. I think I currently favor the Lwt
approach, but I'm also fairly comfortable working with monads from some
Haskell programming. There are also licensing issues with Lwt as it is
packaged with Unison (and therefore GPL).

Since Jerome mentioned that Lwt will be made a library at some point, I
expect that this will be available with LGPL as with other code he has
released. This should be sufficient for our needs, although a non-LGPL
solution would certainly be nicer.

Any other comments on an overall approach that we should take? I believe
this is the last major plumbing effort that needs to be resolved.

Patrick

On Mon, 10 Dec 2001, Guillaume Valadon wrote:

> hi,
>
> My work on the dns protocol is getting well, but it will be great to add
> the same interface as the others protocols.
> The main problem is i don't know ocamlnet ...
>
> Is there any docs or samples of netchannels, netstream, and netbuffers ?
>
> I tried to code an host command, and it's working really well.
> Next step is a tiny dns server :*)
>
> bye,
> guillaume
> --
> mailto:gui...@va...
> ICQ uin : 1752110
>
> Page ouebe : http://guillaume.valadon.net
>
> "Ocaml c'est bon ! Mangez en !" - moi :*)
>
> _______________________________________________
> Ocamlnet-devel mailing list
> Oca...@li...
> https://lists.sourceforge.net/lists/listinfo/ocamlnet-devel
>
|
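The "almost trivial" transition from blocking to non-blocking code that Patrick describes can be sketched with a toy continuation-passing promise type: the client keeps the shape of sequential blocking calls, while under the hood each step only registers a callback. All names below (the type t, xact, session) are invented for illustration; this is not the actual Equeue or Lwt API.

```ocaml
(* A computation of type 'a t takes the continuation that will
   eventually receive its result. *)
type 'a t = ('a -> unit) -> unit

let return (x : 'a) : 'a t = fun k -> k x

(* bind chains two steps: run m, feed its result to f, continue. *)
let bind (m : 'a t) (f : 'a -> 'b t) : 'b t =
  fun k -> m (fun x -> f x k)

let ( >>= ) = bind

(* A fake non-blocking request; a real one would register with a
   select-based event loop instead of answering immediately. *)
let xact (req : string) : string t = return ("rsp:" ^ req)

(* Reads like straight-line blocking code, but never blocks. *)
let session () : string t =
  xact "req1" >>= fun rsp1 ->
  xact "req2" >>= fun rsp2 ->
  return (rsp1 ^ " " ^ rsp2)

(* Drive a computation to completion and extract its result. *)
let run (m : 'a t) : 'a =
  let result = ref None in
  m (fun x -> result := Some x);
  match !result with
  | Some x -> x
  | None -> failwith "still pending"
```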
From: Guillaume V. <gui...@va...> - 2001-12-10 20:02:51
|
hi,

My work on the dns protocol is going well, but it would be great to add
the same interface as the other protocols. The main problem is that I
don't know ocamlnet ...

Are there any docs or samples of netchannels, netstream, and netbuffers?

I tried to code a host command, and it's working really well. The next
step is a tiny dns server :*)

bye,
guillaume
--
mailto:gui...@va...
ICQ uin : 1752110

Page ouebe : http://guillaume.valadon.net

"Ocaml c'est bon ! Mangez en !" - moi :*)
|
From: Chris T. <ce...@we...> - 2001-12-10 17:39:09
|
Gerd,

Thanks for your great developments here. I was able to build and
configure all of your work, however since I am new to the project, I have
a simple question.

What URL on a local server will engage the test/jserv cgi? If perhaps you
could share an example URL, such as http://localhost/cgi-bin/jserv.jsp

Even for the standard Ocaml CGI scripts, I do not know by what name to
invoke them.

Thank you for the experiments and answers to these beginner's questions.

-Chris

-----Original Message-----
From: Gerd Stolpmann [mailto:in...@ge...]
Sent: Sunday, December 09, 2001 1:01 PM
To: ocamlnet-devel
Cc: bedouin-devel
Subject: [Ocamlnet-devel] jserv (again)

Hi list,

there is now also a test program for multi-threaded servers. I am
currently using a pool of 10 workers that can carry out requests; if
there are more requests the client must wait until there is a free
worker.

I have also done some performance tests with httperf. The server was a
dual Pentium III (550 MHz, 640 MB RAM). httpd and jservd were running on
the same server. I used one to three clients, connected with 100 Mbit
switched Ethernet.

One client is able to do 165 connections per second (each with one HTTP
request). Three clients can be served with 200 to 300 connections per
second. This becomes difficult to measure as the error rates increase the
more frequently the clients connect (many timeouts, sometimes out of free
file descriptors). I did not check pipelined connections (i.e. more than
one HTTP request per connection).

The test script produces a "Hello World" page and prints all the passed
CGI parameters (page size = 1400 bytes). Authentication was turned off
(seems to cost 20 to 30 percent of the time). Of course, the test program
was compiled with ocamlopt (and pthreads).

Under such high loads there sometimes occurred strange things, e.g. "bad
file descriptors" and "interrupted system calls". I don't know yet what
the cause of these problems is.

Anyway, we now have a high-performance architecture for dynamic web
contents.

Gerd
--
----------------------------------------------------------------------------
Gerd Stolpmann      Telefon: +49 6151 997705 (privat)
Viktoriastr. 45
64293 Darmstadt     EMail: ge...@ge...
Germany
----------------------------------------------------------------------------

_______________________________________________
Ocamlnet-devel mailing list
Oca...@li...
https://lists.sourceforge.net/lists/listinfo/ocamlnet-devel
|
From: Gerd S. <in...@ge...> - 2001-12-09 21:01:27
|
Hi list,

there is now also a test program for multi-threaded servers. I am
currently using a pool of 10 workers that can carry out requests; if
there are more requests the client must wait until there is a free
worker.

I have also done some performance tests with httperf. The server was a
dual Pentium III (550 MHz, 640 MB RAM). httpd and jservd were running on
the same server. I used one to three clients, connected with 100 Mbit
switched Ethernet.

One client is able to do 165 connections per second (each with one HTTP
request). Three clients can be served with 200 to 300 connections per
second. This becomes difficult to measure as the error rates increase the
more frequently the clients connect (many timeouts, sometimes out of free
file descriptors). I did not check pipelined connections (i.e. more than
one HTTP request per connection).

The test script produces a "Hello World" page and prints all the passed
CGI parameters (page size = 1400 bytes). Authentication was turned off
(seems to cost 20 to 30 percent of the time). Of course, the test program
was compiled with ocamlopt (and pthreads).

Under such high loads there sometimes occurred strange things, e.g. "bad
file descriptors" and "interrupted system calls". I don't know yet what
the cause of these problems is.

Anyway, we now have a high-performance architecture for dynamic web
contents.

Gerd
--
----------------------------------------------------------------------------
Gerd Stolpmann      Telefon: +49 6151 997705 (privat)
Viktoriastr. 45
64293 Darmstadt     EMail: ge...@ge...
Germany
----------------------------------------------------------------------------
|
From: Gerd S. <in...@ge...> - 2001-12-09 02:41:39
|
Hello list,

I had some fun today with the jserv protocol, and added an experimental
implementation to the ocamlnet CVS repository (directory src/cgi). As
this is also interesting for the bedouin project I cc this mail to the
bedouin list.

The jserv protocol is very similar to CGI, but it does not start a new
process for every HTTP request. The request is encoded and forwarded to a
server process that handles it. The server process can be started
manually or on demand by the web server. As a picture:

  BROWSER <----HTTP----> WEBSERVER <----JSERV----> JSERVD

There is a relatively old module for Apache (mod_jserv) implementing the
jserv client. You can get it from http://java.apache.org/jserv. Note that
the Java Apache project is dead, but the jserv protocol is also used by
Jakarta. Tomcat contains the module mod_jk that supports the same jserv
protocol version (1.2) as the original implementation but with a
different configuration file. I am currently using mod_jserv for my
experiments; it is smaller and easier to extract from the whole
distribution.

I have written a server for jserv in O'Caml that uses the existing netcgi
infrastructure. This means that it is simple to convert an existing CGI
script into a jserv server; it is only necessary to change the main
program. (Well, there may arise problems with multi-threaded servers, but
the point is that the netcgi data structures can be used for jserv, too.)
Currently, the servers are single-threaded (i.e. they process the
requests one after another) and seem to work well, and I am going to
study multi-threaded servers soon.

A typical CGI script is:

  let cgi = new std_activation() in
  ... (* analyze request *)
  cgi # set_header ... ();
  cgi # output # output_string ...;
  cgi # output # commit_work()

The corresponding jserv program is:

  let onconnect srv =
    Netcgi_jserv_ajp12.serve_connection
      (fun zone servlet env ->
         let cgi = new std_activation ~env () in
         ... (* analyze request *)
         cgi # set_header ... ();
         cgi # output # output_string ...;
         cgi # output # commit_work()
      )
  in
  jvm_emu_main
    (fun props auth addr port ->
       server onconnect auth addr port);;

As you can see, the request handler (once the cgi object is available) is
the same. Some explanations:

- jvm_emu_main: This is the main program. It emulates the JVM command
  signature which is necessary because the jservd is started from
  mod_jserv, and mod_jserv assumes it starts a JVM. The important thing
  is that the second command argument is the name of the property file to
  read in ( ==> props).

- In the property file the address and port of the server socket is
  configured, and optionally the authentication method ( ==> auth, addr,
  port).

- server: This function creates the server socket and accepts new
  connections. For every new connection onconnect is called.

- onconnect: This definition of onconnect simply uses the
  serve_connection function of the implementation for version 1.2 of the
  jserv protocol. onconnect will become more complicated for
  multi-threaded servers because its task would be to start a new thread,
  register it, and call serve_connection from the new thread (and some
  more stuff).

- The request handler gets three arguments: zone, servlet, env. The zone
  can be used to distinguish between production and development servers.
  The servlet string is derived from the request URI by the web server;
  it can be used to distinguish between several service instances. env is
  a normal cgi_environment.

There is a small example (src/cgi/tests/jserv.ml) that demonstrates how
everything works. I have also included the jserv-specific lines of the
httpd.conf file, and the jserv.properties file as comment in jserv.ml.

If you don't already have mod_jserv.so for your Apache: You need
http://java.apache.org/jserv/dist/ApacheJServ-1.1.2.tar.gz. You don't
need the JSDK classes (but a Java compiler; only to satisfy the configure
script). Do:

  mkdir -p jsdk/javax/servlet
  touch jsdk/javax/servlet/Servlet.class

This way the configure script thinks you have the JSDK classes.

  ./configure --with-apxs=/usr/sbin/apxs --prefix=/tmp/jserv --with-JSDK=jsdk

(Maybe you need --enable-EAPI, too.) Then do

  cd src/c
  make

The resulting mod_jserv.so is in src/c/.libs (i.e. where libtool stores
libraries before installation).

If you already have mod_jk.so for your Apache: I don't know if it works.
Eventually it is necessary to change the names of the properties in
jvm_emu_main. httpd.conf is probably different, too. Furthermore, mod_jk
seems to pass fewer variables (e.g. SCRIPT_NAME is omitted). I had a
quick look at the source code, so I know that things are slightly
different.

I haven't yet measured how fast the jserv architecture is. I expect
numbers from 100 to 300 requests per second.

Gerd
--
----------------------------------------------------------------------------
Gerd Stolpmann      Telefon: +49 6151 997705 (privat)
Viktoriastr. 45
64293 Darmstadt     EMail: ge...@ge...
Germany
----------------------------------------------------------------------------
|
From: Patrick M D. <pa...@wa...> - 2001-12-07 04:02:33
|
On Fri, 7 Dec 2001, Gerd Stolpmann wrote:

> Yes, it's a stupid bug. tbody is not declared in the simplified DTD.
> And non-declared elements may occur everywhere, so <tbody> is allowed
> to occur inside <td>.

You know, I looked earlier to see if it was in that list and I *thought*
I saw it there. Thanks for catching that :)

> I have added the missing declaration
>   "tbody", (`None, `Elements ["tr"])
> i.e. <tbody> does not belong to any element class and may only contain
> <tr> elements. The example is parsed correctly with this modification
> of the DTD.

Great, that's good to hear.

> I was a little bit surprised when I read the message first because
> nethtml does implement minimization as SGML suggests. Normally the
> complaint is that nethtml is _too_ compliant, because it does not
> interpret HTML as one of the widely-used browsers.

I don't have much sympathy for the argument that "it looks fine in my
browser", but it seems that Nethtml has a good balance between the two
DTD representations. If there is a desire to have a looser interpretation
of HTML, I would probably start by looking more at Tidy.

> There are currently two "flavours" of DTDs:
> - html40_dtd: corresponds to the (transitional) HTML 4.0 DTD with the
>   addition that unknown elements may occur everywhere
> - relaxed_html40_dtd: corresponds to common HTML practice prior to the
>   strict interpretation
>
> Maybe we want a third flavour: strict_html40_dtd containing only the
> elements of the strict DTD without the possibility of using other
> elements. The latter would be a validation error.

This seems a good idea, although I would not have much use for it yet.
Until CSS support is improved, I find working with the transitional DTD
to work much better.

> Furthermore, I can imagine that there should be a mode reporting all
> parser errors, for example, if a tag cannot be interpreted at all.
>
> Maybe we want also attribute validation which is currently left out.

What is the current behavior when a document is found to be invalid?
Sorry if this is a silly question, I couldn't easily tell from the source
and I didn't see mention of it in the .mli file.

Also, it's probably good to establish some expectation of the scope for
Nethtml. Should it strive for increased SGML compliance or try to follow
what is done in practice? Do we want to have more support for validation?
I would expect that many users of Nethtml simply want an HTML parser that
doesn't get in their face about validation errors. On the other hand, I'd
certainly be interested in adding more validation checks to some of my
tools that are using Nethtml now.
|
From: Gerd S. <in...@ge...> - 2001-12-07 01:02:40
|
Hi,

On 2001.12.07 00:53 Patrick M Doane wrote:
> I have been using the HTML parser in Netstring along with a hacked
> version of the PXP datamodel to write HTML transformations. This has
> been working well for us until it ran on some data that was heavily
> minimized.
>
> The data input looked like this:
>
>   <table>
>   <thead>
>   <tr>
>     <td>head 1
>     <td>head 2
>   <tbody>
>   <tr>
>     <td>line 1
>     <td>line 2
>   </table>
>
> The Nethtml parser is not trying to be a fully compliant parser, but
> having one would be useful for the applications that I've been working
> on lately. Including an SGML validator as part of that would also be
> very useful.
>
> Any thoughts?

Yes, it's a stupid bug. tbody is not declared in the simplified DTD. And
non-declared elements may occur everywhere, so <tbody> is allowed to
occur inside <td>.

I have added the missing declaration

  "tbody", (`None, `Elements ["tr"])

i.e. <tbody> does not belong to any element class and may only contain
<tr> elements. The example is parsed correctly with this modification of
the DTD.

I was a little bit surprised when I read the message first because
nethtml does implement minimization as SGML suggests. Normally the
complaint is that nethtml is _too_ compliant, because it does not
interpret HTML as one of the widely-used browsers. A typical example is

  <table>
  <tr>
  <td>
    <table>
    <tr>
    <td>x</td></td>
    </tr>
    </table>
    y
  </td>
  </tr>
  </table>

which has one extra </td>. By default, nethtml interprets it as

  <table>
  <tr>
  <td>
    <table>
    <tr>
    <td>x</td>
    </td>
    </tr>
    </table>
    y
    <!-- dropped: --> </td>
  </tr>
  </table>

This case could be solved by introducing the relaxed_html_40_dtd
containing further constraints that are more compatible with existing
browsers (and probably existing HTML code).

There are currently two "flavours" of DTDs:

- html40_dtd: corresponds to the (transitional) HTML 4.0 DTD with the
  addition that unknown elements may occur everywhere
- relaxed_html40_dtd: corresponds to common HTML practice prior to the
  strict interpretation

Maybe we want a third flavour: strict_html40_dtd containing only the
elements of the strict DTD without the possibility of using other
elements. The latter would be a validation error.

Furthermore, I can imagine that there should be a mode reporting all
parser errors, for example, if a tag cannot be interpreted at all.

Maybe we also want attribute validation, which is currently left out.

Gerd
--
----------------------------------------------------------------------------
Gerd Stolpmann      Telefon: +49 6151 997705 (privat)
Viktoriastr. 45
64293 Darmstadt     EMail: ge...@ge...
Germany
----------------------------------------------------------------------------
|
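The shape of the declaration Gerd quotes can be shown in a self-contained sketch. The types below only mirror the shape discussed here (an element name mapped to its class and permitted content); they are an illustration, not Nethtml's exact interface.

```ocaml
(* Illustrative stand-ins for the simplified-DTD shape under
   discussion; these type names are invented for the sketch. *)
type element_class = [ `None | `Inline | `Block ]
type content_model = [ `Elements of string list | `Any ]
type simplified_dtd = (string * (element_class * content_model)) list

(* The missing declaration from the fix: <tbody> belongs to no
   element class and may only contain <tr> elements, so an open
   <td> is implicitly closed when <tbody> starts. *)
let tbody_decl : string * (element_class * content_model) =
  ("tbody", (`None, `Elements ["tr"]))

(* A user-side patch would simply prepend the declaration. *)
let patch_dtd (dtd : simplified_dtd) : simplified_dtd =
  tbody_decl :: dtd
```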
From: Patrick M D. <pa...@wa...> - 2001-12-06 23:53:54
|
I have been using the HTML parser in Netstring along with a hacked
version of the PXP datamodel to write HTML transformations. This has been
working well for us until it ran on some data that was heavily minimized.

The data input looked like this:

  <table>
  <thead>
  <tr>
    <td>head 1
    <td>head 2
  <tbody>
  <tr>
    <td>line 1
    <td>line 2
  </table>

The Nethtml parser is not trying to be a fully compliant parser, but
having one would be useful for the applications that I've been working on
lately. Including an SGML validator as part of that would also be very
useful.

Any thoughts?

Patrick
|
From: Patrick M D. <pa...@wa...> - 2001-11-26 14:40:31
|
Hi Jerome,

I just made a first pass looking over the library and it looks great.

On Mon, 26 Nov 2001, Jerome Vouillon wrote:

> With this library, your function xact would have type
>   'request -> 'response Lwt.t
> and you would use it like this (the operator ">>=" is a synonym for
> "Lwt.bind"):
>
>   xact' "req1" >>= (fun rsp1 ->
>   xact' "req2" >>= (fun rsp2 ->
>   ...
>   ))

We might benefit from a camlp4 syntax extension similar to what Haskell
has done. Maybe a new keyword 'bind' like this:

  bind rsp1 = xact' "req1" in
  bind rsp2 = xact' "req2" in
  ...

The transformation is certainly trivial.

> At the moment, the library is only available in the developer's version
> of Unison (http://www.cis.upenn.edu/~bcpierce/unison/download.html),
> but I plan to distribute it separately eventually.

A separate distribution would indeed be useful. I'll try writing some
real code in it and see how that works out.

Patrick
|
From: Jerome V. <vou...@pp...> - 2001-11-26 11:01:32
|
On Sun, Nov 25, 2001 at 02:28:34PM -0500, Patrick M Doane wrote:
> I notice that the type for xact' is very similar to the monadic bind
> operator. Maybe it would be useful to write a camlp4 extension similar
> to the syntactic sugar in Haskell for monads?

You may be interested by the cooperative thread library that I wrote for
Unison: it relies on this very idea. It is composed of two modules.

The module Lwt defines threads in a rather abstract way:

  type 'a t
    (* The type of threads returning a result of type ['a]. *)

  val return : 'a -> 'a t
    (* [return e] is a thread whose return value is the value of the
       expression [e]. *)

  val bind : 'a t -> ('a -> 'b t) -> 'b t
    (* [bind t f] is a thread which first waits for the thread [t] to
       terminate and then, if the thread succeeds, behaves as the
       application of function [f] to the return value of [t]. If the
       thread [t] fails, [bind t f] also fails, with the same
       exception. *)

  [...]

The module Lwt_unix provides thread-compatible system calls. For
instance:

  val read : Unix.file_descr -> string -> int -> int -> int Lwt.t
  val write : Unix.file_descr -> string -> int -> int -> int Lwt.t

With this library, your function xact would have type

  'request -> 'response Lwt.t

and you would use it like this (the operator ">>=" is a synonym for
"Lwt.bind"):

  xact' "req1" >>= (fun rsp1 ->
  xact' "req2" >>= (fun rsp2 ->
  ...
  ))

Also, it would be possible to define a "non-threaded" implementation of
the library by:

  type 'a t = 'a
  let return x = x
  let bind x f = f x
  [...]
  let read = Unix.read
  [...]

At the moment, the library is only available in the developer's version
of Unison (http://www.cis.upenn.edu/~bcpierce/unison/download.html), but
I plan to distribute it separately eventually.

-- Jerome
|
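Jerome's remark that a "non-threaded" implementation falls out for free is easy to check: with the identity instantiation, bind is plain function application, so monadic code runs as ordinary sequential code. Below, Lwt_id is that instantiation and xact' is a stand-in request function; neither is part of the actual Lwt library.

```ocaml
(* The identity instantiation from the message: threads are just
   values, return is the identity, bind is application. The real
   Lwt_unix.read/write are omitted; a pure function stands in. *)
module Lwt_id = struct
  type 'a t = 'a
  let return x = x
  let bind x f = f x
end

let ( >>= ) = Lwt_id.bind

(* A hypothetical request function, as in the xact' example. *)
let xact' (req : string) : string Lwt_id.t =
  Lwt_id.return ("rsp for " ^ req)

(* The same chaining style as in the message, now running as
   ordinary sequential code. *)
let dialogue () : string Lwt_id.t =
  xact' "req1" >>= fun rsp1 ->
  xact' "req2" >>= fun rsp2 ->
  Lwt_id.return (rsp1 ^ " / " ^ rsp2)
```

This is one attraction of the monadic interface: the same client code can be linked against a cooperative-thread implementation or this trivial blocking one.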
From: Patrick M D. <pa...@wa...> - 2001-11-25 19:32:41
|
Hi Guillaume,

This sounds great - I'll be interested to see the work. I would like very
much to try and develop a consistency between the various protocol
interfaces. If you are interested in discussing this, please send
information about the design to the list here.

Also, if you would like to use the CVS server for development, the
prototype directory is a sandbox that can make it easier for developers
to look over code that is still in design.

Patrick

On Sun, 25 Nov 2001, Guillaume Valadon wrote:

> hi,
>
> I have a project to do in ocaml, it's about the DNS protocol.
> I made something dirty in order to learn the protocol, and it works.
> So it can be done in a clean way.
>
> I like the way they did it in the Net::DNS module of Perl but I'd like
> to know other opinions.
>
> It will look like this:
>
>   let query = new dns_query "sieste.org A" in
>   query#send ();
>   print_string (query#answer_to_string ());;
>
> Not only queries will be supported but responses too. So it could be
> possible to build a tiny DNS server.
>
> The work will be finished in mid-January.
>
> bye,
> guillaume
> --
> mailto:gui...@va...
> ICQ uin : 1752110
>
> Page ouebe : http://guillaume.valadon.net
>
> "No! Try not. Do. Or do not. There is no try." - Yoda
>
> _______________________________________________
> Ocamlnet-devel mailing list
> Oca...@li...
> https://lists.sourceforge.net/lists/listinfo/ocamlnet-devel
>
|
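Guillaume's proposed interface can be mocked up as a small OCaml class. The class name and the two methods follow his example; the network side is stubbed out (no DNS packet is built or sent), and everything beyond his example is invented for illustration.

```ocaml
(* Toy mock-up of the Net::DNS-style interface sketched above.
   A real implementation would encode the question into a DNS
   packet and send it over UDP; here we just fabricate an answer. *)
class dns_query (q : string) =
  object
    val mutable answer : string option = None

    (* Pretend to perform the query and remember the result. *)
    method send () =
      answer <- Some (q ^ " -> 127.0.0.1 (stub)")

    method answer_to_string () =
      match answer with
      | None -> "no answer yet"
      | Some a -> a
  end

let () =
  let query = new dns_query "sieste.org A" in
  query#send ();
  print_string (query#answer_to_string ())
```

The object style keeps query state (question, pending answer) together, which is one reason the Net::DNS interface transfers naturally to OCaml.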