From: Florent M. <fmo...@li...> - 2008-06-10 16:03:32

...
> > Hi,
> > this hairy version is as fast as the one with the external oneliner C,
> > here are the elapsed times (from Sys.time) for 3_000_000 floats written
> > to /dev/null:
> >
> > 0.51 sec. -- with extern oneliner C
> > 0.50 sec. -- the hairy one wins, well done!
>
> That's to be expected. Obj.magic is just the %identity compiler
> primitive, which the compiler optimizes away completely, so the only
> overhead is (maybe) the function call and (maybe) the boxing up of the
> float argument.

Yes, Obj.magic / %identity just allow going through the type inference, but do
not produce any assembly. The one with the extern C should be much slower
because there is a call to memcpy (while there's no copy in the hairy one), and
moreover there is a C function call that cannot be inlined. Why the difference
is so thin, I think, is that the hairy one makes 8 char outputs, while the C
oneliner makes a single string output. I don't see any other reason why it's so
close; this point seems to balance.

...

So, I've just tried to hack it to make a single string output. The hack is to
use unsafe_output just as the previous hairy one used String.unsafe_get. The
function unsafe_output is defined in Pervasives, but is not exposed in the .mli
file; however it is still possible to use it from any file if one copies its
definition:

    external unsafe_output : out_channel -> string -> int -> int -> unit
      = "caml_ml_output"

This one avoids the call to string_length that would crash with the fake
string. There's a significant gain:

    0.50 sec. -- rich's hairy one
    0.35 sec. -- with the single unsafe output
    1.34 sec. -- (for recall) the one from current extLib

___________________________

To make it included in the extLib IO, I have added the unsafe output in the
type "output":

    type 'a output = {
      mutable out_write : char -> unit;
      mutable out_output : string -> int -> int -> int;
      mutable out_output_unsafe : string -> int -> int -> int;
      mutable out_close : unit -> 'a;
      mutable out_flush : unit -> unit;
    }

    let output_channel ch = {
      out_write = (fun c -> output_char ch c);
      out_output = (fun s p l -> Pervasives.output ch s p l; l);
      out_output_unsafe = (fun s p l -> unsafe_output ch s p l; l);
      out_close = (fun () -> Pervasives.close_out ch);
      out_flush = (fun () -> Pervasives.flush ch);
    }

    (* in other places I've just copy-pasted out_output to out_output_unsafe *)

    let output_unsafe o s p l = o.out_output_unsafe s p l

    let write_double_once ch (d : float) =
      ignore (output_unsafe ch (Obj.magic d : string) 0 8)

Maybe there could be a more elegant way...

--
With Regards
Florent
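For readers on a modern OCaml (4.02 or later, which postdates this 2008 thread), the same single-buffer write can be sketched without Obj.magic or the hidden unsafe_output: copy the float's bits into a Bytes.t via Int64.bits_of_float. bytes_of_float is an illustrative name, not an ExtLib function:

```ocaml
(* Safe sketch of a single-buffer float write: extract the IEEE-754
   bits with Int64.bits_of_float and lay them out little-endian in a
   Bytes.t, which is an ordinary GC-safe value.  Assumes OCaml >= 4.02
   for the Bytes module; bytes_of_float is a hypothetical helper. *)
let bytes_of_float (d : float) : Bytes.t =
  let bits = Int64.bits_of_float d in
  let b = Bytes.create 8 in
  for i = 0 to 7 do
    (* byte i = bits shifted right by 8*i, masked to the low 8 bits *)
    let byte =
      Int64.to_int (Int64.shift_right_logical bits (8 * i)) land 0xff in
    Bytes.set b i (Char.chr byte)
  done;
  b

let () =
  let b = bytes_of_float 1.0 in
  (* 1.0 is 0x3FF0000000000000, so little-endian the top bytes are f0 3f *)
  assert (Char.code (Bytes.get b 0) = 0x00);
  assert (Char.code (Bytes.get b 6) = 0xf0);
  assert (Char.code (Bytes.get b 7) = 0x3f);
  print_string "ok"
```

The whole buffer can then be written in one call, e.g. `output_bytes ch (bytes_of_float d)`, at the cost of one small allocation per float, which also sidesteps the thread-safety problem of a shared buffer.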
From: Richard J. <ri...@an...> - 2008-06-10 08:48:54

On Sun, Jun 08, 2008 at 09:39:39PM +0200, Florent Monnier wrote:
> > In fact, thinking about this a bit more clearly, there's no need to
> > set the tag at all if all you do is call 'String.unsafe_get', so this
> > works:
> >
> >   let hairy_string_of_float (d : float) =
> >     (Obj.magic d : string)
> >
> > Be really careful that you don't let the returned "string" escape into
> > the wild though :-) It still looks like a double to the garbage
> > collector and it's not a well-formed string either, so any function
> > other than 'String.unsafe_get' is highly likely to fail.
>
> Hi,
> this hairy version is as fast as the one with the external oneliner C,
> here are the elapsed times (from Sys.time) for 3_000_000 floats written
> to /dev/null:
>
> 0.51 sec. -- with extern oneliner C
> 0.50 sec. -- the hairy one wins, well done!

That's to be expected. Obj.magic is just the %identity compiler primitive,
which the compiler optimizes away completely, so the only overhead is (maybe)
the function call and (maybe) the boxing up of the float argument.

Rich.

--
Richard Jones
Red Hat
From: Florent M. <fmo...@li...> - 2008-06-08 19:30:09

> In fact, thinking about this a bit more clearly, there's no need to
> set the tag at all if all you do is call 'String.unsafe_get', so this
> works:
>
>   let hairy_string_of_float (d : float) =
>     (Obj.magic d : string)
>
> Be really careful that you don't let the returned "string" escape into
> the wild though :-) It still looks like a double to the garbage
> collector and it's not a well-formed string either, so any function
> other than 'String.unsafe_get' is highly likely to fail.

Hi,
this hairy version is as fast as the one with the external oneliner C. Here are
the elapsed times (from Sys.time) for 3_000_000 floats written to /dev/null:

    0.51 sec. -- with extern oneliner C
    0.50 sec. -- the hairy one wins, well done!

--
Cheers
Florent
From: Richard J. <ri...@an...> - 2008-06-08 09:32:25

On Sun, Jun 08, 2008 at 10:17:59AM +0100, rich wrote:
> let hairy_string_of_float (d : float) =
>   let r = Obj.repr d in
>   Obj.set_tag r Obj.string_tag;
>   (Obj.obj r : string)

In fact, thinking about this a bit more clearly, there's no need to set the tag
at all if all you do is call 'String.unsafe_get', so this works:

    let hairy_string_of_float (d : float) =
      (Obj.magic d : string)

Be really careful that you don't let the returned "string" escape into the wild
though :-) It still looks like a double to the garbage collector and it's not a
well-formed string either, so any function other than 'String.unsafe_get' is
highly likely to fail.

Rich.

--
Richard Jones
Red Hat
From: Richard J. <ri...@an...> - 2008-06-08 09:18:11

On Mon, Jun 02, 2008 at 08:58:14PM +0200, Florent Monnier wrote:
> > > Have you decided to make the extlib with only ocaml code,
> > > without any external C ?
> >
> > That's the policy - to make it easier to port extlib to any platform
> > which supports OCaml.
>
> Maybe write_double_opt could be joined along with the extlib only as a patch
> and a readme explaining the difference.

It's maybe possible to have optimized versions of extlib functions which are
only switched in if they can be compiled, with OCaml versions as backups. I'll
see what others think about that though.

> > Is it possible to match the speed-ups using pure OCaml code? eg. by
> > carefully looking at the generated assembler (ocamlopt -S) and
> > studying why it might be slow?
>
> yes I have read the gas code of the extlib version compared to the
> one of the mixed ocaml/C, and even without this just reading the
> original code it is easy to understand what makes the difference:

This is the code we're discussing:

    CAMLprim value double_cast( value str, value d )
    {
      memcpy((char *)str, (double *)d, sizeof(double));
      return Val_unit;
    }

    external double_cast: buf_str:string -> float -> unit = "double_cast"

    let buf_str = "01234567"

    let write_double_opt_native ch d =
      double_cast ~buf_str d;
      nwrite ch buf_str

As was pointed out in another reply, I don't think this is thread safe.

Anyhow you can get the same effect in pure OCaml with this hairy bit of Obj
magic:

    open Printf

    let hairy_string_of_float (d : float) =
      let r = Obj.repr d in
      Obj.set_tag r Obj.string_tag;
      (Obj.obj r : string)

    let print_bytes s n =
      for i = 0 to n-1 do
        printf "%02x " (Char.code (String.unsafe_get s i))
      done;
      printf "\n"

    let () =
      print_bytes (hairy_string_of_float 1.0) 8;
      print_bytes (hairy_string_of_float 3.0) 8;
      print_bytes (hairy_string_of_float 1e38) 8

Output:

    00 00 00 00 00 00 f0 3f
    00 00 00 00 00 00 08 40
    b1 a1 16 2a d3 ce d2 47

Note that the string returned from hairy_string_of_float isn't a well-formed
OCaml string, so it's not safe to call anything except String.unsafe_get on it.
eg. functions such as String.length will definitely fail.

I haven't tested the performance, but I did look at the assembly code. On my
x86-64 it's unfortunate that the compiler didn't inline the call to Obj.set_tag
(instead it's a C call, even though the C function is a two-liner). You can
probably replace it with a call to String.unsafe_set with a negative offset to
modify the tag directly, and with luck the generated code should be faster than
your C impl.

Rich.

--
Richard Jones
Red Hat
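The same hex dump can be reproduced without the Obj hack by going through Int64.bits_of_float, which is GC-safe. A sketch (hex_of_float is an illustrative name, not from the thread's patches):

```ocaml
(* Reproduce the little-endian byte dump above safely: take the
   IEEE-754 bit pattern via Int64.bits_of_float and format each byte
   as hex.  hex_of_float is a hypothetical helper for illustration. *)
let hex_of_float (d : float) : string =
  let bits = Int64.bits_of_float d in
  let buf = Buffer.create 24 in
  for i = 0 to 7 do
    let byte =
      Int64.to_int (Int64.shift_right_logical bits (8 * i)) land 0xff in
    Buffer.add_string buf (Printf.sprintf "%02x " byte)
  done;
  String.trim (Buffer.contents buf)

let () =
  (* matches the first two lines of output in the message above *)
  assert (hex_of_float 1.0 = "00 00 00 00 00 00 f0 3f");
  assert (hex_of_float 3.0 = "00 00 00 00 00 00 08 40");
  print_endline (hex_of_float 1e38)
```

This does allocate a boxed Int64 per call, which is exactly the overhead the thread is trying to avoid, but it is a useful cross-check that the unsafe versions produce the right bytes.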
From: Florent M. <fmo...@li...> - 2008-06-03 18:29:38

> with this change, I still get roughly 2 million ops (only slightly faster).
> This would lead me to believe that time is not spent in doing ALU ops but
> rather time is spent either in the garbage collector or data cache misses.

Have you tried my (write_double_opt) version in your benchmarks to compare? How
much was the difference? And if time is spent more in allocs and GC, this
explains too why (write_double_opt) is 3 times faster: it doesn't alloc
anything.

> Isn't there a way to perform the same double_cast using some GC/Ocaml
> object structure magic like the Obj module?

I haven't found any ocaml function equivalent to the C function memcpy(), even
in the Obj module.

> How fast does a write_double have to be?

Yes, I admit happily that this is a geekish issue :) but we are geeks and love
trying to optimise code ;-)

Well, more seriously, if I had found a speedup of only 5% or 10% I wouldn't
have bothered you. If I decided to write, it's because an enhancement of close
to 3 times faster was not that bad. But with a (String.create) to make it
thread safe, the enhancement is only about 2.5 times faster.

--
With Regards
Florent
From: Janne H. <jjh...@gm...> - 2008-06-02 23:37:45

> If IO module is used to write out to a file, it would sound like the
> overhead of writing to output would far outweigh benefits of tighter
> assembly code for writing out doubles. Wouldn't calling write_byte eight
> times be much more expensive than the few shift instructions?

Looks like the situation is more complicated than what I said above. I wrote
this simple benchmark:

    let () =
      let io_out = IO.output_string () in
      for i = 0 to 65535 do
        for j = 0 to 127 do
          (* j upper bound needs to be 16 when writing out 8 bytes per double! *)
          IO.write_double io_out (float_of_int i)
        done;
      done

which just writes floats to a string. I verified via .s files that
IO.write_double actually gets called and doesn't get inlined.

If I implement write_i64 (which write_double uses) as a dummy function that
does nothing:

    let write_i64 ch i = ()

the program can write out 14.5 million write_doubles / sec. On my 1.8GHz laptop
this means roughly 125 cycles per write_double. Although not exactly blazingly
fast for almost a non-op, it doesn't feel completely off given that the program
does a lot of conversions from ints to floats and allocates memory on each
iteration (see assembly right below):

    .L103:
        call  caml_alloc2
    .L107:
        leal  4(%eax), %ebx
        movl  $2301, -4(%ebx)
        movl  4(%esp), %eax
        sarl  $1, %eax
        pushl %eax
        fildl (%esp)
        addl  $4, %esp
        fstpl (%ebx)
        movl  8(%esp), %eax
        call  camlIO__write_double_327
    .L108:
        movl  12(%esp), %eax
        movl  %eax, %ebx
        addl  $2, %eax
        movl  %eax, 12(%esp)
        cmpl  $255, %ebx
        jne   .L103

Now if I go and modify the write_i64 to be only slightly more complex:

    let write_i64 ch i =
      let ilo = Int64.to_int32 i in
      ()

the write_double rate drops to 9.3 million ops / sec. Slightly complicating it
again like so:

    let write_i64 ch i =
      let ilo = Int64.to_int32 i in
      let ihi = Int64.to_int32 (Int64.shift_right_logical i 32) in
      ()

the performance almost halves, now at 5.3 million ops / sec. I'd tend to
believe that this performance drop is due to more allocation, as both ilo and
ihi variables would be of type Int32.t and hence boxed 32-bit ints.

If I implement the write_i64 function with something that actually does
something:

    let write_i64 ch i =
      let ilo = Int64.to_int32 i in
      let ihi = Int64.to_int32 (Int64.shift_right_logical i 32) in
      let s = String.create 8 in
      let ilo_nat = Int32.to_int ilo in
      s.[0] <- Char.unsafe_chr ilo_nat;
      s.[1] <- Char.unsafe_chr (ilo_nat lsr 8);
      s.[2] <- Char.unsafe_chr (ilo_nat lsr 16);
      s.[3] <- Char.unsafe_chr (Int32.to_int (Int32.shift_right_logical ilo 24));
      let ihi_nat = Int32.to_int ihi in
      s.[4] <- Char.unsafe_chr ihi_nat;
      s.[5] <- Char.unsafe_chr (ihi_nat lsr 8);
      s.[6] <- Char.unsafe_chr (ihi_nat lsr 16);
      s.[7] <- Char.unsafe_chr (Int32.to_int (Int32.shift_right_logical ihi 24));
      nwrite ch s

I now get only about 2 million ops / sec. Commenting out the byte extraction
code (i.e., write out garbage):

    let write_i64 ch i =
      let ilo = Int64.to_int32 i in
      let ihi = Int64.to_int32 (Int64.shift_right_logical i 32) in
      let s = String.create 8 in
      nwrite ch s

with this change, I still get roughly 2 million ops (only slightly faster).
This would lead me to believe that time is not spent in doing ALU ops but
rather time is spent either in the garbage collector or data cache misses.

Feeling optimistic that I had made an optimization over the original write_i64
by calling I/O writes less often (i.e., once as opposed to 8 times), I
benchmarked the original write_i64 version. Well, when writing to a string
output, the 8x write_byte is faster. When writing to a real file, the nwrite 8
version ends up slightly faster.

Isn't there a way to perform the same double_cast using some GC/Ocaml object
structure magic like the Obj module? Ideally the write_double should try to
avoid calling write_i64 as well, as calling write_i64 will cause the allocation
of an Int64.t item.

How fast does a write_double have to be?

Janne
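A minimal harness in the style of these benchmarks can be sketched with Sys.time. time_writes and write_via_bits are illustrative names, not functions from ExtLib or the thread's patches:

```ocaml
(* Sketch of a timing harness like the ones used in this thread: time
   n calls of a float-serialising function with Sys.time.  The writer
   below serialises each float little-endian into a Buffer; both names
   are hypothetical, for illustration only. *)
let time_writes ~n (write : float -> unit) : float =
  let t0 = Sys.time () in
  for i = 0 to n - 1 do
    write (float_of_int i)
  done;
  Sys.time () -. t0

let () =
  let buf = Buffer.create (1 lsl 20) in
  let write_via_bits d =
    let bits = Int64.bits_of_float d in
    for i = 0 to 7 do
      Buffer.add_char buf
        (Char.chr
           (Int64.to_int (Int64.shift_right_logical bits (8 * i)) land 0xff))
    done
  in
  let n = 100_000 in
  let elapsed = time_writes ~n write_via_bits in
  (* each call appends exactly 8 bytes *)
  assert (Buffer.length buf = 8 * n);
  assert (elapsed >= 0.0);
  Printf.printf "%d bytes in %.3f s\n" (Buffer.length buf) elapsed
```

Writing to a Buffer rather than a channel isolates the serialisation cost from I/O, which is one of the distinctions Janne's numbers above turn on.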
From: Janne H. <jjh...@gm...> - 2008-06-02 22:05:38

> > Is it possible to match the speed-ups using pure OCaml code? eg. by
> > carefully looking at the generated assembler (ocamlopt -S) and
> > studying why it might be slow?
>
> yes I have read the gas code of the extlib version compared to the one of
> the mixed ocaml/C, and even without this just reading the original code it
> is easy to understand what makes the difference:

If IO module is used to write out to a file, it would sound like the overhead
of writing to output would far outweigh benefits of tighter assembly code for
writing out doubles. Wouldn't calling write_byte eight times be much more
expensive than the few shift instructions?

It looks like your C/Ocaml implementation with the ocaml-side string is not
thread safe? Perhaps this doesn't happen with the current OCaml run-time, but
it looks like if two threads would enter double_cast at the same time, you'd
corrupt buf_str?

    let buf_str = "01234567"
    external double_cast: buf_str:string -> float -> unit = "double_cast"

Janne
From: Florent M. <fmo...@li...> - 2008-06-02 18:49:16

> > Have you decided to make the extlib with only ocaml code,
> > without any external C ?
>
> That's the policy - to make it easier to port extlib to any platform
> which supports OCaml.

Maybe write_double_opt could be joined along with the extlib only as a patch
and a readme explaining the difference.

> Is it possible to match the speed-ups using pure OCaml code? eg. by
> carefully looking at the generated assembler (ocamlopt -S) and
> studying why it might be slow?

Yes, I have read the gas code of the extlib version compared to the one of the
mixed ocaml/C, and even without this, just reading the original code it is easy
to understand what makes the difference:

* The mixed ocaml/C copies the double to an ocaml string buffer (always the
  same, so no alloc at each call) with the C function memcpy, then the string
  buffer is written once. This is minimal, no extra operations:

    let buf_str = "01234567"
    external double_cast: buf_str:string -> float -> unit = "double_cast"

    let write_double_opt_native ch d =
      double_cast ~buf_str d;
      nwrite ch buf_str

    CAMLprim value double_cast( value str, value d )
    {
      memcpy((char *)str, (double *)d, sizeof(double));
      return Val_unit;
    }
    // as you see there is only one line of C code
    // we have "Int64.bits_of_float" but not any "String.buf_of_float" :,(
    // I've tried with the Marshal module, the binary part is at the end
    // but sometimes for some particular floats there isn't the right number
    // of bytes in total.

  The assembly code of the C part:

    double_cast:
        pushl %ebp
        movl  %esp, %ebp
        movl  12(%ebp), %edx
        movl  8(%ebp), %ecx
        movl  (%edx), %eax
        movl  %eax, (%ecx)
        movl  4(%edx), %eax
        movl  %eax, 4(%ecx)
        movl  $1, %eax
        popl  %ebp
        ret

  And the gas code of the ocaml part:

    camlIO__write_double_opt_native_422:
        subl  $4, %esp
    .L557:
        movl  %eax, 0(%esp)
        pushl %ebx
        pushl camlIO + 296
        movl  $double_cast, %eax
        call  caml_c_call
    .L558:
        addl  $8, %esp
        movl  camlIO + 296, %ebx
        movl  0(%esp), %eax
        addl  $4, %esp
        jmp   camlIO__nwrite_140

* The original write_double makes a lot of shifts and conversions; here is the
  list of the extraneous operations done:

  - 1 x (Int64.bits_of_float)
  - 3 x (Int64.to_int32)
  - 1 x (Int64.shift_right_logical)
  - 4 x (Int32.to_int)
  - 2 x (Int32.shift_right_logical)
  - 4 x (lsr)
  - 8 x (write_byte)

  (And there is no surprise in the related gas code, which only lists all these
  operations; if interested, I've put it at the end of this email, because it's
  quite long.)

* Even without additional C, it would be possible to make the implementation a
  bit more concise, but it doesn't enhance the speed very much (just a very
  little, even with 1 million calls):

    let write_double_ext_native ch f =
      let bin = Int64.bits_of_float f in
      let b7 = Int64.to_int(bin) in
      let b6 = b7 lsr 8
      and b5 = b7 lsr 16
      and b4 = Int64.to_int(Int64.shift_right_logical bin 24) in
      let b3 = b4 lsr 8
      and b2 = b4 lsr 16
      and b1 = Int64.to_int(Int64.shift_right_logical bin 48) in
      let b0 = b1 lsr 8 in
      write_byte ch b7;
      write_byte ch b6;
      write_byte ch b5;
      write_byte ch b4;
      write_byte ch b3;
      write_byte ch b2;
      write_byte ch b1;
      write_byte ch b0

  This version is included too in the test tarball that I have provided in my
  previous email, and you can easily compare it with all the other
  implementations by adding this line in the test script
  'test_write_double.sh':

    time ./test.opt /dev/null -ext

--
With Regards
Florent

--

; gas code of the current write_double:

    camlIO__write_double_391:
        subl  $4, %esp
    .L527:
        movl  %eax, 0(%esp)
        pushl %ebx
        movl  $caml_int64_bits_of_float, %eax
        call  caml_c_call
    .L528:
        addl  $4, %esp
        movl  %eax, %ebx
        movl  0(%esp), %eax
        addl  $4, %esp
        jmp   camlIO__write_i64_385

    ; ....

    camlIO__write_i64_385:
        subl  $8, %esp
    .L520:
        movl  %eax, 4(%esp)
        movl  %ebx, 0(%esp)
        pushl %ebx
        movl  $caml_int64_to_int32, %eax
        call  caml_c_call
    .L521:
        addl  $4, %esp
        movl  %eax, %ebx
        movl  4(%esp), %eax
        call  camlIO__write_real_i32_380
    .L522:
        pushl $65
        movl  4(%esp), %eax
        pushl %eax
        movl  $caml_int64_shift_right_unsigned, %eax
        call  caml_c_call
    .L523:
        addl  $8, %esp
        pushl %eax
        movl  $caml_int64_to_int32, %eax
        call  caml_c_call
    .L524:
        addl  $4, %esp
        movl  %eax, %ebx
        movl  4(%esp), %eax
        addl  $8, %esp
        jmp   camlIO__write_real_i32_380

    ; ...

    camlIO__write_real_i32_380:
        subl  $12, %esp
    .L516:
        movl  %eax, %ecx
        movl  %ecx, 8(%esp)
        movl  4(%ebx), %eax
        sall  $1, %eax
        orl   $1, %eax
        movl  %eax, 0(%esp)
        movl  4(%ebx), %ebx
        shrl  $24, %ebx
        sall  $1, %ebx
        orl   $1, %ebx
        movl  %ebx, 4(%esp)
        andl  $511, %eax
        movl  (%ecx), %ebx
        movl  (%ebx), %ecx
        call  *%ecx
    .L517:
        movl  0(%esp), %eax
        shrl  $8, %eax
        orl   $1, %eax
        andl  $511, %eax
        movl  8(%esp), %ebx
        movl  (%ebx), %ebx
        movl  (%ebx), %ecx
        call  *%ecx
    .L518:
        movl  0(%esp), %eax
        shrl  $16, %eax
        orl   $1, %eax
        andl  $511, %eax
        movl  8(%esp), %ebx
        movl  (%ebx), %ebx
        movl  (%ebx), %ecx
        call  *%ecx
    .L519:
        movl  4(%esp), %eax
        andl  $511, %eax
        movl  8(%esp), %ebx
        movl  (%ebx), %ebx
        movl  (%ebx), %ecx
        addl  $12, %esp
        jmp   *%ecx
From: Richard J. <ri...@an...> - 2008-06-02 17:33:15

On Sun, Jun 01, 2008 at 01:15:28PM +0200, Florent Monnier wrote:
> Florent Monnier wrote :
> > Hi,
> > It would be possible to make write_double more than twice faster
> > (close to 3 times faster).
> > The tradeoff is that there is additional C code.
> >
> > Cheers
>
> Have you decided to make the extlib with only ocaml code,
> without any external C ?

That's the policy - to make it easier to port extlib to any platform which
supports OCaml.

Is it possible to match the speed-ups using pure OCaml code? eg. by carefully
looking at the generated assembler (ocamlopt -S) and studying why it might be
slow?

Rich.

--
Richard Jones
Red Hat
From: Amit D. <ad...@du...> - 2008-06-01 12:57:15

Hiya,

On Sun, Jun 1, 2008 at 12:15 PM, Florent Monnier <fmo...@li...> wrote:
> Have you decided to make the extlib with only ocaml code,
> without any external C ?

Yes, I think that was the original idea.

Best,
-Amit
From: Florent M. <fmo...@li...> - 2008-06-01 11:07:02

Florent Monnier wrote :
> Hi,
> It would be possible to make write_double more than twice faster
> (close to 3 times faster).
> The tradeoff is that there is additional C code.
>
> Cheers

Have you decided to make the extlib with only ocaml code, without any
external C ?

Anyway, if you wish to test the write_double_opt, grab it here:
http://www.linux-nantes.org/~fmonnier/TMP/extlib-1.5.1_write_double_opt.tar.bz2

and to test, just open the tarball and type:

    sh test_write_double.sh

Here are the elapsed times that I get:

    0.44 sec. -- the implementation of the current extLib
    0.16 sec. -- proposal for inclusion in the extLib distribution
    0.14 sec. -- with a higher-order function

--
Cheers
Florent
From: Florent M. <fmo...@li...> - 2008-05-29 16:19:01

Hi,

It would be possible to make write_double more than twice faster (close to 3
times faster). The tradeoff is that there is additional C code.

Cheers
From: blue s. <blu...@gm...> - 2008-05-22 07:06:58

On 5/21/08, David Teller <Dav...@un...> wrote:
> What should it do if there is more than one element in the enumeration ?
> Raise an error or just consume the first element ?

I think consuming the first element only is the saner decision. It's the
strategy used by monadic sums, and the C-ish bool_of_int translation
(0 -> false, everything else -> true, so there is an information loss).
Besides, it totally makes sense (when you think of option as a failure/success
representation) that Option.of_enum (Enum.filter ...) behaves roughly as an
Enum.find.

> If we have to write a bind, I'd keep it consistent with other
> OCaml-based binds, i.e. in the same order as Haskell.

And it's the order that pa_monad uses. Agreed. Below is a new patch with the
usual parameter order.

Against option.mli :

    32a33,35
    > val bind : 'a option -> ('a -> 'b option) -> 'b option
    > (** [bind None f] returns [None] and [bind (Some x) f] returns [f x] *)
    >

Against option.ml :

    26a27,30
    > let bind opt f = match opt with
    >   | None -> None
    >   | Some v -> f v
    >

Quick usage test :

    # let may_sum a b =
        Option.bind a (fun a_val ->
        Option.bind b (fun b_val ->
          Some (a_val + b_val)));;
    val may_sum : int option -> int option -> int option = <fun>
    # may_sum (Some 1) (Some 2), may_sum (Some 2) None;;
    - : int option * int option = (Some 3, None)
From: David T. <Dav...@un...> - 2008-05-21 21:17:29

On Mon, 2008-05-19 at 08:59 +0200, blue storm wrote:
> > 2) For [Option.enum], it's not a big deal, just a matter of
> > uniformisation wrt other containers. I consider also adding [iter] and
> > [filter].
>
> Enum already have iter and filter. As it is supposed to provide a
> common layer for those kinds of operation, i'm not sure iter and
> filter are that useful. An of_enum would be useful, though (is it ok
> for those function not to be one to one mapping ?).

I'm willing to write [of_enum], if people consider it interesting. What should
it do if there is more than one element in the enumeration ? Raise an error or
just consume the first element ?

> >> What's the use case for Option.enum?
>
> For example, if you have an enumeration of options, and you want to
> "flatten" it into an enumeration of the base type (you're not
> interested in the failure cases) :
>   Enum.concat (Enum.map Option.enum your_option_enum)

As usual, you have a point.

> Speaking of the Option module, I have a suggestion for a monadic
> "bind" function (a celebrity in the Haskell world, would imho be
> useful in OCaml too) : [...]

If we have to write a bind, I'd keep it consistent with other OCaml-based
binds, i.e. in the same order as Haskell.

Cheers,
David

--
David Teller
Security of Distributed Systems
http://www.univ-orleans.fr/lifo/Members/David.Teller
Angry researcher: French Universities need reforms, but the LRU act brings
liquidations.
From: Amit D. <ad...@du...> - 2008-05-19 09:53:56

Hi,

I think the original idea behind ExtLib is that functions are either highly
used and small, or fairly commonly used and complicated. I think some people
were using it for embedded applications, and are worried about code bloat.
get_exn is fairly simple, and as Option.get already returns the fairly general
Not_found exception, my impression is that it won't be highly used.

That said, there are cases where a new function with a different exception
might be handy. E.g. Invalid_argument in arrays, returning the index number
which caused the problem...

-Amit

On Mon, May 19, 2008 at 10:43 AM, Richard Jones <ri...@an...> wrote:
> On Sun, May 18, 2008 at 11:28:08PM +0300, Janne Hellsten wrote:
>> What kind of example uses did you have in mind for the Option additions?
>>
>> Not sure if I agree that get_exn is too useful an addition.
>
> I disagree: I think that as long as additional functions maintain a
> consistent style with the rest of the library, are working, documented
> and useful to someone, and don't consume any more "module namespace",
> then there shouldn't be any barrier to adding them.
>
> In this case I have no problem with David's additional functions, the
> patches are fine[1], and I would be inclined to add them.
>
> Rich.
>
> [1] David: please always post unified diffs as in your second email,
> not tarballs as in the first.
>
> --
> Richard Jones
> Red Hat
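For readers outside the thread, the get / get_exn pair under discussion can be sketched as stand-alone functions. These are illustrative definitions (including the No_value exception), not the actual ExtLib patch:

```ocaml
(* Hypothetical sketch of the pair being discussed: get raises a fixed
   exception, while get_exn lets the caller choose which exception a
   None should raise.  No_value is an illustrative exception name. *)
exception No_value

let get = function
  | Some x -> x
  | None -> raise Not_found

let get_exn e = function
  | Some x -> x
  | None -> raise e

let () =
  assert (get (Some 3) = 3);
  assert (get_exn No_value (Some "a") = "a");
  (* a None triggers exactly the caller-supplied exception *)
  assert (try ignore (get_exn No_value (None : int option)); false
          with No_value -> true);
  print_string "ok"
```

The design point Amit raises is visible here: get_exn only pays off when the caller wants a more specific exception than the general-purpose Not_found.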
From: Richard J. <ri...@an...> - 2008-05-19 09:43:42

On Sun, May 18, 2008 at 11:28:08PM +0300, Janne Hellsten wrote:
> What kind of example uses did you have in mind for the Option additions?
>
> Not sure if I agree that get_exn is too useful an addition.

I disagree: I think that as long as additional functions maintain a consistent
style with the rest of the library, are working, documented and useful to
someone, and don't consume any more "module namespace", then there shouldn't be
any barrier to adding them.

In this case I have no problem with David's additional functions, the patches
are fine[1], and I would be inclined to add them.

Rich.

[1] David: please always post unified diffs as in your second email, not
tarballs as in the first.

--
Richard Jones
Red Hat
From: blue s. <blu...@gm...> - 2008-05-19 07:03:34

Sorry, silly mistake :

    fun b' -> Some (a' + b')

On 5/19/08, blue storm <blu...@gm...> wrote:
> > 2) For [Option.enum], it's not a big deal, just a matter of
> > uniformisation wrt other containers. I consider also adding [iter] and
> > [filter].
>
> Enum already have iter and filter. As it is supposed to provide a
> common layer for those kinds of operation, i'm not sure iter and
> filter are that useful. An of_enum would be useful, though (is it ok
> for those function not to be one to one mapping ?).
>
> >> What's the use case for Option.enum?
>
> For example, if you have an enumeration of options, and you want to
> "flatten" it into an enumeration of the base type (you're not
> interested in the failure cases) :
>   Enum.concat (Enum.map Option.enum your_option_enum)
>
> Speaking of the Option module, I have a suggestion for a monadic
> "bind" function (a celebrity in the Haskell world, would imho be
> useful in OCaml too) :
>
>   let bind f = function
>     | None -> None
>     | Some v -> f v
>
> (patches against .ml and .mli attached)
>
> It looks very much like map, except that the given function decides
> whether the result is a failure or a success.
>
> A typical use case would be the evaluation of an arithmetic AST (eg. a
> Sum data type), with an "eval" function that can fail, and thus
> returns an int option :
>
>   let rec eval : ast -> int option = function
>     | Num n -> Some n
>     | ...
>     | Sum (a, b) ->
>         Option.bind (fun a' ->
>           Option.bind (fun b' -> a' + b') (eval b)) (eval a)
>
> Option.bind allows one to compose possibly-failing operations.
>
> The parameters order (first the function, then the option) was chosen
> to be coherent with the other functions of the Option module.
> Haskellers are more used to the reverse order of the >>= operator (an
> infix version of bind), with the option first, allowing for a more
> natural style :
>   eval a >>= fun a' -> eval b >>= fun b' -> a' + b'
>
> I think keeping the old order is a good compromise for now (moreover,
> the type is more elegant with the function first). If you're
> interested in the >>= syntactic sugar too, i'd be happy to provide such
> an infix operator with reversed parameters.
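The bind and infix >>= combination discussed above can be sketched as a self-contained example, using the option-first argument order the thread settled on. safe_div is an illustrative helper, not part of the proposed patch:

```ocaml
(* Sketch of Option.bind with option-first order, plus the (>>=) infix
   form mentioned at the end of the message.  safe_div is a
   hypothetical possibly-failing operation used for illustration. *)
let bind opt f = match opt with
  | None -> None
  | Some v -> f v

let ( >>= ) = bind

(* integer division that fails (returns None) on a zero divisor *)
let safe_div a b = if b = 0 then None else Some (a / b)

let () =
  (* chaining: each step runs only if the previous one succeeded *)
  let r = Some 100 >>= fun x -> safe_div x 5 >>= fun y -> safe_div y 2 in
  assert (r = Some 10);
  (* a failure anywhere in the chain short-circuits to None *)
  assert ((Some 1 >>= fun x -> safe_div x 0) = None);
  assert ((None >>= fun x -> Some (x + 1)) = None);
  print_string "ok"
```

This is exactly the "more natural style" the message describes: `eval a >>= fun a' -> eval b >>= fun b' -> Some (a' + b')` reads left to right, with the short-circuiting on None hidden inside bind.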