From: Nicolas C. <war...@fr...> - 2005-05-07 07:03:21
> > Dear Ext-lib developers,
> >
> > This tiny module:
> > http://www.eleves.ens.fr/home/frisch/soft.html#memo
> >
> > implements memoization tables with bounded size. Such a table can be
> > used to memoize a function without using a lot of memory. A hash
> > function is used to retrieve a previously computed value for a given
> > key. The implementation uses a nice and simple heuristic to choose what
> > to do in case of collision on the hash value for two keys.
> >
> > Feel free to add this module to ext-lib if you think it can be useful.
>
> On a related topic: the Hashtbl module. Would it be a good thing to add
> functions which do not require recomputing the hashed key (with
> Hashtbl.hash), for efficiency purposes:
>
> val unsafe_find : ('key, 'data) t -> int -> 'key -> 'data
> val unsafe_mem : ('key, 'data) t -> int -> 'key -> bool
> val unsafe_add : ('key, 'data) t -> int -> 'key -> 'data -> unit

This would require a lot of "unsafe" functions, and actually they are not
"unsafe" at all :) The simplest way to get that effect is to use an
(int, 'a) Hashtbl.t and hash your values by hand when needed.

Nicolas

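As a rough illustration of the suggestion above (this sketch is not from
the thread, and the function name memoize is made up): compute the hash
once with Hashtbl.hash and key an ordinary (int, 'b) Hashtbl.t on it. Note
that two distinct keys with the same hash are conflated, which can be
acceptable for a best-effort cache but not for an exact table.

(* Sketch only: memoize an expensive function by keying a plain
   (int, 'b) Hashtbl.t on the precomputed hash of the argument. *)
let memoize (f : 'a -> 'b) : 'a -> 'b =
  let cache : (int, 'b) Hashtbl.t = Hashtbl.create 64 in
  fun x ->
    let h = Hashtbl.hash x in        (* hashed once, "by hand" *)
    try Hashtbl.find cache h
    with Not_found ->
      let y = f x in
      Hashtbl.add cache h y;
      y
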
From: Martin J. <mar...@em...> - 2005-05-06 18:47:23
On Thu, 5 May 2005, Alain Frisch wrote:

> Dear Ext-lib developers,
>
> This tiny module:
> http://www.eleves.ens.fr/home/frisch/soft.html#memo
>
> implements memoization tables with bounded size. Such a table can be
> used to memoize a function without using a lot of memory. A hash
> function is used to retrieve a previously computed value for a given
> key. The implementation uses a nice and simple heuristic to choose what
> to do in case of collision on the hash value for two keys.
>
> Feel free to add this module to ext-lib if you think it can be useful.

On a related topic: the Hashtbl module. Would it be a good thing to add
functions which do not require recomputing the hashed key (with
Hashtbl.hash), for efficiency purposes:

val unsafe_find : ('key, 'data) t -> int -> 'key -> 'data
val unsafe_mem : ('key, 'data) t -> int -> 'key -> bool
val unsafe_add : ('key, 'data) t -> int -> 'key -> 'data -> unit

...

For instance, we could write:

let get tbl key lazy_data =
  let h = Hashtbl.hash key in
  try Hashtbl.unsafe_find tbl h key
  with Not_found ->
    let data = Lazy.force lazy_data in
    Hashtbl.unsafe_add tbl h key data;
    data

Martin

--
Martin Jambon, PhD
http://martin.jambon.free.fr

From: Christophe T. <deb...@ti...> - 2005-05-06 12:43:03
Hi,

Useful additions to DynArray would be [filter] and [filter_in_place].

Also it should be said that one cannot [delete] inside [iteri] -- other
implementations might allow that.

ChriS

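For illustration, one possible shape of [filter_in_place]; this is only a
sketch, not ExtLib code, and it is written against the existing DynArray
operations length, get, set and delete_last:

(* Keep only the elements satisfying p, preserving their order.
   Survivors are compacted towards the front, then the unused tail
   slots are dropped. *)
let filter_in_place p a =
  let n = DynArray.length a in
  let keep = ref 0 in
  for i = 0 to n - 1 do
    let x = DynArray.get a i in
    if p x then begin
      if !keep <> i then DynArray.set a !keep x;
      incr keep
    end
  done;
  for _i = !keep to n - 1 do
    DynArray.delete_last a
  done
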
From: Alain F. <Ala...@in...> - 2005-05-05 19:40:08
Dear Ext-lib developers,

This tiny module:

http://www.eleves.ens.fr/home/frisch/soft.html#memo

implements memoization tables with bounded size. Such a table can be used
to memoize a function without using a lot of memory. A hash function is
used to retrieve a previously computed value for a given key. The
implementation uses a nice and simple heuristic to choose what to do in
case of collision on the hash value for two keys.

Feel free to add this module to ext-lib if you think it can be useful.

-- Alain

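To make the idea of a bounded memoization table concrete, here is a
deliberately crude sketch. It is not the code behind the link above (in
particular, that module's collision heuristic is certainly different):
each key hashes into one of a fixed number of slots, and a collision
simply overwrites the slot, so at most [size] results are ever retained.

(* Illustrative sketch only: a fixed-size memo table. *)
let memo_bounded size (f : 'a -> 'b) : 'a -> 'b =
  let slots : ('a * 'b) option array = Array.make size None in
  fun x ->
    let i = Hashtbl.hash x mod size in
    match slots.(i) with
    | Some (k, v) when k = x -> v      (* cache hit *)
    | _ ->                             (* miss or collision: recompute *)
        let v = f x in
        slots.(i) <- Some (x, v);
        v
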
From: William D. N. <wne...@cs...> - 2005-04-25 15:58:23
On Mon, 25 Apr 2005, Nicolas Cannasse wrote:

> I personally never had a use for Bigints, so I might not be the best
> person to decide which extensions are needed. Has anyone else here felt
> frustrated with the current Bigint API?

Well, I rarely use it due to its excessive verbosity (which could be
helped by a separate Infixes module, a la mlgmp), as well as its lack of
bitwise operations and of features like to/from_string_base, where you can
input/output big ints in formats other than decimal.

However, I'm more concerned with the fact that it's... well... broken
<http://pauillac.inria.fr/bin/caml-bugs/open?id=3586>.

William D. Neumann

---

"There's just so many extra children, we could just feed the children to
these tigers. We don't need them, we're not doing anything with them.
Tigers are noble and sleek; children are loud and messy."
        -- Neko Case

Think of XML as Lisp for COBOL programmers.
        -- Tony-A (some guy on /.)

From: Nicolas C. <war...@fr...> - 2005-04-25 09:57:46
> > If there's any concern about the speed of 'big_int_of_int', perhaps it
> > could be optimized by having big_int representations of range
> > -128..128 stored in an array and looking these up in case the input
> > parameter falls into this range. If the input parameter is out of
> > this range, then the generic 'big_int_of_int' would be used.
>
> Yes, you're right. To be honest I was getting very frustrated trying
> to write a Miller-Rabin implementation for ExtBig_int, and was
> thinking about moving to Numerix, which is probably what I should have
> used in the first place. Only problem is that the GODI distribution
> on my laptop is having upgrade problems ...
>
> I'll release what I have at some point and we can discuss whether it's
> worth adding to extlib or not.
>
> Rich.

I personally never had a use for Bigints, so I might not be the best
person to decide which extensions are needed. Has anyone else here felt
frustrated with the current Bigint API?

Nicolas

From: Richard J. <ri...@an...> - 2005-04-25 09:47:36
On Mon, Apr 25, 2005 at 03:53:31PM +0900, Janne Hellsten wrote:

> If there's any concern about the speed of 'big_int_of_int', perhaps it
> could be optimized by having big_int representations of range
> -128..128 stored in an array and looking these up in case the input
> parameter falls into this range. If the input parameter is out of
> this range, then the generic 'big_int_of_int' would be used.

Yes, you're right. To be honest I was getting very frustrated trying to
write a Miller-Rabin implementation for ExtBig_int, and was thinking about
moving to Numerix, which is probably what I should have used in the first
place. Only problem is that the GODI distribution on my laptop is having
upgrade problems ...

I'll release what I have at some point and we can discuss whether it's
worth adding to extlib or not.

Rich.

--
Richard Jones, CTO Merjis Ltd.
Merjis - web marketing and technology - http://merjis.com
Team Notepad - intranets and extranets for business - http://team-notepad.com

From: Janne H. <jjh...@gm...> - 2005-04-25 06:53:38
> Not really sure to be honest.
>
> In the code I was originally writing I kept on having to define
> constants I was using at the top level - to avoid having to frequently
> write '(big_int_of_int 2)' all over the place. Whether it makes a
> real difference or not I don't know.

The reason I was asking is that these constants make up the majority of
the API and IMHO bring some redundancy into it. By removing them we would
also nullify the need to argue about which constants in the range
-100..100 are the most precious to developers ;)

If there's any concern about the speed of 'big_int_of_int', perhaps it
could be optimized by having big_int representations of range -128..128
stored in an array and looking these up in case the input parameter falls
into this range. If the input parameter is out of this range, then the
generic 'big_int_of_int' would be used.

Best regards,
Janne

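The proposed optimisation is easy to sketch. This is hypothetical code,
not part of ExtBig_int, and it assumes the standard Num library's Big_int
module is linked (nums.cma):

open Big_int

(* Cache the big_int representations of -128..128; fall back to the
   generic conversion outside that range. *)
let small_cache =
  Array.init 257 (fun i -> big_int_of_int (i - 128))

let fast_big_int_of_int n =
  if n >= -128 && n <= 128 then small_cache.(n + 128)
  else big_int_of_int n
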
From: Richard J. <ri...@an...> - 2005-04-25 02:51:14
On Mon, Apr 25, 2005 at 11:27:29AM +0900, Janne Hellsten wrote:

> Hi,
>
> > val one_big_int : big_int
> > val two_big_int : big_int
> ...
> > val hundred_big_int : big_int
>
> Is there some reason (other than optimization) for not using
> big_int_of_int instead?

Not really sure to be honest.

In the code I was originally writing I kept on having to define constants
I was using at the top level - to avoid having to frequently write
'(big_int_of_int 2)' all over the place. Whether it makes a real
difference or not I don't know.

Rich.

--
Richard Jones, CTO Merjis Ltd.
Merjis - web marketing and technology - http://merjis.com
Team Notepad - intranets and extranets for business - http://team-notepad.com

From: Janne H. <jjh...@gm...> - 2005-04-25 02:27:38
Hi,

> val one_big_int : big_int
> val two_big_int : big_int
...
> val hundred_big_int : big_int

Is there some reason (other than optimization) for not using
big_int_of_int instead?

Best regards,
Janne

From: Richard J. <ri...@an...> - 2005-04-24 22:53:57
A suggestion for the ExtBig_int interface is below. I'm about 3/4 of the
way through the implementation, which will follow. Comments welcome.

Some comments of my own:

* Big_int is implemented on top of Nat. I got halfway through the
  implementation before realising this, but it looks like I might have
  tackled this at the wrong level. OTOH, Big_int itself provides no way to
  access the underlying Nat; also I suspect that most people will be using
  Big_int directly, even for number-theoretical / crypto work, because Nat
  (natural numbers) can be too limiting.

* Some of the functions overlap with work done by Numerix / Cryptokit /
  GMP. My intention wasn't to make this a competitor to those - just to
  make Big_ints a bit more useful out of the box.

* I added lsl, lsr, land, lor and lxor operators. These are useful, but
  unfortunately the implementation is very slow because the basic Big_int
  module doesn't really provide any bit-oriented methods. We could get
  around this by using Obj.magic to break encapsulation, but the
  implementation would then be both complex and fragile.

* My current implementation of Big_int <-> binary representations is slow,
  also because of the previous point.

* Factorization would be nice, but I'm not confident that I can implement
  a suitable algorithm. Perhaps this area is just too specialised for
  extlib.

Rich.

----------------------------------------------------------------------

(*
 * ExtBig_int - Additional functions for handling big_ints.
 * Copyright (C) 2005 Richard W.M. Jones
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version,
 * with the special exception on linking described in file LICENSE.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *)

(** Additional functions for handling [big_int]s. *)

module Big_int :
sig
  type big_int
  (** The type of big integers. *)

  (** {6 New Functions} *)

  val one_big_int : big_int
  val two_big_int : big_int
  val three_big_int : big_int
  val four_big_int : big_int
  val five_big_int : big_int
  val six_big_int : big_int
  val seven_big_int : big_int
  val eight_big_int : big_int
  val nine_big_int : big_int
  val ten_big_int : big_int
  val sixteen_big_int : big_int
  val twenty_big_int : big_int
  val thirty_big_int : big_int
  val thirty_two_big_int : big_int
  val fourty_big_int : big_int
  val fifty_big_int : big_int
  val sixty_big_int : big_int
  val sixty_four_big_int : big_int
  val seventy_big_int : big_int
  val eighty_big_int : big_int
  val ninety_big_int : big_int
  val hundred_big_int : big_int
  val minus_one_big_int : big_int
  val minus_two_big_int : big_int
  val minus_three_big_int : big_int
  val minus_four_big_int : big_int
  val minus_five_big_int : big_int
  val minus_six_big_int : big_int
  val minus_seven_big_int : big_int
  val minus_eight_big_int : big_int
  val minus_nine_big_int : big_int
  val minus_ten_big_int : big_int
  val minus_sixteen_big_int : big_int
  val minus_twenty_big_int : big_int
  val minus_thirty_big_int : big_int
  val minus_thirty_two_big_int : big_int
  val minus_fourty_big_int : big_int
  val minus_fifty_big_int : big_int
  val minus_sixty_big_int : big_int
  val minus_sixty_four_big_int : big_int
  val minus_seventy_big_int : big_int
  val minus_eighty_big_int : big_int
  val minus_ninety_big_int : big_int
  val minus_hundred_big_int : big_int
  (** The constant big integers [1], [2], [3], ..., [-1], [-2], [-3], ... *)

  val is_zero_big_int : big_int -> bool
  val is_one_big_int : big_int -> bool
  val is_minus_one_big_int : big_int -> bool
  (** These functions return true iff the [big_int] parameter is equal
      to [0] / [1] / [-1]. *)

  val compare_int_big_int : int -> big_int -> int
  (** [compare_int_big_int i bi] compares [int] [i] to [big_int] [bi],
      returning [0] if [i] and [bi] are numerically equal, [1] if [i] is
      greater than [bi], or [-1] if [i] is less than [bi]. *)

  val is_odd_big_int : big_int -> bool
  val is_even_big_int : big_int -> bool
  (** These functions return true iff the [big_int] parameter is
      odd / even. *)

  val lsl_big_int : big_int -> int -> big_int
  (** Logical shift left of the [big_int] by a number of [bits]. The
      result is undefined if [bits < 0]. The sign bit is ignored for the
      purposes of shifting. *)

  val lsr_big_int : big_int -> int -> big_int
  (** Logical shift right of the [big_int] by a number of [bits]. The
      result is undefined if [bits < 0]. The sign bit is ignored for the
      purposes of shifting. *)

  val land_big_int : big_int -> big_int -> big_int
  (** Logical bitwise AND of two [big_int]s. The sign bit is ignored. *)

  val lor_big_int : big_int -> big_int -> big_int
  (** Logical bitwise OR of two [big_int]s. The sign bit is ignored. *)

  val lxor_big_int : big_int -> big_int -> big_int
  (** Logical bitwise XOR of two [big_int]s. The sign bit is ignored. *)

  val double_big_int : big_int -> big_int
  (** [double_big_int n] is equivalent to [lsl_big_int n 1]. *)

  val half_big_int : big_int -> big_int
  (** [half_big_int n] is equivalent to [lsr_big_int n 1]. *)

  val add_two_big_int : big_int -> big_int
  val sub_two_big_int : big_int -> big_int
  (** Add / subtract [2] from the [big_int]. *)

  val power_of_two_big_int : int -> big_int
  (** [power_of_two_big_int n] returns [2^n] as a [big_int]. *)

  val binary_of_big_int : big_int -> int list
  (** [binary_of_big_int n] returns [n] as a list of [0]s and [1]s,
      representing the shortest binary representation of [n]. The most
      significant bit is the head of the list. *)

  val big_int_of_binary : int list -> big_int
  (** [big_int_of_binary bits]. [bits] should be a list of [0]s and [1]s
      only. The function returns the big_int of the binary representation
      from the list. The most significant bit should be first in the
      list. *)

  val binary_string_of_big_int : big_int -> string
  (** [binary_string_of_big_int n] returns [n] as a string containing only
      ['0'] and ['1'] characters, representing the shortest binary
      representation of [n]. The most significant bit is the first
      character of the string. *)

  val big_int_of_binary_string : string -> big_int
  (** [big_int_of_binary_string bits]. [bits] should be a string of ['0']
      and ['1'] characters only. The function returns the big_int of the
      binary representation in the string. The most significant bit should
      be the first character of the string. *)

  val random_big_int : ?f:(unit -> int * int) -> int -> big_int
  (** [random_big_int bits] returns a random big_int of length [bits]
      bits. The result will be between [0] and [2^bits - 1].

      If the optional [~f] parameter is not passed, then the standard
      OCaml random number generator is used (ie. the function
      {!Random.bits}).

      To gain more control over the source of randomness -- for instance
      to supply random numbers from an external source of entropy -- the
      caller should supply a function [~f] of type [unit -> int * int].
      For each call this function should return [(random, bits)]. The
      first element of the tuple should be a positive random integer. The
      second element of the tuple should be the number of bits of
      randomness contained in the random integer, where
      [1 <= bits <= 30].

      [random_big_int] may raise [Failure] if [~f] returns a negative
      random integer, or if [bits] is outside the prescribed range. *)

  val random_prime_big_int :
    ?f:(unit -> int * int) -> ?prime_test:(big_int -> bool) -> int -> big_int
  (** [random_prime_big_int bits] returns a random prime in the range [0]
      to [2^bits - 1]. [bits >= 1], else this function will raise
      [Invalid_argument].

      The optional function parameter [~f] can be used to pass in an
      external source of entropy, as described in {!random_big_int}.

      The optional function parameter [~prime_test] can be used to pass in
      a primality testing function. By default the function
      {!is_prime_big_int} is used. *)

  val miller_rabin_primality_big_int : big_int -> rounds:int -> bool
  (** Probabilistic primality test for a big_int using the Miller-Rabin
      method. [~rounds] is the number of rounds of testing to perform. The
      confidence that the number is actually prime if this function
      returns [true] is [1 - 1 / 4^rounds]. *)

  val is_prime_big_int : big_int -> bool
  (** Probabilistic test of primality using a suitable algorithm with
      reasonable defaults. *)

  (*
  val factorize_big_int : big_int -> (big_int * int) list
  (** Factorize the [big_int] into factors using a suitable algorithm.
      Returns a list of the unique factors and the multiplicity of each.
      For example, [factorize_big_int (big_int_of_int 100)] would return
      the list [[ big_int_of_int 2, 2; big_int_of_int 5, 2 ]], meaning
      that [100 = 2^2 * 5^2]. Note that factorizing large numbers can be
      slow. *)
  *)

  (** {6 Older Functions} *)

  (** Please refer to the OCaml Manual for documentation of these
      functions. *)

  val zero_big_int : big_int
  val unit_big_int : big_int
  val minus_big_int : big_int -> big_int
  val abs_big_int : big_int -> big_int
  val add_big_int : big_int -> big_int -> big_int
  val succ_big_int : big_int -> big_int
  val add_int_big_int : int -> big_int -> big_int
  val sub_big_int : big_int -> big_int -> big_int
  val pred_big_int : big_int -> big_int
  val mult_big_int : big_int -> big_int -> big_int
  val mult_int_big_int : int -> big_int -> big_int
  val square_big_int : big_int -> big_int
  val sqrt_big_int : big_int -> big_int
  val quomod_big_int : big_int -> big_int -> big_int * big_int
  val div_big_int : big_int -> big_int -> big_int
  val mod_big_int : big_int -> big_int -> big_int
  val gcd_big_int : big_int -> big_int -> big_int
  val power_int_positive_int : int -> int -> big_int
  val power_big_int_positive_int : big_int -> int -> big_int
  val power_int_positive_big_int : int -> big_int -> big_int
  val power_big_int_positive_big_int : big_int -> big_int -> big_int
  val sign_big_int : big_int -> int
  val compare_big_int : big_int -> big_int -> int
  val eq_big_int : big_int -> big_int -> bool
  val le_big_int : big_int -> big_int -> bool
  val ge_big_int : big_int -> big_int -> bool
  val lt_big_int : big_int -> big_int -> bool
  val gt_big_int : big_int -> big_int -> bool
  val max_big_int : big_int -> big_int -> big_int
  val min_big_int : big_int -> big_int -> big_int
  val num_digits_big_int : big_int -> int
  val string_of_big_int : big_int -> string
  val big_int_of_string : string -> big_int
  val big_int_of_int : int -> big_int
  val is_int_big_int : big_int -> bool
  val int_of_big_int : big_int -> int
  val float_of_big_int : big_int -> float

  (** {6 For internal use} *)

  val nat_of_big_int : big_int -> Nat.nat
  val big_int_of_nat : Nat.nat -> big_int
  val base_power_big_int : int -> int -> big_int -> big_int
  val sys_big_int_of_string : string -> int -> int -> big_int
  val round_futur_last_digit : string -> int -> int -> bool
  val approx_big_int : int -> big_int -> string
end

--
Richard Jones, CTO Merjis Ltd.
Merjis - web marketing and technology - http://merjis.com
Team Notepad - intranets and extranets for business - http://team-notepad.com

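On the "slow shifts" point above: without any bit-level access the shifts
can still be expressed through ordinary Big_int arithmetic. The following
naive sketch is not Richard's implementation; it operates on the absolute
value (matching the "sign bit is ignored" specification above) and is slow
for large shift counts.

open Big_int

(* Shift the magnitude left/right by multiplying/dividing by 2^bits. *)
let lsl_big_int n bits =
  mult_big_int (abs_big_int n) (power_int_positive_int 2 bits)

let lsr_big_int n bits =
  div_big_int (abs_big_int n) (power_int_positive_int 2 bits)
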
From: Alain F. <Ala...@in...> - 2005-03-29 14:31:15
Nicolas Cannasse wrote:

> Hi list!
>
> I took some free time in order to make an ExtLib 1.4 "long awaited"
> release ;)
>
> Most of the changes are bugfixes, since the API has not been evolving
> much recently.
>
> I'll wait a few days before the official caml-list announcement, so
> that the GODI / Debian release managers can package it properly.

OK for the GODI package.

-- Alain

From: Nicolas C. <war...@fr...> - 2005-03-24 11:06:09
Hi list!

I took some free time in order to make an ExtLib 1.4 "long awaited"
release ;)

Most of the changes are bugfixes, since the API has not been evolving much
recently.

I'll wait a few days before the official caml-list announcement, so that
the GODI / Debian release managers can package it properly.

Regards,
Nicolas Cannasse

From: Koprowski, A. <A.K...@tu...> - 2005-03-10 10:29:32
Hello,

I'm looking for some documentation/tutorial/examples on how to use the
Event module from the OCaml threads library. Any suggestions? Anything
more informative than
http://caml.inria.fr/ocaml/htmlman/libref/Event.html is welcome!

Thanks a lot in advance!
Adam Koprowski

------------------------------------------------------------------------
 Adam Koprowski (A.K...@tu...)
 Department of Mathematics and Computer Science
 Technical University Eindhoven (TU/e)

 The difference between impossible and possible lies in determination
        -- Tommy Lasorda
------------------------------------------------------------------------

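For anyone else landing on this question, here is a minimal, unofficial
example of the Event module: send and receive only build events, and
nothing happens until both sides sync on them. The build command in the
comment is just one plausible invocation.

(* Compile with the threads library, e.g.:
     ocamlc -thread unix.cma threads.cma example.ml
   One thread offers a value on a channel; the main thread syncs on a
   receive event, which blocks until the matching send is synchronized. *)
let () =
  let ch : string Event.channel = Event.new_channel () in
  let producer =
    Thread.create
      (fun () -> Event.sync (Event.send ch "hello from the other thread"))
      ()
  in
  let msg = Event.sync (Event.receive ch) in
  print_endline msg;
  Thread.join producer
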
From: Yaron M. <ym...@gm...> - 2005-03-09 02:54:53
I've been playing around with the ExtLib sources, looking at the
possibility of adding labeled versions of some of the modules, to parallel
the labelled versions in the standard library. In going through the
exercise, I noticed some things that are in the standard library but
omitted from ExtLib's versions, and I was wondering if this was
accidental. On the whole, it would be nice for ExtLib to be a drop-in
replacement, which to me indicates that ExtLib should include the full
contents of the standard libraries it's trying to replace.

Here's what I've noticed so far: in the List module, rev_map2 is omitted.
Also, the unsafe operations are omitted from the String module (and
String.get, length, set and create are not built as externals, which I
think introduces an unnecessary performance penalty).

So are these bugs or purposeful omissions?

Yaron

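For reference (this is not ExtLib code), rev_map2 is a one-liner over
fold_left2, which is why its omission reads more like an oversight than a
design decision:

(* Like map2, but tail-recursive and returning results in reverse order.
   Raises Invalid_argument on lists of unequal length, as fold_left2
   does. *)
let rev_map2 f l1 l2 =
  List.fold_left2 (fun acc x y -> f x y :: acc) [] l1 l2
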
From: Bardur A. <sp...@sc...> - 2005-03-06 07:30:28
Bardur Arantsson wrote:

[--snip--]

> The test results are quite interesting. On my Athlon64
> 3500+, with 2GB of memory, a couple of test runs yield:
>
> $ ./examples/test1.opt
> Benchmarks...
> 10000000 x VList.(::)... 1.05192
> 10000000 x List.(::)... 2.28799
> 50000 x VList.nth... 0.000813007
> 50000 x List.nth... 14.4831
> 50000 x VList.length... 0.00608492
> 50000 x List.length... 32.4571
> 10000000 x cdr(VList)... 0.176178
> 10000000 x cdr(VList)... 0.116828
>
> $ ./examples/test1.opt
> Benchmarks...
> 10000000 x VList.(::)... 0.926367
> 10000000 x List.(::)... 2.01412
> 50000 x VList.nth... 0.00077486
> 50000 x List.nth... 12.8647
> 50000 x VList.length... 0.00628901
> 50000 x List.length... 32.2211
> 10000000 x cdr(VList)... 0.179661
> 10000000 x cdr(VList)... 0.131431

[--snip--]

I've updated the code to avoid NULLs (actually only required minimal
changes... which was nice). It's available here:

http://www.imada.sdu.dk/~bardur/ocaml-vlist-20050306.tar.bz2

Here's a benchmark with the old code, this time using floats:

./examples/test1.opt
Benchmarks...
10000000 x VList.(::)... 3.1112
10000000 x List.(::)... 5.49784
50000 x VList.nth... 0.00185108
50000 x List.nth... 30.003
50000 x VList.length... 0.00599289
50000 x List.length... 63.9696
10000000 x cdr(VList)... 0.144912
10000000 x cdr(VList)... 0.242838

... and here's the new code:

./examples/test1.opt
Benchmarks...
10000000 x VList.(::)... 0.364121
10000000 x List.(::)... 5.17452
50000 x VList.nth... 0.00150108
50000 x List.nth... 29.8347
50000 x VList.length... 0.00717998
50000 x List.length... 64.5293
10000000 x VList.tl... 0.182895
10000000 x VList.tl... 0.246443

Quite a huge improvement for cons which fares *much* better against List
in the case of huge lists of floats. I'm not sure if anyone actually uses
huge lists of floats, but there you go. :)

Other than that the performance is basically the same as before (both in
the float case and the non-float case).

Cheers,

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

Just then, Neville caused a slight diversion by turning into a large
canary.
        J.K. Rawling, 'Goblet of Fire'

From: Bardur A. <sp...@sc...> - 2005-03-06 05:49:37
Alain Frisch wrote:

> Bardur Arantsson wrote:
>
>> I think there *might* be a problem with doing this. I don't remember
>> the details right now, but I do remember being quite sure it was
>> actually necessary to fill up using NULLs instead of copies.
>> (Obviously it had something to do with the extra references causing
>> objects to hang around for longer than necessary).
>
> I've thought about that, and I don't see any problem, since the "used
> part" of the array is read-only. If the value used as an initializer is
> referenced through any cell of the array, then the first cell - which
> still contains the same value - is also referenced.

Yup, you're certainly right about that. The only potential "problem" is in
situations like VList.map, where you have references *after* the
last_used:

     |---|  \             |----|
     | . |   |-- not x    | .. |
     | . |  /             | .. |
L -> | x |  ---map--->    | x' | <- L'
     | x |                | ?? |
     | x |                | ?? |
     |---|                |----|

Since the reference to L may be dropped later (and there are none before
position L), you certainly don't want to set '??' to x; you want to set
them to x'.

So the "problem" is just that you have to be a bit more careful about
initializers when copying the array. But since Array.create will actually
"force" you to think about what the initializer should be, it shouldn't be
a problem in practice.

(And besides, I only need to get the code right *once* :))

Thanks for the help, I'll remove all the NULL stuff some time today and
make a new snapshot.

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

It hovered above the ground, much in the way that bricks don't.
        Douglas Adams, 'Hitchhiker's Guide to the Galaxy'

From: Alain F. <Ala...@in...> - 2005-03-06 00:15:17
Bardur Arantsson wrote:

> I think there *might* be a problem with doing this. I don't remember the
> details right now, but I do remember being quite sure it was actually
> necessary to fill up using NULLs instead of copies. (Obviously it had
> something to do with the extra references causing objects to hang around
> for longer than necessary).

I've thought about that, and I don't see any problem, since the "used
part" of the array is read-only. If the value used as an initializer is
referenced through any cell of the array, then the first cell - which
still contains the same value - is also referenced.

-- Alain

From: Bardur A. <sp...@sc...> - 2005-03-05 23:58:30
Alain Frisch wrote:

>>> Why don't you use the element you add (in ::) and an equivalent of
>>> Array.map (in map)? (Array.map as it is would produce a different
>>> traversal order for the vlists)
>>
>> I'm not sure I follow... Once you have a pointer into the list (ie.
>> the parameter you're calling VList.map with) everything back from that
>> point is read-only, so you don't need to construct anything using
>> (::)...?!?
>
> I mean, instead of putting a dummy value (__NULL__ ()) when you allocate
> a new block in (::), you could use the first argument of (::) as the
> initializer for the array. And it has the correct type, by definition.
> So if you create vlists of floats, the arrays will end up being real
> arrays of floats (with inlined floats), not arrays of boxed floats.

Ah, got ya.

I think there *might* be a problem with doing this. I don't remember the
details right now, but I do remember being quite sure it was actually
necessary to fill up using NULLs instead of copies. (Obviously it had
something to do with the extra references causing objects to hang around
for longer than necessary).

Actually, I'm not so sure that it is actually necessary, right now. I'll
examine it more closely tomorrow.

> Really, there is no need to use unsafe features here. Be on the safe
> side! (and it will be more efficient for float vlists)

Of course I'd like to avoid them, but I was sure I needed the NULLs at the
time of implementation... that may change tomorrow. :)

Cheers,

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

Thank you very little.

From: Alain F. <Ala...@in...> - 2005-03-05 23:36:20
Bardur Arantsson wrote:

>> Also, I'm concerned with your Obj.magic to create dummy elements to
>> fill the array. Are you sure it will work with vlists of floats?
>
> A test with three elements suggests, yes. :) Aren't floats
> heap-allocated?

Floats in float arrays are inlined. It might be possible to work with
arrays of heap-allocated floats with the kind of magic you use, but I
would not recommend doing that.

> I think it will work in general because a 'NULL' pointer made through
> Obj.magic will be at least as large as any non-heap object (and heap
> objects are of course pointed to, so that'll work too).

The NULL pointer is really the 0 integer (represented by 1 in memory).

>> Why don't you use the element you add (in ::) and an equivalent of
>> Array.map (in map)? (Array.map as it is would produce a different
>> traversal order for the vlists)
>
> I'm not sure I follow... Once you have a pointer into the list (ie. the
> parameter you're calling VList.map with) everything back from that point
> is read-only, so you don't need to construct anything using (::)...?!?

I mean, instead of putting a dummy value (__NULL__ ()) when you allocate
a new block in (::), you could use the first argument of (::) as the
initializer for the array. And it has the correct type, by definition. So
if you create vlists of floats, the arrays will end up being real arrays
of floats (with inlined floats), not arrays of boxed floats.

> I'm also pretty sure the current VList.map does produce a result with
> the correct ordering.

I don't claim the opposite. I just wanted to say that you could use the
same implementation technique as for Array.map to get rid of the __NULL__
in VList.map, but Array.map by itself would give a wrong traversal order.

Really, there is no need to use unsafe features here. Be on the safe side!
(and it will be more efficient for float vlists)

Cheers,

Alain

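To make the point concrete, here is a heavily simplified cons for a
VList-like structure. This is not the ocaml-vlist code (it omits the empty
list, growth tuning, sharing subtleties and thread safety); it only shows
that the consed element itself can serve as the Array.make initializer, so
no Obj.magic dummy is needed and float blocks become genuine unboxed float
arrays.

(* Simplified sketch, not the real implementation. *)
type 'a block = {
  data : 'a array;            (* cells 0 .. used-1 hold list elements *)
  mutable used : int;
  tail : 'a vlist option;     (* rest of the list, in an older block *)
}
and 'a vlist = { block : 'a block; index : int }

let cons x l =
  let b = l.block in
  if l.index = b.used - 1 && b.used < Array.length b.data then begin
    (* l points at the top of a block with free room: extend in place *)
    b.data.(b.used) <- x;
    b.used <- b.used + 1;
    { block = b; index = b.used - 1 }
  end else begin
    (* allocate a new, larger block; x itself is the initializer *)
    let data = Array.make (2 * Array.length b.data) x in
    { block = { data = data; used = 1; tail = Some l }; index = 0 }
  end
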
From: Bardur A. <sp...@sc...> - 2005-03-05 23:31:07
Bardur Arantsson wrote:

> Alain Frisch wrote:
> I'm also pretty sure the current VList.map does produce a result with
> the correct ordering. The resulting VLists are created as in e.g.
> [--snip--]

Of course I screwed up the indexes... that should have been

      |-------| --> |-------| --> |-------| -> (NIL)
      | l'[2] |     | l'[4] |     | l'[5] |
      | l'[1] |     | l'[3] |
 l -> | l'[0] |     | ..... |

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

... and that proves the base case. One down, infinitely many to go.

From: Bardur A. <sp...@sc...> - 2005-03-05 23:18:26
Alain Frisch wrote:

>> $ ./examples/test1.opt
>> Benchmarks...
>> 10000000 x VList.(::)... 1.05192
>> 10000000 x List.(::)... 2.28799
>> 50000 x VList.nth... 0.000813007
>> 50000 x List.nth... 14.4831
>> 50000 x VList.length... 0.00608492
>> 50000 x List.length... 32.4571
>> 10000000 x cdr(VList)... 0.176178   <== VList.tl
>> 10000000 x cdr(VList)... 0.116828   <== List.tl
>
> (corrections above, just to make it absolutely clear)
>
> I'm quite surprised that the slowdown for VList.tl is so small,
> considering it has to allocate a block. Isn't it the case that most of
> the time is actually spent in the loop itself, and not in the tl
> operation?

I guess that could be quite likely.

> The good thing is that iterators (fold, map, iter) can be implemented
> without allocating that much.

- iter doesn't need to allocate anything
- fold doesn't need to allocate anything (I *think*... it may need a
  logarithmic number of 'pointers')
- map of course needs to allocate data blocks, but by using blocks
  directly it avoids having to allocate lots of (temporary) pointers.

> The paper describes a way to represent a list with a single pointer,
> which would avoid allocating for the tl operation. I guess this cannot
> be easily implemented in OCaml, since it requires cooperation with the
> GC. Do you have any idea about it?

I only briefly considered it, but if one were to do it in pure OCaml it
would probably need more magic than I can pull off. :)

> Also, I'm concerned with your Obj.magic to create dummy elements to fill
> the array. Are you sure it will work with vlists of floats?

A test with three elements suggests, yes. :) Aren't floats heap-allocated?

I think it will work in general because a 'NULL' pointer made through
Obj.magic will be at least as large as any non-heap object (and heap
objects are of course pointed to, so that'll work too).

> Why don't you use the element you add (in ::) and an equivalent of
> Array.map (in map)? (Array.map as it is would produce a different
> traversal order for the vlists)

I'm not sure I follow... Once you have a pointer into the list (ie. the
parameter you're calling VList.map with) everything back from that point
is read-only, so you don't need to construct anything using (::)...?!?

I'm also pretty sure the current VList.map does produce a result with the
correct ordering. The resulting VLists are created as in e.g.

      |-------| --> |-------| --> |-------| -> (NIL)
      | l'[2] |     | l'[5] |     | l'[6] |
      | l'[1] |     | l'[4] |
 l -> | l'[0] |     | ..... |

(Coincidentally, the reason I'm not just calling Array.map, but actually
traversing the array backwards (and copying each element individually),
is that I want the *application* of the user's closure to be performed
in-order as well... if their closure has side-effects this would be the
natural way to expect things to work, although I haven't checked to see
if List.map actually specifies this.)

Cheers,

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

- Thank God it landed in that smoking crater!
        Chief Clancy Wiggum, 'The Simpsons'

From: Alain F. <Ala...@in...> - 2005-03-05 22:39:34
Hi,

Bardur Arantsson wrote:

> Just for the heck of it I implemented the VList data structure
> described in
>
> http://icwww.epfl.ch/publications/documents/IC_TECH_REPORT_200244.pdf

Cute.

> $ ./examples/test1.opt
> Benchmarks...
> 10000000 x VList.(::)... 1.05192
> 10000000 x List.(::)... 2.28799
> 50000 x VList.nth... 0.000813007
> 50000 x List.nth... 14.4831
> 50000 x VList.length... 0.00608492
> 50000 x List.length... 32.4571
> 10000000 x cdr(VList)... 0.176178
> 10000000 x cdr(VList)... 0.116828

(The last line is actually List.tl)

I'm quite surprised that the slowdown for VList.tl is so small,
considering it has to allocate a block. Isn't it the case that most of the
time is actually spent in the loop itself, and not in the tl operation?

The good thing is that iterators (fold, map, iter) can be implemented
without allocating that much.

The paper describes a way to represent a list with a single pointer, which
would avoid allocating for the tl operation. I guess this cannot be easily
implemented in OCaml, since it requires cooperation with the GC. Do you
have any idea about it?

Also, I'm concerned with your Obj.magic to create dummy elements to fill
the array. Are you sure it will work with vlists of floats? Why don't you
use the element you add (in ::) and an equivalent of Array.map (in map)?
(Array.map as it is would produce a different traversal order for the
vlists)

-- Alain

From: Bardur A. <sp...@sc...> - 2005-03-05 11:51:43
Hi all,

Just for the heck of it I implemented the VList data structure described
in

http://icwww.epfl.ch/publications/documents/IC_TECH_REPORT_200244.pdf

I've only implemented the basic stuff needed to do some benchmarking, but
if there is interest I might implement the rest of the List interface.

There are two things to note about the implementation as it is:

- It is not currently thread safe, but achieving thread safety should
  only require very minor modifications.
- For small lists, a considerable amount of memory is wasted, but as the
  lists grow, the space consumed gets asymptotically closer to that of an
  array(!).

A snapshot can be retrieved from

http://www.imada.sdu.dk/~bardur/personal/programs/ocaml-vlist/ocaml-vlist-20050305.tar.bz2

The test results are quite interesting. On my Athlon64 3500+, with 2GB of
memory, a couple of test runs yield:

$ ./examples/test1.opt
Benchmarks...
10000000 x VList.(::)... 1.05192
10000000 x List.(::)... 2.28799
50000 x VList.nth... 0.000813007
50000 x List.nth... 14.4831
50000 x VList.length... 0.00608492
50000 x List.length... 32.4571
10000000 x cdr(VList)... 0.176178
10000000 x cdr(VList)... 0.116828

$ ./examples/test1.opt
Benchmarks...
10000000 x VList.(::)... 0.926367
10000000 x List.(::)... 2.01412
50000 x VList.nth... 0.00077486
50000 x List.nth... 12.8647
50000 x VList.length... 0.00628901
50000 x List.length... 32.2211
10000000 x cdr(VList)... 0.179661
10000000 x cdr(VList)... 0.131431

which looks quite good, especially considering that it's 100% pure OCaml.

Maybe a candidate for ExtLib?

--
Bardur Arantsson
<ba...@im...>
<ba...@sc...>

- What is it? The language? Or the nudity, hopefully...?
        David Letterman, 'The Late Show'

From: HENRIKSON, J. <JE...@SA...> - 2005-03-02 23:31:28
It occurred to me that your site on SourceForge has webcvs, so I pulled
the one file down, and so far that appears to have fixed the problem.

Jeff

> -----Original Message-----
> From: Nicolas Cannasse [mailto:war...@fr...]
> Sent: Wednesday, March 02, 2005 11:57 AM
> To: HENRIKSON, JEFFREY
> Cc: oca...@li...
> Subject: Re: [Ocaml-lib-devel] Extlib.DynArray unstable
>
> > I downloaded a tarball from sourceforge called
> >
> > extlib-1.3.tgz
> >
> > Is this old? I don't see a newer one posted.
>
> Yes, the bug was reported after the 1.3 release.
>
> > Or are you saying I need to download ocaml from CVS? I have their
> > 3.08.1 tarball.
>
> I think one of the following two will fix the bug: update ocaml from
> CVS, OR update extlib from CVS. The bug itself comes from ocaml, and
> extlib-cvs has a patch for a workaround until the bug is fixed (it
> should be in ocaml-3.08.3).
>
> Nicolas