From: Tim Hochberg <tim.hochberg@ie...> - 2002-01-07 18:31:35

Following up myself: you can ignore this last message. By trying several
different things and mindlessly copying pintobject.h, I now have rudimentary
float support working in psyco. Only add and subtract are working right now,
and conversion from ints to floats is not yet supported, but the following
silly function runs 4.5 times faster than base Python in my version of Psyco,
versus 2.5 times faster in CVS psyco.

def f10():
    z = 0.0
    for i in range(100000):
        z = z - 5.0
        y = z + 10.
        x = y + 10.
        y = z - 10.
        x = y + 10.
        y = z + 10.
        x = y - 10.
        y = z + 10.
        x = y + 10.
        x = y + 10.
        y = z - 10.
        x = y + 10.
        y = z + 10.
        x = y + 10.
        z = z - x
    return z

The next step is to get the other operations working, which should be easy,
and to get coercion working, which I _hope_ will be easy, but I haven't
looked into it yet...

tim

----- Original Message -----
From: "Tim Hochberg" <tim.hochberg@...>
To: "Armin Rigo" <arigo@...>
Cc: <psyco-devel@...>
Sent: Saturday, January 05, 2002 2:42 PM
Subject: Attempting the 'quick' float solution

>
> Some time ago Armin wrote:
>
> > [SNIP] let me first
> > describe a quick solution that would probably still give a serious
> > speedup to all FP operations.
>
> I'm attempting to do this on the theory that as I work on the little pieces,
> I may absorb enough by osmosis to understand how Psyco actually works. This
> may be false, but perhaps I can accomplish something useful anyway. If
> someone else is already working on this, let me know and I'll try something
> else. I'm sure I'll be asking for guidance repeatedly, but I'll try not to
> be too much of a pest...
>
> These parts are 'done' (they will certainly require some debugging):
>
> * cimpl_fp_XXX
> * PsycoFloat_FROM_DOUBLE()
> * PsycoFloat_AS_DOUBLE_1() and PsycoFloat_AS_DOUBLE_2()
>
> Hmmm.... that list looks awfully short so far ... sigh. Anyway, my question
> for today involves pfloat_add and friends. Armin suggested:
>
> > 2) meta-implementation: just like we have pint_add()&co, we need
> > pfloat_add()&co.
> > A priori, pfloat_add() is something like:
> >
> > static vinfo_t* pfloat_add(PsycoObject* po, vinfo_t* v, vinfo_t* w)
> > {
> >     vinfo_t *a1, *a2, *b1, *b2, *x;
> >     vinfo_array_t* result;
> >     a1 = PsycoFloat_AS_DOUBLE_1(po, v);
> >     a2 = PsycoFloat_AS_DOUBLE_2(po, v);
> >     b1 = PsycoFloat_AS_DOUBLE_1(po, w);
> >     b2 = PsycoFloat_AS_DOUBLE_2(po, w);
> >     /* ... */
> > }
>
> Is there any reason that this can't be done in the same way as pint_add?
> For example:
>
> static vinfo_t* pfloat_add(PsycoObject* po, vinfo_t* v, vinfo_t* w)
> {
>     vinfo_t *a1, *a2, *b1, *b2, *x;
>     vinfo_array_t* result;
>     CONVERT_TO_DOUBLE(v, a1, a2);
>     CONVERT_TO_DOUBLE(w, b1, b2);
>     /* ... */
> }
>
> Where CONVERT_TO_DOUBLE would start out as:
>
> #define CONVERT_TO_DOUBLE(vobj, vlng1, vlng2)             \
>     if (Psyco_TypeSwitch(po, vobj, &psyfs_float) == 0) {  \
>         vlng1 = PsycoFloat_AS_DOUBLE_1(po, vobj);         \
>         vlng2 = PsycoFloat_AS_DOUBLE_2(po, vobj);         \
>         if (vlng1 == NULL || vlng2 == NULL)               \
>             return NULL;                                  \
>     }                                                     \
>     else {                                                \
>         if (PycException_Occurred(po))                    \
>             return NULL;                                  \
>         vinfo_incref(psyco_viNotImplemented);             \
>         return psyco_viNotImplemented;                    \
>     }
>
> Eventually it would get extended as we allowed integers and other types to
> be converted to floats. psyfs_float would have to be defined in pobject.c
> and presumably entered into some array of fixed_switch values somewhere, but
> I haven't got that far yet.
>
> Is this a reasonable way to approach this if my goal is to first get
> float-float operations working and then to branch out from there, or am I
> way off target here....
>
> Thanks,
>
> tim
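[Editorial note: Tim's "4.5 times faster" figure can be checked with a small timing harness around the f10 benchmark above. The sketch below is illustrative, not from the thread; the `psyco.bind()` entry point is an assumption (the name used by later Psyco releases), and the script falls back to a plain run when psyco is not importable.]

```python
import timeit

def f10():
    # The benchmark function from Tim's message, reconstructed.
    z = 0.0
    for i in range(100000):
        z = z - 5.0
        y = z + 10.
        x = y + 10.
        y = z - 10.
        x = y + 10.
        y = z + 10.
        x = y - 10.
        y = z + 10.
        x = y + 10.
        x = y + 10.
        y = z - 10.
        x = y + 10.
        y = z + 10.
        x = y + 10.
        z = z - x
    return z

# Time the plain-Python version first.
plain = timeit.timeit(f10, number=10)

try:
    import psyco              # only available in a Psyco-enabled interpreter
    psyco.bind(f10)           # call name assumed; compiles f10 in place
    compiled = timeit.timeit(f10, number=10)
    print("speedup: %.1fx" % (plain / compiled))
except ImportError:
    print("psyco not available; plain run took %.3fs" % plain)
```

Note that every iteration of the loop ends with the same value (z = -20.0), so the function is pure arithmetic throughput with no data dependence across iterations beyond the first.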
From: Tim Hochberg <tim.hochberg@ie...> - 2002-01-08 17:31:21

I need to take a break from working on the psyco floating point stuff and do
some work for which I get paid, but I've gotten it into what seems to be
usable shape and uploaded it as a patch on SourceForge if anyone wants to
check it out. FWIW, for a suitably tailored function, I've had results around
7.5 times faster than stock Python even with this relatively simple approach.

tim
From: Petru Paler <ppetru@pp...> - 2002-01-08 19:41:36

On Tue, 2002-01-08 at 19:31, Tim Hochberg wrote:
> I need to take a break from working on the psyco floating point stuff and do
> some work for which I get paid, but I've gotten it into what seems to be
> usable shape and uploaded it as a patch on sourceforge if anyone wants to
> check it out. FWIW, for a suitably tailored function, I've had results
> around 7.5 times faster than stock python even with this relatively simple
> approach.

I committed your files to CVS (you should ask Armin for CVS access, btw).
I also added bpnn.py in the test directory; it now runs about 14% faster
than the pure Python version (it was slower before).

--
Petru Paler, http://www.ppetru.net
From: Tim Hochberg <tim.hochberg@ie...> - 2002-01-08 20:29:56

Thanks for checking that in.

[Petru writes]
> I committed your files to CVS (you should ask Armin for CVS access,
> btw). I also added bpnn.py in the test directory, it now runs about 14%
> faster than the pure python version (it was slower before).

Interesting. I downloaded it and tried it on my machine (Windows 2000) and
I got bpnn running about 50% or more faster under psyco.

I wonder, if I implemented pow, whether I could speed it up a bit more?
Hmmm. Later....

tim
From: Petru Paler <ppetru@pp...> - 2002-01-08 20:34:33

On Tue, 2002-01-08 at 22:29, Tim Hochberg wrote:
> Interesting. I downloaded it and tried it on my machine (Windows 2000) and I
> got bpnn running about 50% or more faster under psyco.

With or without your float support? Anyway, it could be my machine -- there's
a bunch of unpredictable background stuff running, so...

> I wonder, if I implemented pow, whether I could speed it up a bit more?
> Hmmm. Later....

--
Petru Paler, http://www.ppetru.net
From: Tim Hochberg <tim.hochberg@ie...> - 2002-01-08 22:07:55

> On Tue, 2002-01-08 at 22:29, Tim Hochberg wrote:
> > Interesting. I downloaded it and tried it on my machine (Windows 2000) and I
> > got bpnn running about 50% or more faster under psyco.
>
> With or without your float support?

I get about a 1.6x speedup without float support, and about a 1.9x speedup
with it.

tim
From: Kjetil Jacobsen <kjetilja@cs...> - 2002-01-08 20:37:21

On Tue, 8 Jan 2002, Tim Hochberg wrote:
> Interesting. I downloaded it and tried it on my machine (Windows 2000) and I
> got bpnn running about 50% or more faster under psyco.

interesting indeed ;) i'm getting a 2.2 times speedup here (linux, debug
disabled), so 14% seems a little low.

-- kjetil
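[Editorial note: the thread mixes two conventions -- "N% faster" and "N times speedup" -- which makes the spread between Petru's 14% and Kjetil's 2.2x look larger than the unit mismatch alone explains. A small conversion helper, illustrative only:]

```python
def pct_to_factor(pct_faster):
    # "14% faster" means a speedup factor of 1.14x
    # (the baseline takes 1.14 times as long as the psyco run).
    return 1.0 + pct_faster / 100.0

def factor_to_pct(factor):
    # "2.2 times speedup" expressed as a percentage improvement.
    return (factor - 1.0) * 100.0

# Petru's and Kjetil's numbers in each other's convention:
print(pct_to_factor(14))    # about 1.14x
print(factor_to_pct(2.2))   # about 120% faster
```

So even after normalizing units, 1.14x versus 2.2x is a real machine-to-machine difference, consistent with Petru's caveat about background load.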
From: Armin Rigo <arigo@ul...> - 2002-01-10 10:08:04

Hello Tim,

(sorry for the delay, this message slept in my outbox)

Tim Hochberg wrote:
> Is there any reason that this can't be done in the same way as pint_add?
> For example:
>
> static vinfo_t* pfloat_add(PsycoObject* po, vinfo_t* v, vinfo_t* w)
> (...)
>
> Where CONVERT_TO_DOUBLE would start out as:
> (...)

This is perfect.

> Eventually it would get extended as we allowed integers and other types to
> be converted to floats.

Yes. For (short) integers this is easily done by using PsycoInt_AS_LONG().
For long integers you will need to define in plongobject.h a function
PsycoLong_AsDouble(), implemented in plongobject.c (a dummy implementation
just like PsycoLong_AsLong() is all right).

> psyfs_float would have to be defined in pobject.c
> and presumably entered into some array of fixed_switch values somewhere, but
> I haven't got that far yet.

Yes, exactly. The definition is done in pobject.h, where the array is
filled with (one) value in psy_object_init(). To recognize int-to-float
conversions, you will need a 'psyfs_int_long_float' array instead of
'psyfs_float'.

> Is this a reasonable way to approach this if my goal is to first get
> float-float operations working and then to branch out from there, or am I
> way off target here....

No, it is all right. I think it is a very good way to start learning how
Psyco works by translating the code of the Python interpreter into the
Psyco way.

Armin
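[Editorial note: the fallback branch of Tim's CONVERT_TO_DOUBLE -- returning psyco_viNotImplemented for an unrecognized operand -- mirrors CPython's own binary-operator protocol, where an implementation that does not know the other operand's type returns NotImplemented so the interpreter can try the reflected operation. A plain-Python analogue of that contract, illustrative only (the Celsius class is invented for the example):]

```python
class Celsius:
    """Toy numeric type whose __add__ handles only the operand types it
    recognizes and returns NotImplemented otherwise -- the same contract
    the else-branch of CONVERT_TO_DOUBLE implements at the meta level."""

    def __init__(self, degrees):
        self.degrees = float(degrees)

    def __add__(self, other):
        if isinstance(other, Celsius):
            # float-float case: both operands already the right type
            return Celsius(self.degrees + other.degrees)
        if isinstance(other, (int, float)):
            # int-to-float coercion case, Tim's planned extension
            return Celsius(self.degrees + float(other))
        # Unknown type: defer, letting Python try other.__radd__
        return NotImplemented

a = Celsius(20) + Celsius(1.5)   # handled directly
b = Celsius(20) + 5              # coerced
print(a.degrees, b.degrees)      # 21.5 25.0
```

If no reflected method handles the pair either, Python raises TypeError -- which is why the macro only propagates NotImplemented rather than raising itself.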