From: Konrad H. <hi...@cn...> - 2001-11-26 20:51:26
> We had some meetings to discuss using blitz and the truth is that as
> wrapped by Python there is not much to gain. The efficiency of blitz
> comes up when you do an array expression in C++. Then x = y + z + w + a
> + b gets compiled into one loop with no temporary objects created. But

That could still be of interest to extension module writers. And it
seems conceivable to write some limited Python-C compiler for numerical
expressions that generates extension modules, although this is more than
a weekend project.

Still, I agree that what most people care about is the speed of NumPy
operations. Some lazy evaluation scheme might be more promising to
eliminate the creation of intermediate objects, but that isn't exactly
trivial either...

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                             | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS)  | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                        | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                     | Deutsch/Esperanto/English/
France                                    | Nederlands/Francais
-------------------------------------------------------------------------------
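For readers unfamiliar with the trick Paul and Konrad are discussing, here is a
minimal hand-rolled sketch of the expression-template idea. This is illustrative
C++ only, not Blitz++ code; the names Vec and Sum are invented for the example.
The right-hand side of the assignment builds a lightweight expression object, and
the single evaluation loop runs only when operator= binds the assignment, so no
temporary arrays are created:

    #include <cstddef>
    #include <vector>

    // Expression node representing the elementwise sum of two operands.
    // It stores references only; nothing is computed until it is indexed.
    template <typename L, typename R>
    struct Sum {
        const L& l;
        const R& r;
        Sum(const L& l_, const R& r_) : l(l_), r(r_) {}
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec {
        std::vector<double> data;
        explicit Vec(std::size_t n) : data(n) {}
        double  operator[](std::size_t i) const { return data[i]; }
        double& operator[](std::size_t i)       { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Binding the assignment is where evaluation happens:
        // one loop over the whole expression, no temporary arrays.
        template <typename Expr>
        Vec& operator=(const Expr& e) {
            for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    // Adding two operands only builds a Sum node.  (A real library would
    // constrain this operator to its own array/expression types.)
    template <typename L, typename R>
    Sum<L, R> operator+(const L& l, const R& r) { return Sum<L, R>(l, r); }

    int main() {
        Vec x(1000), y(1000), z(1000), w(1000), a(1000), b(1000);
        x = y + z + w + a + b;   // evaluates in a single loop, Blitz++-style
        return 0;
    }

Since plain assignment to a name cannot be overloaded in Python, a wrapped
Blitz++ array never sees the whole expression at once, which is exactly the
limitation Paul points out.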
From: Chris B. <chr...@ho...> - 2001-11-26 21:03:00
"Paul F. Dubois" wrote: > We had some meetings to discuss using blitz and the truth is that as > wrapped by Python there is not much to gain. The efficiency of blitz > comes up when you do an array expression in C++. Then x = y + z + w + a > + b gets compiled into one loop with no temporary objects created. But > this trick is possible because you can bind the assignment. In python > you cannot bind the assignment so you cannot do a lazy evaluation of the > operations, unless you are willing to go with some sort of function call > like x = evaluate(y + z + w). Immediate evaluations means creating > temporaries, and performance is dead. > > The only gain then would be when you passed a Python-wrapped blitz array > back to C++ and did a bunch of operations there. Personally, I think this could be a big gain. At the moment, if you don't get the performance you need with NumPy, you have to write some of your code in C, and using the Numeric and Python C API is a whole lot of work, particularly if you want your function to work on non-contiguous arrays and/or arrays of any type. I don't know much C++, and I have no idea if Blitz++ fits this bill, but it seemed to me that using an object oriented framework that could take care of reference counting, and allow you to work with generic arrays, and index them naturally, etc, would be a great improvement, even if the performance was the same as the current C API. Perhaps NumPy2 has accomplished that, it sounds like it is a step in the right direction, at least. In a sentence: the most important reason for using a C++ object oriented multi-dimensional array package would be easy of use, not speed. It's nice to hear Blitz++ was considered, it was proably rejected for good reason, but it just looked very promising to me. -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
From: Travis O. <oli...@ee...> - 2001-11-26 21:23:49
> In a sentence: the most important reason for using a C++ object-oriented
> multi-dimensional array package would be ease of use, not speed.
>
> It's nice to hear Blitz++ was considered; it was probably rejected for
> good reason, but it just looked very promising to me.

I believe that Eric's "compiler" module included in SciPy uses Blitz++
to optimize Numeric expressions. Others here share your admiration of
Blitz++.

-Travis
From: Chris B. <chr...@ho...> - 2001-11-26 23:30:29
Travis Oliphant wrote:
> I believe that Eric's "compiler" module included in SciPy uses Blitz++
> to optimize Numeric expressions. Others here share your admiration of
> Blitz++.

Yes, it does. That's where I heard about it.

That also brings up a good point. Paul mentioned that using something
like Blitz++ would only help performance if you could pass it an entire
expression, like x = a+b+c+d. That is exactly what Eric's compiler
module does, and it would sure be easier if NumPy already used Blitz++!

In fact, I suppose Eric's compiler is a start towards a tool that could
compile an entire NumPy function or module. I'd love to be able to just
do that (with some tweaking, perhaps) rather than having to code it all
by hand. My fantasies continue...

-Chris

--
Christopher Barker, Ph.D.
Chr...@ho...
http://members.home.net/barkerlohmann
Oil Spill Modeling
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
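For context, assuming Blitz++'s documented Array class template, this is roughly
the kind of single-loop code one would write by hand, or that a tool like Eric's
compiler module could generate, for x = a+b+c+d. It is an untested sketch;
consult the Blitz++ documentation for the exact headers and build flags:

    #include <blitz/array.h>
    using namespace blitz;

    int main()
    {
        const int n = 1000000;
        Array<double,1> x(n), a(n), b(n), c(n), d(n);
        a = 1.0;  b = 2.0;  c = 3.0;  d = 4.0;   // scalar fill

        // Blitz++'s expression templates turn this into one loop over n
        // elements, with no temporary arrays for the partial sums.
        x = a + b + c + d;
        return 0;
    }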
From: Joe H. <jh...@oo...> - 2001-12-07 21:01:27
Something that everyone should be aware of is that right now we *may*
have an opportunity to get significant support.

Kodak has acquired RSI, makers of IDL. Most of the planetary astronomy
community uses IDL, as do many geophysicists and medical imaging people.
Kodak is dramatically raising prices, and has killed support for Mac OS
X. The IDL site license just arranged for the group at NASA Ames is over
$200k, making site licenses more expensive, on a per-license basis, than
individual licenses were just a few years ago. At the Division for
Planetary Sciences meeting last week, I was frequently approached by
colleagues who said, "Joe, what do I do?", and from the more savvy, "Is
Python ready yet?"

I discussed the possibility of producing an OSS or PD analysis system
with a NASA program manager. He sees the money going out of his own
programs to Kodak, and is concerned. However, his programs cannot fund
such an effort as it is out of his scope. The right program is probably
Applied Information Systems Research, but he is checking around NASA HQ
to see whether this is the case. He was very positive about the idea. I
suspect that a proposal will be more likely to fly this year than next,
as there is a sense of concern right now, whereas next year people will
already have adjusted.

Depending on how my '01 grant proposals turn out, I may be proposing
this to NASA in '02. Paul Barrett and I proposed it once before, in 1996
I think, but to the wrong program. Supporting parts of the effort from
different sources would be wise.

Paul Dubois makes the excellent point that such efforts generally peter
out. It would be important to set this up as an OSS project with many
contributors, some of whom are paid full-time to design and build the
core. Good foundational documents and designs, and community reviews
solicited from savvy non-participants, would help ensure that progress
continued as sources of funding appeared and disappeared...and that
there is enough wider-community support to keep it going until it
produces something.

NASA's immediate objective will be a complete data-analysis system to
replace IDL, in short order, including an IDL-to-Python converter
program. That shouldn't be hard, as IDL is a simple language, and PyDL
may have solved much of that problem.

So, at this point I'm still assessing what to do and whether/how to do
it. Should we put together proposals to the various funding agencies to
support SciPy? Should we create something new? What efforts exist in
other communities, particularly the numerical analysis community? How
much can we rely on savvy users to contribute code, and will that code
be any good?

My feeling is that there is actually a lot of money available for this,
but it will require a few people to give up their jobs and pursue it
full-time. And there, as they say, is the rub.

--jh--