From: Robert K. <rob...@gm...> - 2006-07-01 19:36:34
|
Travis Oliphant wrote:
> Charles R Harris wrote:
>> Thanks Travis,
>>
>> Your directions are very helpful and much appreciated.
>
> I placed these instructions at
>
>    http://projects.scipy.org/scipy/numpy/wiki/MakingBranches
>
> Please make any changes needed to that wiki page.

I will add (here as well as on the wiki) that I have found the svnmerge tool
to be enormously helpful in maintaining branches:

  http://www.dellroad.org/svnmerge/index

Among other things, it makes merge commit messages with the contents of the
individual commit messages, so history isn't lost when changes are merged
back into the trunk.

Here is how I tend to set things up for bidirectional merging (untested with
this specific example, though):

$ cd ~/svn/scipy
$ svn cp http://svn.scipy.org/svn/scipy/trunk http://svn.scipy.org/svn/scipy/branches/mine
$ svnmerge init http://svn.scipy.org/svn/scipy/branches/mine
$ svn commit -F svnmerge-commit-message.txt
$ svn switch http://svn.scipy.org/svn/scipy/branches/mine
$ svnmerge init http://svn.scipy.org/svn/scipy/trunk
$ svn commit -F svnmerge-commit-message.txt

Then, when you need to pull in changes from the trunk, view them with

$ svnmerge avail

and pull them in with

$ svnmerge merge
$ svn ci -F svnmerge-commit-message.txt

When you're finally done with the branch, the same procedure on the trunk
pulls in all of the (real, not merged in from the trunk) changes you've made
on the branch.

Also, if you're only going to be making changes in one directory, I've found
that it's much easier to simply branch that directory and svn switch just
that directory over. That way, you don't have to worry about pulling
everyone else's changes to the rest of the package into the branch. You can
just svn up.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
|
From: David M. C. <co...@ph...> - 2006-06-30 18:59:32
|
On Fri, 30 Jun 2006 14:42:33 -0400
"Jonathan Taylor" <jon...@ut...> wrote:

> +1 for some sort of float. I am a little confused as to why Float64
> is a particularly good choice. Can someone explain in more detail?
> Presumably this is the most sensible ctype and translates to a python
> float well?

It's "float64", btw. Float64 is the old Numeric name.

Python's "float" type is a C "double" (just like Python's "int" is a C
"long"). In practice, C doubles are 64-bit. In NumPy, these are the same
type:

float32 == single (32-bit float, which is a C float)
float64 == double (64-bit float, which is a C double)

Also, some Python types have equivalent NumPy types (as in, they can be
used interchangeably as dtype arguments):

int == long (C long, could be int32 or int64)
float == double
complex == cdouble (also complex128)

Personally, I'd suggest using "single", "float", and "longdouble" in numpy
code. [While we're on the subject, for portable code don't use float96 or
float128: one or the other or both probably won't exist; use longdouble.]

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
|
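A quick interactive check of these equivalences (illustrative only; the
exact aliases and sizes assume a typical platform where a C double is
64-bit and a C long is the native word size):

>>> import numpy as N
>>> N.dtype(float) == N.dtype(N.float64)        # Python float -> C double
True
>>> N.dtype(int) == N.dtype(N.int_)             # Python int -> C long
True
>>> N.dtype(complex) == N.dtype(N.complex128)   # Python complex -> cdouble
True
>>> N.dtype(N.single).itemsize                  # C float is 4 bytes
4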
From: Alan G I. <ai...@am...> - 2006-07-01 16:01:16
|
On Sat, 1 Jul 2006, Ed Schofield apparently wrote:
> couldn't we make the transition easier and more robust by
> writing compatibility interfaces for zeros, ones, empty,
> called e.g. intzeros, intones, intempty

I think Robert or Tim suggested int.zeros() etc.

fwiw,
Alan Isaac
|
From: Christopher B. <Chr...@no...> - 2006-06-30 19:17:22
|
Tim Hochberg wrote:
> The number one priority for numpy should be to unify the three disparate
> Python numeric packages.

I think the number one priority should be making numpy the best it can be.
As someone said, two (or ten) years from now, there will be more new users
than users migrating from the older packages.

> Personally, given no other constraints, I would probably just get rid of
> the defaults all together and make the user choose.

I like that too, and it would keep the incompatibility from causing silent
errors.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT            (206) 526-6959   voice
7600 Sand Point Way NE      (206) 526-6329   fax
Seattle, WA  98115          (206) 526-6317   main reception

Chr...@no...
|
From: Eric J. <jo...@MI...> - 2006-06-30 19:45:46
|
On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote:
> > Besides, decent unit tests will catch these problems. We all know
> > that every scientific code in existence is unit tested to the smallest
> > routine, so this shouldn't be a problem for anyone.
>
> Is this a joke? Did anyone ever measure the coverage of numpy
> unittests? I would be surprised if it was more than 10%.

Given that the coverage is so low, how can people help by contributing unit
tests? Are there obvious areas with poor coverage? Travis, do you have any
opinions on this?

...Eric
|
From: Tim L. <tim...@gm...> - 2006-07-01 00:42:15
|
On 7/1/06, Eric Jonas <jo...@mi...> wrote:
> On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote:
> > > Besides, decent unit tests will catch these problems. We all know
> > > that every scientific code in existence is unit tested to the smallest
> > > routine, so this shouldn't be a problem for anyone.
> >
> > Is this a joke? Did anyone ever measure the coverage of numpy
> > unittests? I would be surprised if it was more than 10%.
>
> Given that the coverage is so low, how can people help by contributing
> unit tests? Are there obvious areas with poor coverage? Travis, do you
> have any opinions on this?
>
> ...Eric

A handy tool for finding these things out is coverage.py. I've found it
quite helpful in checking unittest coverage in the past.

http://www.nedbatchelder.com/code/modules/coverage.html

I don't think I'll have a chance in the immediate future to try it out with
numpy, but if someone does, I'm sure it will give some answers to your
questions, Eric.

Cheers,

Tim Leslie
|
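For anyone trying this today, a rough sketch using the present-day coverage
package (its interface has changed considerably since the module linked
above, so treat this as an approximation rather than the original recipe):

import coverage
import numpy

cov = coverage.Coverage(source=["numpy"])   # measure only numpy itself
cov.start()
numpy.test()          # run the test suite while coverage is recording
cov.stop()
cov.save()
cov.report()          # per-module statement coverage summary

Note that this only measures the Python layer; the C sources need gcov, as
discussed later in the thread.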
From: Christopher B. <Chr...@no...> - 2006-06-30 20:40:47
|
Robert Kern wrote:
> It's arange(0.0, 1.0, 0.1) that I think causes the most problems with
> arange and floats.

actually, much to my surprise:

>>> import numpy as N
>>> N.arange(0.0, 1.0, 0.1)
array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9])

But I'm sure there are other examples that don't work out.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT            (206) 526-6959   voice
7600 Sand Point Way NE      (206) 526-6329   fax
Seattle, WA  98115          (206) 526-6317   main reception

Chr...@no...
|
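One example of the kind of surprise Robert means (illustrative; the exact
repr can vary by version): arange computes the number of elements as
ceil((stop - start)/step), so accumulated rounding error can produce an
extra endpoint:

>>> N.arange(1.0, 1.3, 0.1)      # one might expect it to stop at 1.2
array([ 1. ,  1.1,  1.2,  1.3])

which is why linspace is usually the safer choice when the endpoint
matters.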
From: Travis O. <oli...@ee...> - 2006-06-30 22:20:53
|
Sasha wrote:

> It is not as bad as I thought, but there is certainly room for improvement.
>
> File `numpy/core/src/multiarraymodule.c'
> Lines executed: 63.56% of 3290
>
> File `numpy/core/src/arrayobject.c'
> Lines executed: 59.70% of 5280
>
> File `numpy/core/src/scalartypes.inc.src'
> Lines executed: 31.67% of 963
>
> File `numpy/core/src/arraytypes.inc.src'
> Lines executed: 47.35% of 868
>
> File `numpy/core/src/arraymethods.c'
> Lines executed: 57.65% of 739

This is great. How did you generate that? This is exactly the kind of thing
we need to be doing for the beta release cycle. I would like these numbers
to be very close to 100% by the time 1.0 final comes out at the end of
August / first of September. But we need help to write the unit tests.

What happens if you run the scipy test suite?

-Travis
|
From: Sasha <nd...@ma...> - 2006-06-30 22:31:48
|
On 6/30/06, Travis Oliphant <oli...@ee...> wrote:
> This is great. How did you generate [the coverage statistics]?

It was really a hack. I configured python using

$ ./configure --enable-debug CC="gcc -fprofile-arcs -ftest-coverage" \
      CXX="c++ -fprofile-arcs -ftest-coverage"

(I hate distutils!) Then I installed numpy and ran numpy.test(). Some
linalg-related tests failed; that should be fixed by figuring out how to
pass the -fprofile-arcs -ftest-coverage options to the fortran compiler.

The only non-obvious step in using gcov was that I had to tell it where to
find the object files:

$ gcov -o build/temp.linux-x86_64-2.4/numpy/core/src numpy/core/src/*.c

> ...
> What happens if you run the scipy test suite?

I don't know because I don't use scipy. Sorry.
|
From: Sasha <nd...@ma...> - 2006-07-01 20:28:13
|
I don't see how that will simplify the transition. Convertcode will still
need to detect use of the dtype argument (keyword or positional). A simple
s/zeros/int.zeros/ will not work. I read Ed's suggestion as retaining the
current default in intzeros, so that intzeros(n, float) is valid. On the
other hand, Tim's int.zeros would not take a dtype argument because the
dtype is already bound as self.

The bottom line: int.zeros will not work and intzeros(n, float) is ugly. I
would not mind oldnumeric.zeros, but a context-aware convertcode is still
worth the effort. Let's see how far I will get with that ...

On 7/1/06, Alan G Isaac <ai...@am...> wrote:
> On Sat, 1 Jul 2006, Ed Schofield apparently wrote:
> > couldn't we make the transition easier and more robust by
> > writing compatibility interfaces for zeros, ones, empty,
> > called e.g. intzeros, intones, intempty
>
> I think Robert or Tim suggested int.zeros() etc.
>
> fwiw,
> Alan Isaac
|
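For reference, the kind of compatibility shim Ed seems to be proposing
could be as small as the following (a hypothetical sketch, not an existing
numpy or oldnumeric API): it keeps the old integer default while still
accepting an explicit dtype, so intzeros(n, float) stays valid.

import numpy

def intzeros(shape, dtype=int, **kwargs):
    # Old Numeric-style default (integer); explicit dtype still allowed.
    return numpy.zeros(shape, dtype=dtype, **kwargs)

def intones(shape, dtype=int, **kwargs):
    return numpy.ones(shape, dtype=dtype, **kwargs)

def intempty(shape, dtype=int, **kwargs):
    return numpy.empty(shape, dtype=dtype, **kwargs)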
From: Tim H. <tim...@co...> - 2006-07-01 23:34:37
|
Sasha wrote:
> I don't see how that will simplify the transition. Convertcode will
> still need to detect use of the dtype argument (keyword or
> positional). Simple s/zeros/int.zeros/ will not work. I read Ed's
> suggestion as retaining current default in intzeros so that
> intzeros(n, float) is valid. On the other hand Tim's int.zeros would
> not take dtype argument because dtype is already bound as self.

It's just like a game of telephone! That was Robert's suggestion, not mine.
What I said was:

    Personally, given no other constraints, I would probably just get rid
    of the defaults all together and make the user choose.

Since I've been dragged back into this again, let me make a quick comment.
If we are choosing a floating point default, there are at least two other
choices that make as much sense as using float64.

The first possibility is to use the same thing that Python uses, that is,
'float'. On my box, and probably most current boxes, that turns out to be
float64 anyway, but choosing 'float' as the default rather than 'float64'
will change the way numpy is expected to behave as hardware and/or Python
evolves.

The second choice is to use the longest floating point type available on a
given platform, that is, 'longfloat'. Again, on my box that is the same as
using float64, but on other boxes I suspect it gives somewhat different
results.

The advantage of using 'float64' as the default is that we can expect
programs to run consistently across platforms. The advantage of choosing
'float' is that interactions with Python proper may be less surprising when
Python's float is not float64. The advantage of using 'longfloat' is that
it is the safest type to use when interacting with other unknown types.

I don't care much which gets chosen, but I think we should know which of
these we intend and why. Since they are often the same thing at present, I
have a suspicion that these three cases may be conflated in some people's
heads.

-tim

> The bottom line: int.zeros will not work and intzeros(n, float) is
> ugly. I would not mind oldnumeric.zeros, but context aware convertcode
> is still worth the effort. Let's see how far I will get with that ...
>
> On 7/1/06, Alan G Isaac <ai...@am...> wrote:
>> On Sat, 1 Jul 2006, Ed Schofield apparently wrote:
>>> couldn't we make the transition easier and more robust by
>>> writing compatibility interfaces for zeros, ones, empty,
>>> called e.g. intzeros, intones, intempty
>>
>> I think Robert or Tim suggested int.zeros() etc.
>>
>> fwiw,
>> Alan Isaac
|
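A quick way to see how the three candidates relate on a particular box
(illustrative; the longdouble size in particular depends on platform and
compiler, which is exactly the point of the comparison):

>>> import numpy
>>> numpy.dtype(float).itemsize             # Python's float: a C double
8
>>> numpy.dtype(numpy.float64).itemsize     # fixed 64-bit float
8
>>> numpy.dtype(numpy.longdouble).itemsize  # e.g. 16 on x86-64 Linux, 8 with MSVC
16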
From: Lee T. <lt...@ll...> - 2006-07-03 23:15:34
|
On Thu, 29 Jun 2006, Travis Oliphant wrote:

> I think it's time for the first beta-release of NumPy 1.0
>
> I'd like to put it out within 2 weeks. Please make any comments or
> voice major concerns so that the 1.0 release series can be as stable as
> possible.

One issue I ran across that I have not seen addressed is the namespace of
arrayobject.h. I'm not referring to C++ namespaces but to prefixing symbols
to avoid clashes with users' code.

The externals start with PyArray, but I had symbol redefinition errors for
byte, MAX_DIMS, and ERR. That is, I already had defines for MAX_DIMS and
ERR and a typedef for byte in my code. When adding a numpy interface to my
library I had to undef these symbols before including arrayobject.h.

Is there a way to move implementation defines, like ERR, into a separate
header? Or, if they're part of the API, to prefix the symbols?

Lee Taylor
|