From: Erin S. <eri...@gm...> - 2006-06-21 01:25:46
|
The numpy example page still has dtype=Float and dtype=Int all over it. Is there a generic replacement for Float, Int or should these be changed to something more specific such as int32?

Erin

On 6/20/06, Stefan van der Walt <st...@su...> wrote:
> Hi Simon
>
> On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote:
> >
> > >>> import numpy
> > >>> numpy.__version__
> > '0.9.9.2631'
> > >>> numpy.Int32
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > AttributeError: 'module' object has no attribute 'Int32'
> > >>>
> >
> > This was working not so long ago.
>
> Int32, Float etc. are part of the old Numeric interface, that you can
> now access under the numpy.oldnumeric namespace. If I understand
> correctly, doing
>
> import numpy.oldnumeric as Numeric
>
> should provide you with a Numeric-compatible replacement.
>
> The same types can be accessed under numpy as int32 (lower case) and
> friends.
>
> Cheers
> Stéfan
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
|
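[Editorial note: in current NumPy the lowercase scalar type names are indeed the generic replacements for the old Numeric-style names; a minimal sketch:]

```python
import numpy as np

# The old Numeric-style names (Float, Int32, ...) are gone; the
# lowercase scalar types on the numpy namespace are the replacements.
a = np.arange(5, dtype=np.float64)  # was: dtype=Float
b = np.arange(5, dtype=np.int32)    # was: dtype=Int32

print(a.dtype)  # float64
print(b.dtype)  # int32
```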
From: Eric F. <ef...@ha...> - 2006-06-20 20:34:17
|
In the course of trying to speed up matplotlib, I did a little experiment that may indicate a place where numpy can be sped up: the creation of a 2-D array from a list of tuples. Using the attached script, I find that numarray is roughly 5x faster than either numpy or Numeric:

[efiring@manini tests]$ python test_array.py
array size: 10000 2
number of loops: 100
numpy      10.89
numpy2      6.57
numarray    1.77
numarray2   0.76
Numeric     8.2
Numeric2    4.36

[efiring@manini tests]$ python test_array.py
array size: 100 2
number of loops: 100
numpy      0.11
numpy2     0.06
numarray   0.03
numarray2  0.01
Numeric    0.08
Numeric2   0.05

The numarray advantage persists for relatively small arrays (100x2; second example) and larger ones (10000x2; first example). In each case, the second test for a given package (e.g., numpy2) is the result with the type of the array element specified in advance, and the first (e.g., numpy) is without such specification. The versions I used are:

In [3]:Numeric.__version__
Out[3]:'24.0b2'
In [5]:numarray.__version__
Out[5]:'1.4.1'
In [7]:numpy.__version__
Out[7]:'0.9.9.2584'

Eric
|
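[Editorial note: the attached script did not survive in the archive. A hypothetical re-creation of the measurement — building a 2-D array from a list of tuples, without and with the element type given up front; the data shape and loop counts are my own, not Eric's:]

```python
import timeit
import numpy as np

# Stand-in for the missing attachment: time construction of a 2-D
# array from a list of tuples, with and without an explicit dtype.
data = [(float(i), float(i) + 0.5) for i in range(10000)]

t_plain = timeit.timeit(lambda: np.array(data), number=10)
t_typed = timeit.timeit(lambda: np.array(data, dtype=float), number=10)

print("no dtype given: %.4f s" % t_plain)
print("dtype=float:    %.4f s" % t_typed)
print("result shape:", np.array(data, dtype=float).shape)
```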
From: David M. C. <co...@ph...> - 2006-06-20 19:41:47
|
On Tue, 20 Jun 2006 21:05:51 +0200
Thomas Heller <th...@py...> wrote:

> Travis Oliphant schrieb:
> > I just updated the array interface page to emphasize we now have version
> > 3. NumPy still supports objects that expose (the C-side) of version 2
> > of the array interface, though.
> >
> > The new interface is basically the same except (mostly) for aesthetics:
> > The differences are listed at the bottom of
> >
> > http://numeric.scipy.org/array_interface.html
> >
> > There is talk of ctypes supporting the new interface, which is a worthy
> > development. Please encourage that if you can.
> >
> > Please voice concerns now if you have any.
>
> From http://numeric.scipy.org/array_interface.html:
> """
> New since June 16, 2006:
> For safety-checking, the return object from PyCObject_GetDesc(obj) should
> be a Python Tuple with the first object a Python string containing
> "PyArrayInterface Version 3" and whose second object is a reference to
> the object exposing the array interface (i.e. self).
>
> Older versions of the interface used the "desc" member of the PyCObject
> itself (do not confuse this with the "descr" member of the
> PyArrayInterface structure above --- they are two separate things) to
> hold the pointer to the object exposing the interface, thus you should
> make sure the object returned is a Tuple before assuming it is in a
> sanity check.
>
> In a sanity check it is recommended to only check for "PyArrayInterface
> Version" and not for the actual version number so that later versions
> will still be compatible. The old sanity check for the integer 2 in the
> first field is no longer necessary (but it is necessary to place the
> number 2 in that field so that objects reading the old version of the
> interface will still understand this one).
> """
>
> I know that you changed that because of my suggestions, but I don't
> think it should stay like this.
>
> The idea was to have the "desc" member of the PyCObject a 'magic value'
> which can be used to determine that the PyCObject's "void *cobj" pointer
> really points to a PyArrayInterface structure. I have seen PyCObject
> used before in this way, but I cannot find those uses any longer.
>
> If current implementations of the array interface use this pointer for
> other things (like keeping a reference to the array object), that's
> fine, and I don't think the specification should change. I think it is
> especially dangerous to assume that the desc pointer is a PyObject
> pointer; Python will segfault if it is not.
>
> I suggest that you revert this change.

When I initially proposed the C version of the array interface, I suggested using a magic number, like 0xDECAF (b/c it's lightweight :-), as the first member of the CObject. Currently we use a version number, but I believe that small integers would be more common in random CObjects than a magic number. We could do similar, using 0xDECAF003 for version 3, for instance. That would keep most of the benefits of an explicit "this is an array interface" CObject token, but is lighter to check, and doesn't impose any constraints on implementers for their desc fields. One of the design goals for the C interface was speed; doing a check that the first member of a tuple begins with a certain string slows it down.

-- |>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
|
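[Editorial note: the C-side PyCObject details discussed here can't be exercised from pure Python, but the Python-side counterpart of the same protocol — the `__array_interface__` dict, also at version 3 — is easy to inspect; a minimal sketch:]

```python
import numpy as np

# Inspect the Python-side array interface (version 3) of an ndarray.
a = np.arange(6).reshape(2, 3)
iface = a.__array_interface__

print(iface['version'])  # 3
print(iface['shape'])    # (2, 3)
print(iface['typestr'])  # e.g. '<i8' (byte order and size are platform-dependent)
```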
From: Travis O. <oli...@ie...> - 2006-06-20 19:27:23
|
Thomas Heller wrote:
> Travis Oliphant schrieb:
>> I just updated the array interface page to emphasize we now have
>> version 3. NumPy still
>
> If current implementations of the array interface use this pointer for
> other things (like keeping a reference to the array object), that's
> fine, and I don't think the specification should change. I think it is
> especially dangerous to assume that the desc pointer is a PyObject
> pointer; Python will segfault if it is not.

You make a good point. This is not a very safe sanity check and overly complicated for not providing safety. I've reverted it back but left in the convention that the 'desc' pointer contain a reference to the object exposing the interface, as is the practice now.

Thanks for the review.

-Travis
|
From: Thomas H. <th...@py...> - 2006-06-20 19:06:00
|
Travis Oliphant schrieb:
> I just updated the array interface page to emphasize we now have version
> 3. NumPy still supports objects that expose (the C-side) of version 2
> of the array interface, though.
>
> The new interface is basically the same except (mostly) for aesthetics:
> The differences are listed at the bottom of
>
> http://numeric.scipy.org/array_interface.html
>
> There is talk of ctypes supporting the new interface, which is a worthy
> development. Please encourage that if you can.
>
> Please voice concerns now if you have any.

From http://numeric.scipy.org/array_interface.html:
"""
New since June 16, 2006:
For safety-checking, the return object from PyCObject_GetDesc(obj) should be a Python Tuple with the first object a Python string containing "PyArrayInterface Version 3" and whose second object is a reference to the object exposing the array interface (i.e. self).

Older versions of the interface used the "desc" member of the PyCObject itself (do not confuse this with the "descr" member of the PyArrayInterface structure above --- they are two separate things) to hold the pointer to the object exposing the interface, thus you should make sure the object returned is a Tuple before assuming it is in a sanity check.

In a sanity check it is recommended to only check for "PyArrayInterface Version" and not for the actual version number so that later versions will still be compatible. The old sanity check for the integer 2 in the first field is no longer necessary (but it is necessary to place the number 2 in that field so that objects reading the old version of the interface will still understand this one).
"""

I know that you changed that because of my suggestions, but I don't think it should stay like this.

The idea was to have the "desc" member of the PyCObject a 'magic value' which can be used to determine that the PyCObject's "void *cobj" pointer really points to a PyArrayInterface structure. I have seen PyCObject used before in this way, but I cannot find those uses any longer.

If current implementations of the array interface use this pointer for other things (like keeping a reference to the array object), that's fine, and I don't think the specification should change. I think it is especially dangerous to assume that the desc pointer is a PyObject pointer; Python will segfault if it is not.

I suggest that you revert this change.

Thomas
|
From: Francesc A. <fa...@ca...> - 2006-06-20 16:33:41
|
A Dimarts 20 Juny 2006 18:17, George Christianson va escriure:
> Good morning,

Thank you, but here the sun is about to set ;-)

> I used the Windows installer to install Python 2.4.3 on a late-model Dell
> PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from
> the Windows installers. Now I am trying to build a dll file for a Fortran
> 77 file and previously-generated (Linux) pyf file. I installed MinGW from
> the MinGW 5.0.2 Windows installer, and modified my Windows path to put the
> MinGW directory before a pre-existing Cygwin installation. However, both a
> setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the
> Windows command line fail with the message that the .NET Framework SDK has
> to be initialized or that the msvccompiler cannot be found.
> Any advice on what I'm missing would be much appreciated! Here is the
> message I get trying to run f2py:

Mmm, perhaps you can try with putting:

[build]
compiler=mingw32

in your local distutils.cfg (see http://docs.python.org/inst/config-syntax.html)

HTH,

--
>0,0<  Francesc Altet    http://www.carabos.com/
V  V   Cárabos Coop. V.  Enjoy Data
 "-"
|
From: George C. <chr...@ll...> - 2006-06-20 16:17:28
|
Good morning,

I used the Windows installer to install Python 2.4.3 on a late-model Dell PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from the Windows installers. Now I am trying to build a dll file for a Fortran 77 file and previously-generated (Linux) pyf file. I installed MinGW from the MinGW 5.0.2 Windows installer, and modified my Windows path to put the MinGW directory before a pre-existing Cygwin installation. However, both a setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the Windows command line fail with the message that the .NET Framework SDK has to be initialized or that the msvccompiler cannot be found. Any advice on what I'm missing would be much appreciated! Here is the message I get trying to run f2py:

C:\projects\workspace\MARSFortran>C:\python2.4.3\python C:\python2.4.3\Scripts\f2py.py -c --fcompiler=g77 mars.pyf mars.f>errors
error: The .NET Framework SDK needs to be installed before building extensions for Python.

C:\projects\workspace\MARSFortran>type errors
Unknown vendor: "g77"
running build
running config_fc
running build_src
building extension "mars" sources
creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh
creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
f2py options: []
f2py: mars.pyf
Reading fortran codes...
        Reading file 'mars.pyf' (format:free)
SNIP
copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
running build_ext
No module named msvccompiler in numpy.distutils, trying from distutils..

Thanks in advance,
George Christianson
|
From: Tim H. <tim...@co...> - 2006-06-20 14:30:44
|
Tim Hochberg wrote:
> Johannes Loehnert wrote:
>
>> Hi,
>>
>>> ## Output:
>>> numpy.__version__: 0.9.8
>>> y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
>>> y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
>>> z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
>>> z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01
>>>   2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03
>>>   4.09600000e+03 6.56100000e+03]
>>
>> obviously the last is z**4. dtypes are the same for y and z (float64).
>
> I ran into this yesterday and fixed it. It should be OK in SVN now.
>
>> One addition:
>>
>> In [5]: z = arange(10, dtype=float)
>> In [6]: z **= 1
>> In [7]: z
>> zsh: 18263 segmentation fault  ipython
>
> This one is still there however. I'll look at it.

Nevermind, Travis beat me to it. This is fixed in SVN as well.

-tim

>> _______________________________________________
>> Numpy-discussion mailing list
>> Num...@li...
>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
|
From: Tim H. <tim...@co...> - 2006-06-20 12:28:46
|
Johannes Loehnert wrote:
> Hi,
>
>> ## Output:
>> numpy.__version__: 0.9.8
>> y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
>> y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
>> z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
>> z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01
>>   2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03
>>   4.09600000e+03 6.56100000e+03]
>
> obviously the last is z**4. dtypes are the same for y and z (float64).

I ran into this yesterday and fixed it. It should be OK in SVN now.

> One addition:
>
> In [5]: z = arange(10, dtype=float)
> In [6]: z **= 1
> In [7]: z
> zsh: 18263 segmentation fault  ipython

This one is still there however. I'll look at it.

-tim

> - Johannes
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
|
From: Stefan v. d. W. <st...@su...> - 2006-06-20 10:41:12
|
Hi Simon

On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote:
>
> >>> import numpy
> >>> numpy.__version__
> '0.9.9.2631'
> >>> numpy.Int32
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> AttributeError: 'module' object has no attribute 'Int32'
> >>>
>
> This was working not so long ago.

Int32, Float etc. are part of the old Numeric interface, that you can now access under the numpy.oldnumeric namespace. If I understand correctly, doing

import numpy.oldnumeric as Numeric

should provide you with a Numeric-compatible replacement.

The same types can be accessed under numpy as int32 (lower case) and friends.

Cheers
Stéfan
|
From: Simon B. <si...@ar...> - 2006-06-20 10:23:23
|
>>> import numpy >>> numpy.__version__ '0.9.9.2631' >>> numpy.Int32 Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'Int32' >>> This was working not so long ago. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com |
From: Travis O. <oli...@ie...> - 2006-06-20 09:24:47
|
Matthieu Perrot wrote:
> hi,
>
> I need to handle strings shaped by a numpy array whose data belong to a C
> structure. There are several possible answers to this problem:
> 1) use a numpy array of strings (PyArray_STRING) and so a (char *) object
>    in C. It works as is, but you need to define a maximum size for your
>    strings because your set of strings is contiguous in memory.
> 2) use a numpy array of objects (PyArray_OBJECT), and wrap each «C string»
>    with a python object, using PyStringObject for example. Then our problem
>    is that there are as many wrappers as data elements, and I believe data
>    can't be shared when you create a PyStringObject from a (char *) via
>    PyString_AsStringAndSize, for example.
>
> Now, I will expose a third way, which allows you to use non-size-limited
> strings (as in solution 1) and doesn't create wrappers before you really
> need them (on demand/access).
>
> First, for convenience, we will use in C the (char **) type to build an
> array of string pointers (as was suggested in solution 2). Now, the game
> is to make it work with the numpy API, and use it in python through a
> python array. Basically, I want a very similar behaviour to arrays of
> PyObject, where data are not contiguous, only their addresses are. So,
> the idea is to create a new array descr based on PyArray_OBJECT and
> change its getitem/setitem functions to deal with my own data.
>
> I expected numpy to work with this convenient array descr, but it fails
> because PyArray_Scalar (arrayobject.c) doesn't call the descr's getitem
> function (in the PyArray_OBJECT case) but calls 2 lines which have been
> copied/pasted from the OBJECT_getitem function. Here my small patch is:
> replace (arrayobject.c:983-984):
>     Py_INCREF(*((PyObject **)data));
>     return *((PyObject **)data);
> by:
>     return descr->f->getitem(data, base);
>
> I played a lot with my new numpy array after this change and noticed
> that a lot of uses work:

This is an interesting solution. I was not considering it, though, and so I'm not surprised you have problems. You can register new types, but basing them off of PyArray_OBJECT can be problematic because of the special-casing that is done in several places to manage reference counting. You are supposed to register your own data-types and get your own type number. Then you can define all the functions for the entries as you wish. Riding on the back of PyArray_OBJECT may work if you are clever, but it may fail mysteriously as well because of a reference count snafu.

Thanks for the tests and bug-reports. I have no problem changing the code as you suggest.

-Travis
|
From: Travis O. <oli...@ie...> - 2006-06-20 09:06:22
|
C-API support for numarray is now checked in to NumPy SVN. With this support you should be able to compile numarray extensions by changing the include line from numarray/libnumarray.h to numpy/libnumarray.h You will also need to change the include directories used in compiling by appending the directories returned by numpy.numarray.util.get_numarray_include_dirs() This is most easily done using a numpy.distutils.misc_util Configuration instance: config.add_numarray_include_dirs() The work is heavily based on numarray. I just grabbed the numarray sources and translated the relevant functions to use NumPy's ndarray's. Please report problems and post patches. -Travis |
From: Alan G I. <ai...@am...> - 2006-06-20 07:08:01
|
I think the distance matrix version below is about as good as it gets with these basic strategies.

fwiw,
Alan Isaac

def dist(A,B):
    rowsA, rowsB = A.shape[0], B.shape[0]
    distanceAB = empty( [rowsA,rowsB] , dtype=float)
    if rowsA <= rowsB:
        temp = empty_like(B)
        for i in range(rowsA):
            #store A[i]-B in temp
            subtract( A[i], B, temp )
            temp *= temp
            sqrt( temp.sum(axis=1), distanceAB[i,:])
    else:
        temp = empty_like(A)
        for j in range(rowsB):
            #store A-B[j] in temp
            temp = subtract( A, B[j], temp )
            temp *= temp
            sqrt( temp.sum(axis=1), distanceAB[:,j])
    return distanceAB
|
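[Editorial note: a self-contained, runnable restatement of Alan's routine, with a brute-force correctness check added; the array sizes here are arbitrary:]

```python
import numpy as np

def dist(A, B):
    # Loop over the shorter axis, reusing one temp buffer per iteration
    # and writing the sqrt directly into the output row/column.
    rowsA, rowsB = A.shape[0], B.shape[0]
    D = np.empty((rowsA, rowsB), dtype=float)
    if rowsA <= rowsB:
        temp = np.empty_like(B)
        for i in range(rowsA):
            np.subtract(A[i], B, temp)      # store A[i] - B in temp
            temp *= temp
            np.sqrt(temp.sum(axis=1), D[i, :])
    else:
        temp = np.empty_like(A)
        for j in range(rowsB):
            np.subtract(A, B[j], temp)      # store A - B[j] in temp
            temp *= temp
            np.sqrt(temp.sum(axis=1), D[:, j])
    return D

A = np.random.random((3, 4))
B = np.random.random((7, 4))
D = dist(A, B)
# Check against the direct broadcasting formulation.
expected = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
print(np.allclose(D, expected))  # True
```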
From: Johannes L. <a.u...@gm...> - 2006-06-20 06:09:00
|
Hi,

> ## Output:
> numpy.__version__: 0.9.8
> y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
> y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
> z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
> z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01
>   2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03
>   4.09600000e+03 6.56100000e+03]

Obviously the last is z**4. dtypes are the same for y and z (float64).

One addition:

In [5]: z = arange(10, dtype=float)

In [6]: z **= 1

In [7]: z
zsh: 18263 segmentation fault  ipython

- Johannes
|
From: Alan G I. <ai...@am...> - 2006-06-20 05:10:46
|
I think there is a bug in the **= operator, for dtype=float.

Alan Isaac

## Script:
import numpy
print "numpy.__version__: ", numpy.__version__
''' Illustrate a strange bug: '''
y = numpy.arange(10,dtype=float)
print "y: ",y
y *= y
print "y**2: ",y
z = numpy.arange(10,dtype=float)
print "z: ", z
z **= 2
print "z**2: ", z

## Output:
numpy.__version__: 0.9.8
y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01
  2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03
  4.09600000e+03 6.56100000e+03]
|
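[Editorial note: the in-place power bug reported here was fixed in SVN shortly after; on any NumPy carrying the fix, in-place **= agrees with the out-of-place result — a quick check:]

```python
import numpy as np

# In-place power should match the out-of-place computation.
z = np.arange(10, dtype=float)
z **= 2
print(np.array_equal(z, np.arange(10, dtype=float) ** 2))  # True
```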
From: Andrew S. <str...@as...> - 2006-06-20 05:09:14
|
David Cournapeau wrote:
> That's great. Last week, I sent several messages to the list regarding
> your messages about debian packages for numpy, but it looks like they
> were lost somewhere....
>
> Right now, I use the experimental package of debian + svn sources for
> numpy, and it works well. Is your approach based on this work, or is it
> totally different (on debian/ubuntu, packaging numpy + atlas should be
> easy, as the atlas+lapack library is compiled so as to be complete),
>
> David

Hi David,

I did get your email last week (sorry for not replying sooner). I'm actually using my own tool "stdeb" to build these at the moment -- the 'official' package in experimental is surely better than mine, and I will probably switch to it from stdeb sooner or later...

Cheers!
Andrew
|
From: David C. <da...@ar...> - 2006-06-20 04:18:21
|
Andrew Straw wrote:
> I have updated the apt repository I maintain for Ubuntu's Dapper, which
> now includes:
>
> numpy
> matplotlib
> scipy
>
> Each package is from a recent SVN checkout and should thus be regarded
> as "bleeding edge". The repository has a new URL:
> http://debs.astraw.com/dapper/
> I intend to keep this repository online for an extended duration. If you
> want to put this repository in your sources list, you need to add the
> following lines to /etc/apt/sources.list::
>
> deb http://debs.astraw.com/ dapper/
> deb-src http://debs.astraw.com/ dapper/
>
> I have not yet investigated the use of ATLAS in building or using the
> numpy binaries, and if performance is critical for you, please evaluate
> speed before using it. I intend to visit this issue, but I cannot say when.
>
> The Debian source packages were generated using stdeb
> [ http://stdeb.python-hosting.com/ ], a Python to Debian source package
> conversion utility I wrote. stdeb does not build packages that follow
> the Debian Python Policy, so the packages here may be slightly unusual
> compared to Python packages in the official Debian or Ubuntu
> repositories. For example, example scripts do not get installed, and no
> documentation is installed. Future releases of stdeb may resolve these
> issues.
>
> As always, feedback is very appreciated.

That's great. Last week, I sent several messages to the list regarding your messages about debian packages for numpy, but it looks like they were lost somewhere....

Right now, I use the experimental package of debian + svn sources for numpy, and it works well. Is your approach based on this work, or is it totally different (on debian/ubuntu, packaging numpy + atlas should be easy, as the atlas+lapack library is compiled so as to be complete),

David
|
From: George N. <gn...@go...> - 2006-06-19 22:15:13
|
On 19/06/06, Berthold Höllmann <bh...@de...> wrote:
> "George Nurser" <gn...@go...> writes:
>
> > I have run into a strange problem with the current numpy/f2py (f2py
> > 2_2631, numpy 2631). I have a file [Wright.f] which contains 5
> > different fortran subroutines. Arguments have been specified as input
> > or output by adding cf2py intent (in), (out) etc.
> >
> > Doing
> > f2py -c Wright.f -m Wright.so
>
> simply try
>
> f2py -c Wright.f -m Wright
>
> instead. Python extension modules require an exported routine named
> init<module name> (initWright in this case). But you told f2py to
> generate an extension module named "so" in a package named "Wright",
> so the generated function is named initso. The *.so file cannot be
> renamed, because then there is no matching init function any more.
>
> Regards
> Berthold

Stupid of me! Hit head against wall. Yes, I eventually worked out that

f2py -c Wright.f -m Wright

was OK. But many thanks for the explanation ....I see, what f2py was doing was perfectly logical.

Regards, George.
|
From: Tim H. <tim...@co...> - 2006-06-19 21:39:32
|
Tim Hochberg wrote:

> Sebastian Beca wrote:
>
>> I just ran Alan's script and I don't get consistent results for 100
>> repetitions. I boosted it to 1000, and ran it several times. The faster
>> one varied a lot, but both came into a ~ +-1.5% difference.
>>
>> When it comes to scaling, for my problem (fuzzy clustering), N is the
>> size of the dataset, which should span from thousands to millions. C is
>> the amount of clusters, usually less than 10, and K the amount of
>> features (the dimension I want to sum over) is also usually less than
>> 100. So mainly I'm concerned with scaling across N. I tried C=3, K=4,
>> N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results were:
>> dist_beca: 1.1, 4.5, 16, 28, 37
>> dist_loehner1: 1.7, 6.5, 22, 35, 47
>>
>> I also tried scaling across K, with C=3, N=2500, and K=5-50. I couldn't
>> get any consistent results for small K, but both tend to perform as well
>> (+-2%) for large K (K>15).
>>
>> I'm not sure how these work in the backend so I can't argue as to why
>> one should scale better than the other.
>
> The reason I suspect that dist_beca should scale better is that
> dist_loehner1 generates an intermediate array of size NxCxK, while
> dist_beca produces intermediate matrices that are only NxK or CxK. For
> large problems, allocating that extra memory and fetching it into and
> out of the cache can be a bottleneck.
>
> Here's another version that allocates even less in the way of
> temporaries at the expense of being borderline incomprehensible. It
> still allocates an NxK temporary array, but it allocates it once ahead
> of time and then reuses it for all subsequent calculations. You're
> welcome to use it, but I'm not sure I'd recommend it unless this
> function is really a speed bottleneck, as it could end up being hard to
> read later (I left implementing the N<C case as an exercise to the
> reader....).
>
> I have another idea that might reduce the memory overhead still further;
> if I get a chance I'll try it out and let you know if it results in a
> further speed up.
>
> -tim
>
>  def dist2(A, B):
>      d = zeros([N, C], dtype=float)
>      if N < C:
>          raise NotImplemented
>      else:
>          tmp = empty([N, K], float)
>          tmp0 = tmp[:,0]
>          rangek = range(1,K)
>          for j in range(C):
>              subtract(A, B[j], tmp)
>              tmp *= tmp
>              for k in rangek:
>                  tmp0 += tmp[:,k]
>              sqrt(tmp0, d[:,j])
>      return d

Speaking of scaling: I tried this with K=25000 (10x greater than Sebastian's original numbers). Much to my surprise it performed somewhat worse than Sebastian's dist() with large K. Below is a modified dist2 that performs about the same (marginally better here) for large K, as well as a dist3 that performs about 50% better at both K=2500 and K=25000.

-tim

def dist2(A, B):
    d = empty([N, C], dtype=float)
    if N < C:
        raise NotImplemented
    else:
        tmp = empty([N, K], float)
        tmp0 = tmp[:,0]
        for j in range(C):
            subtract(A, B[j], tmp)
            tmp **= 2
            d[:,j] = sum(tmp, axis=1)
            sqrt(d[:,j], d[:,j])
    return d

def dist3(A, B):
    d = zeros([N, C], dtype=float)
    rangek = range(K)
    if N < C:
        raise NotImplemented
    else:
        tmp = empty([N], float)
        for j in range(C):
            for k in rangek:
                subtract(A[:,k], B[j,k], tmp)
                tmp **= 2
                d[:,j] += tmp
            sqrt(d[:,j], d[:,j])
    return d
>> >>On 6/19/06, Alan G Isaac <ai...@am...> wrote: >> >> >> >> >>>On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: >>> >>> >>> >>> >>> >>>>Alan G Isaac wrote: >>>> >>>> >>>> >>>> >>>>>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: >>>>> >>>>> >>>>> >>>>> >>>>>>def dist(): >>>>>>d = zeros([N, C], dtype=float) >>>>>>if N < C: for i in range(N): >>>>>>xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>>else: >>>>>>for j in range(C): >>>>>>xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>> >>>>>> >>>>>> >>>>>> >>>>>But that is 50% slower than Johannes's version: >>>>> >>>>> >>>>>def dist_loehner1(): >>>>> d = A[:, newaxis, :] - B[newaxis, :, :] >>>>> d = sqrt((d**2).sum(axis=2)) >>>>> return d >>>>> >>>>> >>>>> >>>>> >>>>Are you sure about that? I just ran it through timeit, using Sebastian's >>>>array sizes and I get Sebastian's version being 150% faster. This >>>>could well be cache size dependant, so may vary from box to box, but I'd >>>>expect Sebastian's current version to scale better in general. >>>> >>>> >>>> >>>> >>>No, I'm not sure. >>>Script attached bottom. >>>Most recent output follows: >>>for reasons I have not determined, >>>it doesn't match my previous runs ... >>>Alan >>> >>> >>> >>> >>> >>>>>>execfile(r'c:\temp\temp.py') >>>>>> >>>>>> >>>>>> >>>>>> >>>dist_beca : 3.042277 >>>dist_loehner1: 3.170026 >>> >>> >>>################################# >>>#THE SCRIPT >>>import sys >>>sys.path.append("c:\\temp") >>>import numpy >>> >>> >>>from numpy import * >> >> >>>import timeit >>> >>> >>>K = 10 >>>C = 2500 >>>N = 3 # One could switch around C and N now. 
>>>A = numpy.random.random( [N, K] ) >>>B = numpy.random.random( [C, K] ) >>> >>># beca >>>def dist_beca(): >>> d = zeros([N, C], dtype=float) >>> if N < C: >>> for i in range(N): >>> xy = A[i] - B >>> d[i,:] = sqrt(sum(xy**2, axis=1)) >>> return d >>> else: >>> for j in range(C): >>> xy = A - B[j] >>> d[:,j] = sqrt(sum(xy**2, axis=1)) >>> return d >>> >>>#loehnert >>>def dist_loehner1(): >>> # drawback: memory usage temporarily doubled >>> # solution see below >>> d = A[:, newaxis, :] - B[newaxis, :, :] >>> # written as 3 expressions for more clarity >>> d = sqrt((d**2).sum(axis=2)) >>> return d >>> >>> >>>if __name__ == "__main__": >>> t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) >>> t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) >>> fmt="%-10s:\t"+"%10.6f" >>> print fmt%('dist_beca', t1) >>> print fmt%('dist_loehner1', t8) >>> >>> >>> >>> >>>_______________________________________________ >>>Numpy-discussion mailing list >>>Num...@li... >>>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>> >>> >>> >>> >>> >>_______________________________________________ >>Numpy-discussion mailing list >>Num...@li... >>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> >> >> > > > > >_______________________________________________ >Numpy-discussion mailing list >Num...@li... >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > |
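[Editorial note: the dist2/dist3 variants in this thread reference the free variables N, C, K from the benchmark script. A self-contained, runnable version of the dist3 idea, checked against the direct broadcasting formulation; array sizes here are arbitrary:]

```python
import numpy as np

def dist3(A, B):
    # Column-at-a-time accumulation: only an N-element temp is needed,
    # instead of the NxK (or NxCxK) temporaries of the other variants.
    N, K = A.shape
    C = B.shape[0]
    d = np.zeros((N, C), dtype=float)
    tmp = np.empty(N, dtype=float)
    for j in range(C):
        for k in range(K):
            np.subtract(A[:, k], B[j, k], tmp)  # column difference
            tmp **= 2
            d[:, j] += tmp                      # accumulate squared terms
        np.sqrt(d[:, j], d[:, j])               # sqrt in place
    return d

A = np.random.random((10, 4))
B = np.random.random((3, 4))
d = dist3(A, B)
expected = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
print(np.allclose(d, expected))  # True
```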
From: Tim H. <tim...@co...> - 2006-06-19 20:29:10
|
Sebastian Beca wrote:

>I just ran Alan's script and I don't get consistent results for 100
>repetitions. I boosted it to 1000, and ran it several times. The
>faster one varied a lot, but both came in at a ~ +-1.5% difference.
>
>When it comes to scaling, for my problem (fuzzy clustering), N is the
>size of the dataset, which should span from thousands to millions. C
>is the number of clusters, usually less than 10, and K the number of
>features (the dimension I want to sum over) is also usually less than
>100. So mainly I'm concerned with scaling across N. I tried C=3, K=4,
>N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results
>were:
>dist_beca:     1.1, 4.5, 16, 28, 37
>dist_loehner1: 1.7, 6.5, 22, 35, 47
>
>I also tried scaling across K, with C=3, N=2500, and K=5-50. I
>couldn't get any consistent results for small K, but both tend to
>perform as well (+-2%) for large K (K>15).
>
>I'm not sure how these work in the backend so I can't argue as to
>why one should scale better than the other.

The reason I suspect that dist_beca should scale better is that
dist_loehner1 generates an intermediate array of size NxCxK, while
dist_beca produces intermediate matrices that are only NxK or CxK. For
large problems, allocating that extra memory and fetching it into and
out of the cache can be a bottleneck.

Here's another version that allocates even less in the way of
temporaries, at the expense of being borderline incomprehensible. It
still allocates an NxK temporary array, but it allocates it once ahead
of time and then reuses it for all subsequent calculations. You're
welcome to use it, but I'm not sure I'd recommend it unless this
function is really a speed bottleneck, as it could end up being hard to
read later (I left implementing the N < C case as an exercise for the
reader...). I have another idea that might reduce the memory overhead
still further; if I get a chance I'll try it out and let you know if it
results in a further speed up.

-tim

def dist2(A, B):
    d = zeros([N, C], dtype=float)
    if N < C:
        raise NotImplementedError
    else:
        tmp = empty([N, K], float)
        tmp0 = tmp[:,0]
        rangek = range(1, K)
        for j in range(C):
            subtract(A, B[j], tmp)
            tmp *= tmp
            for k in rangek:
                tmp0 += tmp[:,k]
            sqrt(tmp0, d[:,j])
    return d

>Regards,
>
>Sebastian.

|
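[Editorial note: a further memory-saving variant, not posted in the thread but a standard trick for this problem, expands ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b so the whole N x C result comes from a single matrix product, with no N x C x K temporary. This is a sketch; the function name dist_dot is mine, and the trick trades a little floating-point accuracy (cancellation for nearly-coincident points) for memory and speed:]

```python
import numpy as np

def dist_dot(A, B):
    # Pairwise Euclidean distances via the expansion
    #   ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    # Only the N x C result and two row-norm vectors are allocated;
    # the cross term is one BLAS matrix multiply.
    aa = (A * A).sum(axis=1)[:, np.newaxis]   # shape (N, 1)
    bb = (B * B).sum(axis=1)[np.newaxis, :]   # shape (1, C)
    d2 = aa + bb - 2.0 * np.dot(A, B.T)       # shape (N, C)
    np.maximum(d2, 0.0, d2)                   # round-off can leave tiny negatives
    return np.sqrt(d2)
```

[The result agrees with the broadcasting version up to round-off, while touching far less memory when K is large.]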
From: Sebastian B. <seb...@gm...> - 2006-06-19 20:04:36
|
I just ran Alan's script and I don't get consistent results for 100
repetitions. I boosted it to 1000, and ran it several times. The faster
one varied a lot, but both came in at a ~ +-1.5% difference.

When it comes to scaling, for my problem (fuzzy clustering), N is the
size of the dataset, which should span from thousands to millions. C is
the number of clusters, usually less than 10, and K the number of
features (the dimension I want to sum over) is also usually less than
100. So mainly I'm concerned with scaling across N. I tried C=3, K=4,
N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results were:

dist_beca:     1.1, 4.5, 16, 28, 37
dist_loehner1: 1.7, 6.5, 22, 35, 47

I also tried scaling across K, with C=3, N=2500, and K=5-50. I couldn't
get any consistent results for small K, but both tend to perform as
well (+-2%) for large K (K>15).

I'm not sure how these work in the backend, so I can't argue as to why
one should scale better than the other.

Regards,

Sebastian.

On 6/19/06, Alan G Isaac <ai...@am...> wrote:
> On Sun, 18 Jun 2006, Tim Hochberg apparently wrote:
>
> > Alan G Isaac wrote:
> >> On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
> >>> def dist():
> >>>     d = zeros([N, C], dtype=float)
> >>>     if N < C:
> >>>         for i in range(N):
> >>>             xy = A[i] - B
> >>>             d[i,:] = sqrt(sum(xy**2, axis=1))
> >>>         return d
> >>>     else:
> >>>         for j in range(C):
> >>>             xy = A - B[j]
> >>>             d[:,j] = sqrt(sum(xy**2, axis=1))
> >>>         return d
> >>
> >> But that is 50% slower than Johannes's version:
> >>
> >> def dist_loehner1():
> >>     d = A[:, newaxis, :] - B[newaxis, :, :]
> >>     d = sqrt((d**2).sum(axis=2))
> >>     return d
> >
> > Are you sure about that? I just ran it through timeit, using
> > Sebastian's array sizes, and I get Sebastian's version being 150%
> > faster. This could well be cache size dependent, so may vary from
> > box to box, but I'd expect Sebastian's current version to scale
> > better in general.
>
> No, I'm not sure.
> Script attached bottom.
> Most recent output follows:
> for reasons I have not determined,
> it doesn't match my previous runs ...
> Alan
>
> >>> execfile(r'c:\temp\temp.py')
> dist_beca    :   3.042277
> dist_loehner1:   3.170026
>
> #################################
> # THE SCRIPT
> import sys
> sys.path.append("c:\\temp")
> import numpy
> from numpy import *
> import timeit
>
> K = 10
> C = 2500
> N = 3  # One could switch around C and N now.
> A = numpy.random.random( [N, K] )
> B = numpy.random.random( [C, K] )
>
> # beca
> def dist_beca():
>     d = zeros([N, C], dtype=float)
>     if N < C:
>         for i in range(N):
>             xy = A[i] - B
>             d[i,:] = sqrt(sum(xy**2, axis=1))
>         return d
>     else:
>         for j in range(C):
>             xy = A - B[j]
>             d[:,j] = sqrt(sum(xy**2, axis=1))
>         return d
>
> # loehnert
> def dist_loehner1():
>     # drawback: memory usage temporarily doubled
>     # solution see below
>     d = A[:, newaxis, :] - B[newaxis, :, :]
>     # written as 3 expressions for more clarity
>     d = sqrt((d**2).sum(axis=2))
>     return d
>
> if __name__ == "__main__":
>     t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100)
>     t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100)
>     fmt = "%-10s:\t" + "%10.6f"
>     print fmt%('dist_beca', t1)
>     print fmt%('dist_loehner1', t8)
>
|
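[Editorial note: for readers following along, here are the two implementations being compared, rewritten as a self-contained sketch that takes A and B as arguments instead of reading the module-level globals used in the script above. Function names follow the thread; the Python 3 style is mine:]

```python
import numpy as np

def dist_beca(A, B):
    # Loop over the shorter of the two point sets; each iteration
    # allocates only a temporary the size of the longer set (len x K).
    N, C = A.shape[0], B.shape[0]
    d = np.zeros((N, C), dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i, :] = np.sqrt((xy ** 2).sum(axis=1))
    else:
        for j in range(C):
            xy = A - B[j]
            d[:, j] = np.sqrt((xy ** 2).sum(axis=1))
    return d

def dist_loehner1(A, B):
    # Broadcasting version: clearer, but builds an N x C x K intermediate,
    # which is the memory overhead discussed in the thread.
    d = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((d ** 2).sum(axis=2))
```

[Both return the same N x C distance matrix; the difference the thread debates is purely how much intermediate memory each touches.]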
From: <bh...@de...> - 2006-06-19 19:15:16
|
"George Nurser" <gn...@go...> writes:

> I have run into a strange problem with the current numpy/f2py (f2py
> 2_2631, numpy 2631).
> I have a file [Wright.f] which contains 5 different fortran
> subroutines. Arguments have been specified as input or output by
> adding cf2py intent (in), (out) etc.
>
> Doing
> f2py -c Wright.f -m Wright.so

simply try

f2py -c Wright.f -m Wright

instead. Python extension modules require an exported routine named
init<module name> (initWright in this case). But you told f2py to
generate an extension module named "so" in a package named "Wright", so
the generated function is named initso. The *.so file cannot be
renamed, because then there is no matching init function anymore.

Regards
Berthold
--
ber...@xn... / <http://höllmanns.de/>
bh...@we... / <http://starship.python.net/crew/bhoel/>

|
From: <jk...@to...> - 2006-06-19 16:35:07
|
|