From: Francesc A. <fa...@ca...> - 2006-08-18 12:08:11
|
Hi,

I'm starting to (slowly) replace numarray with NumPy at the core of PyTables, especially in those places where NumPy's speed is *much* better, that is, in creating arrays (there are places in PyTables where this is critical, most especially in indexing) and in copying arrays. In both cases, NumPy performs between 8x and 40x faster than numarray, and this is, well..., excellent :-)

Also, the big unification of numerical homogeneous arrays, string homogeneous arrays (with Unicode support added) and heterogeneous arrays (recarrays, now with nested-record support as well!) is simplifying the PyTables code a great deal: there are many places where one had to distinguish between those different numarray objects, and fortunately that distinction is no longer necessary in many of them.

Furthermore, I'm finding that most of the corner cases where numarray does well (the main reason I was conservative about migrating) are also handled very well in NumPy, in some cases better; for one, NumPy has chosen NULL-terminated strings for its internal representation instead of the space padding in numarray that gave me lots of headaches. Of course, there are some glitches that I'll report appropriately, but overall NumPy is behaving better than expected (and I already had *great* expectations).

Well, I just wanted to report these experiences in case other people are pondering a migration to NumPy as well, and also to thank (once more) the NumPy crew, and especially Travis, for their first-class work.

Thanks!

--
>0,0<   Francesc Altet     http://www.carabos.com/
 V V    Cárabos Coop. V.   Enjoy Data
 "-"
From: David C. <da...@ar...> - 2006-08-18 11:35:26
|
Albert Strasheim wrote:
> Hello all
>
>> -----Original Message-----
>> From: num...@li... [mailto:numpy-
>> dis...@li...] On Behalf Of David Cournapeau
>> Sent: 18 August 2006 06:55
>> To: Discussion of Numerical Python
>> Subject: [Numpy-discussion] ctypes: how does load_library work ?
>>
>> <snip>
>> That works OK, but to avoid the platform dependency, I would like to use
>> load_library from numpy: I just replace the cdll.LoadLibrary by:
>>
>> _hello = N.ctypeslib.load_library('hello', '.')
>>
>> which does not work. The python interpreter returns a strange error
>> message, because it says hello.so.so is not found, and it is looking for
>> the library in the directory usr/$(PWD), which does not make sense to
>> me. Is it a bug, or am I just not understanding how to use the
>> load_library function?
>
> load_library currently assumes that library names don't have a prefix. We
> might want to rethink this assumption on Linux and other Unixes.

I think it needs to be modified for Linux and Solaris at least, where the
"lib" prefix is put in the library name. When linking, you use -lm, not
-llibm; in dlopen, you use the full name (libm.so). After a quick look at the
ctypes reference docs, it looks like there are some functions to search for a
library; maybe those can be used?

Anyway, this is nitpicking, as ctypes is really a breeze to use. Being able
to do the whole wrapping in pure Python is great. Thanks!

David
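On the ctypes search functions mentioned above: the standard library ships ctypes.util.find_library, which applies the platform's own naming rules. A minimal sketch (not from the original thread; libm is used purely as an illustration, and the lookup may return None on platforms without a separately named math library):

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# find_library knows the platform conventions: "m" resolves to something
# like "libm.so.6" on Linux, while Windows and macOS use different rules.
libm_path = find_library("m")
if libm_path is not None:
    libm = CDLL(libm_path)
    libm.cos.restype = c_double
    libm.cos.argtypes = [c_double]
    print(libm.cos(0.0))  # -> 1.0
else:
    print("no separately named math library on this platform")
```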
From: David C. <da...@ar...> - 2006-08-18 11:29:31
|
Stefan van der Walt wrote:
> On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote:
>
>> import numpy as N
>> from ctypes import cdll, POINTER, c_int, c_uint
>>
>> _hello = cdll.LoadLibrary('libhello.so')
>>
>> _hello.sum.restype = c_int
>> _hello.sum.argtypes = [POINTER(c_int), c_uint]
>>
>> def sum(data):
>>     return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data))
>>
>> n = 10
>> data = N.arange(n)
>>
>> print data
>> print "sum(data) is " + str(sum(data))
>>
>> That works OK, but to avoid the platform dependency, I would like to use
>> load_library from numpy: I just replace the cdll.LoadLibrary by:
>>
>> _hello = N.ctypeslib.load_library('hello', '.')
>
> Shouldn't that be 'libhello'? Try
>
> _hello = N.ctypes_load_library('libhello', '__file__')

Well, the library name convention under Unix, as far as I know, is
'lib' + name + '.so' + version. And if I put "lib" in front of "hello",
it then does not work under Windows.

David
From: Albert S. <fu...@gm...> - 2006-08-18 10:40:05
|
Hello all

> -----Original Message-----
> From: num...@li... [mailto:numpy-
> dis...@li...] On Behalf Of David Cournapeau
> Sent: 18 August 2006 06:55
> To: Discussion of Numerical Python
> Subject: [Numpy-discussion] ctypes: how does load_library work ?
>
> <snip>
> That works OK, but to avoid the platform dependency, I would like to use
> load_library from numpy: I just replace the cdll.LoadLibrary by:
>
> _hello = N.ctypeslib.load_library('hello', '.')
>
> which does not work. The python interpreter returns a strange error
> message, because it says hello.so.so is not found, and it is looking for
> the library in the directory usr/$(PWD), which does not make sense to
> me. Is it a bug, or am I just not understanding how to use the
> load_library function?

load_library currently assumes that library names don't have a prefix. We
might want to rethink this assumption on Linux and other Unixes.

load_library's second argument is a filename or a directory name. If it's a
directory, load_library looks for hello.<platform-specific extension> in that
directory. If it's a filename, load_library calls os.path.dirname to get a
directory. The idea with this is that in a module you'll probably have one
file that loads the library and sets up argtypes and restypes, and there
you'll do (in mylib.py):

    _mylib = numpy.ctypeslib.load_library('mylib_', __file__)

and then the library will be installed in the same directory as mylib.py.

Better suggestions for doing all this appreciated. ;-)

Cheers,

Albert
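To make the lookup rule described in the previous message concrete, here is an illustrative sketch of that resolution logic, written from the description above rather than from numpy's actual source; the helper name resolve_library_path is made up for the example:

```python
import os
import sys

def resolve_library_path(libname, loader_path):
    """Sketch of the lookup described above (not numpy's real code)."""
    # Pick a plausible platform-specific shared-library extension.
    if sys.platform.startswith("win"):
        ext = ".dll"
    elif sys.platform == "darwin":
        ext = ".dylib"
    else:
        ext = ".so"
    if not libname.endswith(ext):
        libname += ext
    # A file name such as __file__ is reduced to its directory.
    if not os.path.isdir(loader_path):
        loader_path = os.path.dirname(loader_path)
    return os.path.join(loader_path, libname)

# resolve_library_path('mylib_', '/some/pkg/mylib.py')
# -> '/some/pkg/mylib_.so' on Linux
```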
From: Albert S. <fu...@gm...> - 2006-08-18 10:31:10
|
Hello all

> <snip>
> I decided to upgrade to 1.0b2 just to see what I get and now I get 7kB of
> "possibly lost" memory, coming from PyObject_Malloc (in
> /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it
> isn't getting any larger if I increase the loop iterations. Looks good
> then. I don't really know the meaning of this "possibly lost" memory.

http://projects.scipy.org/scipy/numpy/ticket/195

This leak is caused by add_docstring, but it's supposed to leak. I wonder if
there's a way to register some kind of on-exit handler in Python so that this
can also be cleaned up?

Cheers,

Albert
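On the on-exit handler question: pure Python can register such a hook with the standard atexit module. Whether that would actually help for add_docstring's C-level allocation is left open here; the sketch below only shows the registration mechanism, and the _docstring_cache name is hypothetical:

```python
import atexit

_docstring_cache = []  # hypothetical registry of objects kept alive on purpose

def _cleanup_docstrings():
    # Drop the references at interpreter shutdown so a tool like valgrind
    # no longer flags the memory as "possibly lost".
    del _docstring_cache[:]

atexit.register(_cleanup_docstrings)
```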
From: Stefan v. d. W. <st...@su...> - 2006-08-18 09:17:09
|
On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote:
> import numpy as N
> from ctypes import cdll, POINTER, c_int, c_uint
>
> _hello = cdll.LoadLibrary('libhello.so')
>
> _hello.sum.restype = c_int
> _hello.sum.argtypes = [POINTER(c_int), c_uint]
>
> def sum(data):
>     return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data))
>
> n = 10
> data = N.arange(n)
>
> print data
> print "sum(data) is " + str(sum(data))
>
> That works OK, but to avoid the platform dependency, I would like to use
> load_library from numpy: I just replace the cdll.LoadLibrary by:
>
> _hello = N.ctypeslib.load_library('hello', '.')

Shouldn't that be 'libhello'? Try

    _hello = N.ctypes_load_library('libhello', '__file__')

Cheers
Stéfan
From: Bill B. <wb...@gm...> - 2006-08-18 08:06:49
|
Thanks for the info Nils. Sounds like it was fixed post-1.0b1. Good news.
And Trac seems to be letting me in again. Not sure what was wrong there.

--bb

On 8/18/06, Nils Wagner <nw...@ia...> wrote:
> Bill Baxter wrote:
>> If you do this:
>> >>> numpy.linalg.eig(numpy.random.rand(3,3))
>>
>> You'll (almost always) get a wrong answer back from numpy. Something like:
>>
>> (array([ 1.72167898, -0.07251007, -0.07251007]),
>>  array([[ 0.47908847,  0.72095163,  0.72095163],
>>         [ 0.56659142, -0.46403504, -0.46403504],
>>         [ 0.67040914,  0.01361572,  0.01361572]]))
>>
>> The return value should be complex (unless rand() just happens to
>> return something symmetric).
>>
>> It really needs to either throw an exception, or preferably for this
>> function, just go ahead and return something complex, like the
>> numpy.dft functions do. On the other hand, it would be nice to stick
>> with plain doubles if the output isn't complex, but I'd rather get the
>> right output all the time than get the minimal type that will handle
>> the output.
>>
>> This is with beta 1.
>>
>> Incidentally, I tried logging into the Trac here:
>> http://projects.scipy.org/scipy/scipy
>> to file a bug, but it wouldn't let me in under the account I've been
>> using for a while now. Is the login system broken? Were passwords
>> reset or something?
>>
>> --bb
>
> AFAIK this problem is fixed.
>
> http://projects.scipy.org/scipy/numpy/ticket/215
>
> I have no problem wrt the Trac system.
>
> Nils
From: Nils W. <nw...@ia...> - 2006-08-18 07:23:33
|
Bill Baxter wrote:
> If you do this:
> >>> numpy.linalg.eig(numpy.random.rand(3,3))
>
> You'll (almost always) get a wrong answer back from numpy. Something like:
>
> (array([ 1.72167898, -0.07251007, -0.07251007]),
>  array([[ 0.47908847,  0.72095163,  0.72095163],
>         [ 0.56659142, -0.46403504, -0.46403504],
>         [ 0.67040914,  0.01361572,  0.01361572]]))
>
> The return value should be complex (unless rand() just happens to
> return something symmetric).
>
> It really needs to either throw an exception, or preferably for this
> function, just go ahead and return something complex, like the
> numpy.dft functions do. On the other hand, it would be nice to stick
> with plain doubles if the output isn't complex, but I'd rather get the
> right output all the time than get the minimal type that will handle
> the output.
>
> This is with beta 1.
>
> Incidentally, I tried logging into the Trac here:
> http://projects.scipy.org/scipy/scipy
> to file a bug, but it wouldn't let me in under the account I've been
> using for a while now. Is the login system broken? Were passwords
> reset or something?
>
> --bb

AFAIK this problem is fixed.

http://projects.scipy.org/scipy/numpy/ticket/215

I have no problem wrt the Trac system.

Nils
From: Joris De R. <jo...@st...> - 2006-08-18 07:23:31
|
Hi,

In the README.txt of the numpy installation it says that one can use a site.cfg file to specify non-standard locations of the ATLAS and LAPACK libraries, but it doesn't explain how.

I have a directory software/atlas3.6.0/lib/Linux_PPROSSE2/ which contains

    libcombinedlapack.a  libatlas.a  libcblas.a  libf77blas.a  liblapack.a  libtstatlas.a

where liblapack.a contains the few LAPACK routines provided by ATLAS, and libcombinedlapack.a (> 5 MB) contains the full LAPACK library including the few optimized routines of ATLAS.

From the example in numpy/distutils/system_info.py I figured that my site.cfg file should look like

--- site.cfg ---
[atlas]
library_dirs = /software/atlas3.6.0/lib/Linux_PPROSSE2/
atlas_libs = combinedlapack, f77blas, cblas, atlas
---------------

However, during the numpy installation it says:

    FOUND:
      libraries = ['combinedlapack', 'f77blas', 'cblas', 'atlas']
      library_dirs = ['/software/atlas3.6.0/lib/Linux_PPROSSE2/']

which is good, but afterwards it also says:

    Lapack library (from ATLAS) is probably incomplete:
      size of /software/atlas3.6.0/lib/Linux_PPROSSE2/liblapack.a is 305k (expected >4000k)

even though it shouldn't use that library at all. Strangely enough, renaming libcombinedlapack.a to liblapack.a and adapting the site.cfg file accordingly still gives the same message.

Any pointers?

Joris
From: David C. <da...@ar...> - 2006-08-18 04:47:51
|
Hi,

I am investigating the use of ctypes to write C extensions for numpy/scipy. First, thank you for the wiki; it makes it easy to implement a wrapper for a C function taking arrays as arguments in a few minutes.

I am running recent SVN versions of numpy and scipy, and I couldn't make load_library work as I expected. Let's say I have a libhello.so library on Linux which contains the C function int sum(const int* in, size_t n). To wrap it, I use:

import numpy as N
from ctypes import cdll, POINTER, c_int, c_uint

_hello = cdll.LoadLibrary('libhello.so')

_hello.sum.restype = c_int
_hello.sum.argtypes = [POINTER(c_int), c_uint]

def sum(data):
    return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data))

n = 10
data = N.arange(n)

print data
print "sum(data) is " + str(sum(data))

That works OK, but to avoid the platform dependency, I would like to use load_library from numpy: I just replace the cdll.LoadLibrary call by:

_hello = N.ctypeslib.load_library('hello', '.')

which does not work. The Python interpreter returns a strange error message: it says hello.so.so is not found, and it is looking for the library in the directory usr/$(PWD), which does not make sense to me. Is it a bug, or am I just not understanding how to use the load_library function?

David
From: Bill B. <wb...@gm...> - 2006-08-18 04:13:11
|
If you do this:

>>> numpy.linalg.eig(numpy.random.rand(3,3))

You'll (almost always) get a wrong answer back from numpy. Something like:

(array([ 1.72167898, -0.07251007, -0.07251007]),
 array([[ 0.47908847,  0.72095163,  0.72095163],
        [ 0.56659142, -0.46403504, -0.46403504],
        [ 0.67040914,  0.01361572,  0.01361572]]))

The return value should be complex (unless rand() just happens to return something symmetric).

It really needs to either throw an exception, or preferably for this function, just go ahead and return something complex, like the numpy.dft functions do. On the other hand, it would be nice to stick with plain doubles if the output isn't complex, but I'd rather get the right output all the time than get the minimal type that will handle the output.

This is with beta 1.

Incidentally, I tried logging into the Trac here:

http://projects.scipy.org/scipy/scipy

to file a bug, but it wouldn't let me in under the account I've been using for a while now. Is the login system broken? Were passwords reset or something?

--bb
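For anyone checking this behaviour on a current NumPy (a sketch, not part of the original report): eig now returns a complex eigenvalue array whenever any eigenvalue of a real non-symmetric matrix has a non-zero imaginary part, and the result can be verified directly:

```python
import numpy as np

a = np.random.rand(3, 3)   # a generic, almost surely non-symmetric matrix
w, v = np.linalg.eig(a)

# w is complex when eigenvalues come in conjugate pairs, float otherwise.
print(w.dtype)
print(np.iscomplexobj(w))

# Sanity check of the decomposition: A v_j = w_j v_j for every column j.
print(np.allclose(a @ v, v * w))
```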
From: David G. <dav...@gm...> - 2006-08-18 00:08:33
|
On 8/17/06, Robert Kern <rob...@gm...> wrote:
> David Grant wrote:
>> Hello all,
>>
>> I had a massive memory leak in some of my code. It would basically end
>> up using up all 1GB of my RAM or more if I don't kill the application.
>> I managed to finally figure out which portion of the code was causing
>> the leak (with great difficulty) and have a little example which
>> exposes the leak. I am using numpy-0.9.8 and I'm wondering if perhaps
>> this is already fixed in 1.0b2. Run this through valgrind with
>> appropriate options (I used the recommended valgrind_py.sh that I found
>> on scipy's site somewhere) and this will leak 100kB. Increase the
>> xrange on the big loop and you can watch the memory increase over time
>> in top.
>
> I don't see a leak in 1.0b2.dev3002.

Thanks Robert. I decided to upgrade to 1.0b2 just to see what I get, and now I get 7kB of "possibly lost" memory, coming from PyObject_Malloc (in /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it isn't getting any larger if I increase the loop iterations. Looks good then. I don't really know the meaning of this "possibly lost" memory.

--
David Grant
http://www.davidgrant.ca
From: Robert K. <rob...@gm...> - 2006-08-17 23:30:44
|
David Grant wrote:
> Hello all,
>
> I had a massive memory leak in some of my code. It would basically end
> up using up all 1GB of my RAM or more if I don't kill the application.
> I managed to finally figure out which portion of the code was causing
> the leak (with great difficulty) and have a little example which
> exposes the leak. I am using numpy-0.9.8 and I'm wondering if perhaps
> this is already fixed in 1.0b2. Run this through valgrind with
> appropriate options (I used the recommended valgrind_py.sh that I found
> on scipy's site somewhere) and this will leak 100kB. Increase the
> xrange on the big loop and you can watch the memory increase over time
> in top.

I don't see a leak in 1.0b2.dev3002.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
From: David G. <dav...@gm...> - 2006-08-17 23:25:18
|
Hello all,

I had a massive memory leak in some of my code. It would basically end up using up all 1GB of my RAM or more if I don't kill the application. I managed to finally figure out which portion of the code was causing the leak (with great difficulty) and have a little example which exposes the leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is already fixed in 1.0b2. Run this through valgrind with appropriate options (I used the recommended valgrind_py.sh that I found on scipy's site somewhere) and this will leak 100kB. Increase the xrange on the big loop and you can watch the memory increase over time in top.

The interesting thing is that the only difference between the leaky and non-leaky code is:

    if not adjacencyMatrix[anInt2,anInt1] == 0:    (leaky)

vs.

    if not adjacencyMatrix[anInt2][anInt1] == 0:   (non-leaky)

However, another way to make the leaky code non-leaky is to change anArrayOfInts to just be [1].

Here's the code:

from numpy import array

def leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix):
    ys = set()
    for aList in aListOfArrays:
        for anInt1 in anArrayOfInts:
            for anInt2 in aList:
                if not adjacencyMatrix[anInt2,anInt1] == 0:
                    ys.add(anInt1)
    return ys

def nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix):
    ys = set()
    for aList in aListOfArrays:
        for anInt1 in anArrayOfInts:
            for anInt2 in aList:
                if not adjacencyMatrix[anInt2][anInt1] == 0:
                    ys.add(anInt1)
    return ys

if __name__ == "__main__":
    for i in xrange(10000):
        aListOfArrays = [[0, 1]]
        anArrayOfInts = array([1])
        adjacencyMatrix = array([[0,1],[1,0]])
        # COMMENT OUT ONE OF THE 2 LINES BELOW
        # bar = nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix)
        bar = leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix)

--
David Grant
http://www.davidgrant.ca
From: David G. <dav...@gm...> - 2006-08-17 19:48:40
|
I'm contemplating upgrading to 1.0b2. The main reason is that I am experiencing a major memory leak, and before I report a bug I think the developers would appreciate it if I were using the most recent version.

Am I correct that the only major change that might actually break my code is that the following functions:

take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, amin, prod, cumprod, mean, std, var

now have axis=None as the default argument?

BTW, how come alter_code2.py (http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) says in its docstring that it "converts functions that don't give axis= keyword that have changed", but I don't see it actually doing that anywhere in the code?

Thanks,

David

--
David Grant
http://www.davidgrant.ca
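A small sketch of the axis=None default asked about above, run against a current NumPy rather than the 1.0b2 under discussion:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# With axis=None as the default, the reduction runs over the flattened
# array; the old Numeric-style behaviour needs an explicit axis=0.
print(a.sum())          # 15
print(a.sum(axis=0))    # [3 5 7]
print(a.mean(), a.mean(axis=0))   # 2.5 and [1.5 2.5 3.5]
```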
From: Chris K. <chr...@er...> - 2006-08-17 17:01:33
|
Hi,

I just ran convertcode.py on my code (from the latest svn source of numpy) and it looks like it just changed the import statements to

import numpy.oldnumeric as Numeric

So it doesn't look like it's really helping me move over to the new usage. Is there a script that converts code to use the new numpy as it's intended to be used?

Thanks,
Chris
From: Alan G I. <ai...@am...> - 2006-08-17 16:29:19
|
In BibTeX format, fwiw.

Alan Isaac

@MANUAL{Oliphant:2006,
  author      = {Oliphant, Travis E.},
  year        = 2006,
  title       = {Guide to NumPy},
  month       = mar,
  address     = {Provo, UT},
  institution = {Brigham Young University}
}

@ARTICLE{Dubois+etal:1996,
  author  = {Dubois, Paul F. and Konrad Hinsen and James Hugunin},
  year    = {1996},
  title   = {Numerical Python},
  journal = {Computers in Physics},
  volume  = 10,
  number  = 3,
  month   = {May/June}
}

@ARTICLE{Dubois:1999,
  author  = {Dubois, Paul F.},
  year    = 1999,
  title   = {Extending Python with Fortran},
  journal = {Computing Science and Engineering},
  volume  = 1,
  number  = 5,
  month   = {Sep/Oct},
  pages   = {66--73}
}

@ARTICLE{Scherer+etal:2000,
  author  = {Scherer, David and Paul Dubois and Bruce Sherwood},
  year    = 2000,
  title   = {VPython: 3D Interactive Scientific Graphics for Students},
  journal = {Computing in Science and Engineering},
  volume  = 2,
  number  = 5,
  month   = {Sep/Oct},
  pages   = {56--62}
}

@MANUAL{Ascher+etal:1999,
  author       = {Ascher, David and Paul F. Dubois and Konrad Hinsen and James Hugunin and Travis Oliphant},
  year         = 1999,
  title        = {Numerical Python},
  edition      = {UCRL-MA-128569},
  address      = {Livermore, CA},
  organization = {Lawrence Livermore National Laboratory}
}
From: Christopher H. <ch...@st...> - 2006-08-17 12:23:12
|
What happened to numpy.bool8? I realize that bool_ is just as good. I was just wondering what motivated the change?

Chris
From: Stephen W. <drs...@gm...> - 2006-08-16 23:51:27
|
On 8/16/06, kor...@id... <kor...@id...> wrote:
> All of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and
> sxt3 are floats.
>
> A = array([[N, st, st2, st3], [st, st2, st3, st4], [st2, st3, st4, st5],
>            [st3, st4, st5, st6]])
> B = array([sx, sxt, sxt2, sxt3])
> lina = linalg.solve(A, B)

Is your matrix A in fact singular? Without numerical values of A, st, etc., it is hard to know.
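One way to answer that question numerically, sketched with a current NumPy and with made-up sums corresponding to sample times t = 0, 1, 2, 3 (real data would use the values computed in the original code):

```python
import numpy as np

# Normal-equations matrix [[N, st, st2, st3], ...] for t = [0, 1, 2, 3].
A = np.array([[  4.,   6.,  14.,  36.],
              [  6.,  14.,  36.,  98.],
              [ 14.,  36.,  98., 276.],
              [ 36.,  98., 276., 794.]])

# A rank below 4, or an enormous condition number, is what makes
# linalg.solve raise "Singular matrix" or return meaningless results.
print(np.linalg.matrix_rank(A))   # 4 here: this particular system is solvable
print(np.linalg.cond(A))          # very large values warn of near-singularity
```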
From: Robert K. <rob...@gm...> - 2006-08-16 23:34:04
|
Yatima Meiji wrote:
> I'm currently running a fresh install of Suse 10.1. I ran the numpy
> setup script using "python setup.py install" and it fails with this error:
>
> Running from numpy source directory.
> Traceback (most recent call last):
>   File "setup.py", line 89, in ?
>     setup_package()
>   File "setup.py", line 59, in setup_package
>     from numpy.distutils.core import setup
>   File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ?
>     import ccompiler
>   File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ?
>     from distutils.ccompiler import *
> ImportError: No module named distutils.ccompiler
>
> I checked ccompiler.py to see what was wrong. I'm not much of a
> programmer, but it seems strange to have ccompiler.py reference itself.

It's not; it's trying to import from the standard library's distutils.ccompiler module. Suse, like several other Linux distributions, separates distutils from the rest of the standard library in a separate package which you will need to install. It will be called something like python-dev or python-devel.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
From: Yatima M. <yat...@gm...> - 2006-08-16 23:29:15
|
I'm currently running a fresh install of Suse 10.1. I ran the numpy setup script using "python setup.py install" and it fails with this error:

Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 89, in ?
    setup_package()
  File "setup.py", line 59, in setup_package
    from numpy.distutils.core import setup
  File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ?
    import ccompiler
  File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ?
    from distutils.ccompiler import *
ImportError: No module named distutils.ccompiler

I checked ccompiler.py to see what was wrong. I'm not much of a programmer, but it seems strange to have ccompiler.py reference itself. I'm guessing others have compiled numpy just fine, so what's wrong with me?

Thanks in advance.

--
"Physics is like sex: sure, it may give some practical results, but that's not why we do it."
  -- Richard P. Feynman
From: Travis O. <oli...@ie...> - 2006-08-16 21:10:39
|
Elijah Gregory wrote:
> Dear NumPy Users,
>
> I am attempting to install numpy-0.9.8 as a user on a unix system.
> When I install numpy by typing "python setup.py install" as per the
> (only) instructions in the README.txt file, everything proceeds
> smoothly until some point where the script attempts to write a file to
> the root-level /usr/lib64. How can I configure the setup.py script to
> use my user-level directories which I do have access to? Also, given
> that the install exited with an error, how do I clean up the aborted
> installation?

Is there a particular reason you are installing numpy-0.9.8? Please use the latest version, as 0.9.8 is a pre-beta release.

-Travis
From: <kor...@id...> - 2006-08-16 20:54:49
|
All of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and sxt3 are floats.

A = array([[N, st, st2, st3], [st, st2, st3, st4], [st2, st3, st4, st5],
           [st3, st4, st5, st6]])
B = array([sx, sxt, sxt2, sxt3])
lina = linalg.solve(A, B)

Is there something wrong with this code? It is returning:

  File "C:\PYTHON23\Lib\site-packages\numpy\linalg\linalg.py", line 138, in solve
    raise LinAlgError, 'Singular matrix'
numpy.linalg.linalg.LinAlgError: Singular matrix

Does anyone know what I am doing wrong?

-Kenny
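The A and B above are the normal equations of a cubic least-squares fit. An alternative worth noting (not something raised in the original thread) is to hand the design matrix straight to linalg.lstsq, which tolerates rank-deficient data instead of raising; the sample values below are made up:

```python
import numpy as np

# Hypothetical samples standing in for the data behind the summed terms.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.array([1.0, 2.9, 6.1, 11.2, 18.3])

# Columns t^3, t^2, t, 1: the same cubic model the normal equations encode.
M = np.vander(t, 4)
coeffs, residuals, rank, sv = np.linalg.lstsq(M, x, rcond=None)
print(coeffs)   # highest-degree coefficient first
print(rank)     # a rank below 4 would explain the "Singular matrix" error
```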