From: Erin S. <eri...@gm...> - 2006-11-12 23:56:33
Hi all-

I want to take the result from a database query and create a numpy array
with field names and types corresponding to the returned columns. The
DBI 2.0 compliant interfaces return lists of lists, e.g.

[[94137100072000193L, 94, 345.57215100000002, -0.83673208099999996],
 [94137100072000368L, 94, 345.60217299999999, -0.83766954299999996],
 ...
 [94137100083000157L, 94, 347.21668099999999, -0.83572582399999995],
 [94137100084000045L, 94, 347.45524799999998, -0.829750074]]

But the only examples I have found for creating an inhomogeneous array
with fields involve lists of tuples, e.g.

>>> mydescriptor = {'names': ('gender', 'age', 'weight'),
...                 'formats': ('S1', 'f4', 'f4')}
>>> a = array([('M', 64.0, 75.0), ('F', 25.0, 60.0)], dtype=mydescriptor)

Trying something like this with a list of lists results in the following
error:

TypeError: expected a readable buffer object

Now I could create the array and run a loop, copying the values in, but
that would be less efficient. Is there a way to do this in one step?

Thanks,
Erin
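[For context, a minimal sketch of the usual workaround: numpy accepts a
sequence of tuples for a structured dtype, so converting each inner list
to a tuple in a comprehension sidesteps the "readable buffer" error
without an explicit copy loop. The field names ('objid', 'run', 'ra',
'dec') are made up for illustration; substitute whatever your query
returns.]

```python
import numpy as np

# Rows as a DB-API cursor might return them (list of lists);
# values trimmed from the example above for brevity.
rows = [[94137100072000193, 94, 345.572151, -0.836732081],
        [94137100072000368, 94, 345.602173, -0.837669543]]

# Hypothetical field names/types matching the four returned columns.
mydescriptor = {'names': ('objid', 'run', 'ra', 'dec'),
                'formats': ('i8', 'i4', 'f8', 'f8')}

# Structured-array construction wants tuples, not lists, per record;
# tuple-ifying each row lets np.array build the array in one call.
arr = np.array([tuple(row) for row in rows], dtype=mydescriptor)

print(arr['ra'])
```

numpy.rec.fromrecords may also accept a list of lists directly, though
that goes through the recarray interface rather than a plain ndarray.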