Hi everyone, I tried with the elevation data in the same folder as ANUGA and it worked, but when I tried to access the elevation data from a different folder (on a different drive), it did not work. Is this not possible with ANUGA? Please help with your suggestions.
It is good that you verified that it works when your elevation data is in the same folder as ANUGA - then we know things work.
There is no reason why ANUGA can't work with data anywhere on your file system and, indeed, all you need to do is to provide a valid absolute pathname such as /home/ambflw/data/elevation.asc or E:\data\elevation.asc or whatever it is.
Can you let us know what the pathname is?
Thank you Ole, now it's working. Actually, I was trying to modify the readlines() call in caching.py because it needs a lot of RAM, and I was trying to find another method so that I can read large data files on my laptop. The pathname was correct. Thank you for the suggestions.
Can anyone please tell me how to deal with a MemoryError from readlines()? When converting with asc2dem, it is not able to read big files even though I am using a high-memory system.
What is this error:
"MemoryError for readilines(). "
Can you show us?
How much memory do you have and how many rows, columns does the ASC file have?
The exact error is:
"Traceback (most recent call last):
File "C:\Users\ambuj\AppData\Local\VirtualStore\Program Files (x86)\Python25\Lib\site-packages\anuga\ausflow.py", line 30, in <module>
anuga.asc2dem(file_name+'.asc', use_cache=True, verbose=True)
File "C:\Program Files\Python25\lib\site-packages\anuga\file_conversion\asc2dem.py", line 49, in asc2dem
File "C:\Program Files\Python25\lib\site-packages\anuga\caching\caching.py", line 379, in cache
T = my_F(*args, **kwargs) # Built-in 'apply' deprecated in Py3K
File "C:\Program Files\Python25\lib\site-packages\anuga\file_conversion\asc2dem.py", line 122, in _convert_dem_from_ascii2netcdf
lines = datafile.readlines()
I am using my institute's server and the memory is quite high, around 20 GB or so. The size of the .asc file is 14.9 GB. I even tried with smaller data (around 7 GB) but still got the same error. I don't know why I am getting this error, even though readlines() loads the whole file at once.
Thanks for the details. I agree that this is strange. You are right that readlines() will read the entire file into memory, so this could either be a problem with Python or other structures already using up the memory.
Is there any way you could write a few lines of Python that would open this file and call readlines() in isolation?
Just to check.
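Such an isolation check could be as small as the sketch below. The real path to the .asc file isn't shown in the thread, so the script generates a tiny stand-in file; pointing `path` at the actual elevation file instead reproduces the test Ole is asking for.

```python
# Minimal isolation test for the readlines() memory issue.
# A small sample file is generated here as a stand-in; replace `path`
# with the real .asc file to reproduce the MemoryError outside ANUGA.
import os
import tempfile

# Create a tiny stand-in for the elevation file (assumption: the real
# file is plain text, one row of the grid per line).
fd, path = tempfile.mkstemp(suffix='.asc')
with os.fdopen(fd, 'w') as f:
    for i in range(1000):
        f.write(' '.join(str(float(j)) for j in range(10)) + '\n')

with open(path) as datafile:
    lines = datafile.readlines()  # the call that fails on the 14.9 GB file

print('read %d lines' % len(lines))
os.remove(path)
```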
If indeed this is taking up too much memory, it would appear that the conversion from ASC to the NetCDF (DEM) format should be done in blocks - i.e. by reading, say, 1000 lines at a time, then writing to the NetCDF file.
After all, this is just a conversion step before ANUGA can get started.
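A block-wise version of that conversion could look like the sketch below. It parses a fixed number of lines at a time instead of calling readlines() on the whole file; the actual NetCDF write step is only indicated by a placeholder `write_block` function, since the exact DEM variable layout isn't shown in this thread.

```python
# Sketch of a block-wise ASC-to-DEM style conversion: read a fixed
# number of lines at a time instead of readlines()-ing everything.
# `write_block` is a placeholder for the real NetCDF write step.
import os
import tempfile

BLOCK_SIZE = 1000  # lines per block; tune to available memory

def write_block(rows, block_index):
    # Placeholder: in a real converter this would append `rows`
    # to the NetCDF elevation variable on disk.
    pass

def convert_in_blocks(path, block_size=BLOCK_SIZE):
    """Parse an ASC-style text file block by block, never holding
    more than `block_size` parsed lines in memory at once."""
    n_lines = 0
    block = []
    with open(path) as datafile:
        for line in datafile:              # reads one line at a time
            block.append([float(x) for x in line.split()])
            if len(block) == block_size:
                write_block(block, n_lines // block_size)
                block = []
            n_lines += 1
    if block:                              # flush the final partial block
        write_block(block, n_lines // block_size)
    return n_lines

# Demo on a small generated file
fd, path = tempfile.mkstemp(suffix='.asc')
with os.fdopen(fd, 'w') as f:
    for i in range(2500):
        f.write('1.0 2.0 3.0\n')
print(convert_in_blocks(path))  # -> 2500
os.remove(path)
```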
Sorry I can't help more.
I tried to read the file separately and it is not working. Is there any other way to convert .asc to .pts format without using Python?
Ole, I would like to know how you cope with this problem, because I think when you validate ANUGA you would also use large amounts of data. The problem is not only with the conversion function but also in the following steps, for example:
Applying fitted data to domain
Traceback (most recent call last):
File "C:\Users\ambuj\AppData\Local\VirtualStore\Program Files\Python25\Lib\site-packages\anuga\s19oct.py", line 73, in <module>
G1 = Geospatial_data(file_name = 'F:/Ambuj/modeltest/9oct/small1' + '.pts')
File "C:\Program Files\Python25\lib\site-packages\anuga\geospatial_data\geospatial_data.py", line 176, in __init__
File "C:\Program Files\Python25\lib\site-packages\anuga\geospatial_data\geospatial_data.py", line 557, in import_points_file
File "C:\Program Files\Python25\lib\site-packages\anuga\geospatial_data\geospatial_data.py", line 991, in _read_pts_file
pointlist = num.array(fid.variables)
I cannot recall having had this problem when we did validation. Perhaps others might chip in here. From your postings you have memory problems both when you convert from ASC to DEM and when you read an existing NetCDF PTS file. And both fail when you run them in isolation - i.e. outside ANUGA, with nothing else big running on your system, right?
In the former case, it looks like you'll have to read a limited number of lines at a time, then store them to NetCDF. ASC to DEM is just converting from text to NetCDF, so no fancy processing is required. The code could even change from
lines = datafile.readlines()
for line in lines:

to

for line in datafile:

as iterating over the file object reads one line at a time.
As for memory error when reading the existing .pts file, can you try that in isolation and let me know?
Yes, I tried in isolation, both the reading of the .asc file and the .pts file, but it is not working and still shows a memory error. Is there any other way to convert a raster or .asc file to a .pts file? Maybe I can get it to work by selecting smaller areas of interest once I have the .pts file.
OK - then it is simply that the data is too big for the memory.
The best solution to the ASC2DEM question would be to rewrite the conversion script so that it does not read all the lines at one time.
As for reading the PTS file, it would have to be something similar.
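Something similar for the PTS case might look like the sketch below: process the points variable in slices of rows rather than materialising the whole thing at once (as `num.array(fid.variables...)` does in `_read_pts_file`). With a real NetCDF variable, slicing `points[start:stop]` reads only those rows from disk; a plain list of rows stands in for the variable here so the sketch runs anywhere, which is an assumption about the real file layout.

```python
# Sketch of processing a large points variable in blocks of rows.
# A list of [x, y] rows stands in for the NetCDF 'points' variable;
# with NetCDF, each slice would read only that range of rows from disk.
BLOCK = 4  # rows per slice; tune to available memory

points = [[float(i), float(i + 1)] for i in range(0, 20, 2)]  # stand-in data

total = 0.0
for start in range(0, len(points), BLOCK):
    block = points[start:start + BLOCK]     # read only BLOCK rows
    total += sum(sum(row) for row in block)  # process slice, then discard it

print(total)  # -> 190.0
```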
Other options include:
Go for a smaller area.
See if you can grid your points data using another package.
How big are the files again?
We can also put the question to the mailing list: email@example.com
Sorry I can't give you a quick fix.