I kept running out of memory when running pngwalks of vaps using the new Cassini HAPI servers, and I figured out that it's because data is loaded into an in-memory cache but never removed. To reproduce:
- start new Autoplot with HAPI caching enabled.
- load vap+hapi:http://planet.physics.uiowa.edu/das/das2Server/hapi?id=Cassini%2FRPWS%2FSurvey_KeyParam%2CE&timerange=2002-01-01
- click "scan>>" next to the x-axis twice to step forward through two days.
- in the console at "AP>", enter "from org.autoplot.hapi import HapiDataSource"
- then at "AP>", enter "HapiDataSource.printCacheStats()"
This shows that each file is loaded into memory and kept there, when it should have been unloaded after being written out to a cache file.
The bug does not show with a different HAPI source:
vap+hapi:http://jfaden.net/HapiServerDemo/hapi?id=0B000800408DD710&timerange=2018-06-01
The difference between the two is that the http://jfaden.net/HapiServerDemo/hapi?id=0B000800408DD710 dataset has a cadence parameter. The cadence is added to the start and end of the request, which are then rounded out to day boundaries, making a three-day request. (Autoplot really needs the feature where cached data and a small fresh read are combined to satisfy the data request.) The http://planet.physics.uiowa.edu/das/das2Server/hapi?id=Cassini%2FRPWS%2FSurvey_KeyParam%2CE dataset doesn't have a cadence, so the single-day read is used, and it appears nothing removes the single-day cache entry from memory.
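The rounding behavior described above can be sketched like this (illustrative Python, not Autoplot's actual Java code; round_out_to_days is a hypothetical helper standing in for the request logic):

```python
from datetime import datetime, timedelta

def round_out_to_days(start, stop, cadence=None):
    """Sketch: extend the requested interval by one cadence on each
    side, then round outward to day boundaries."""
    if cadence is not None:
        start = start - cadence
        stop = stop + cadence
    # round start down to the beginning of its day
    start = start.replace(hour=0, minute=0, second=0, microsecond=0)
    # round stop up to the next day boundary if not already on one
    floor = stop.replace(hour=0, minute=0, second=0, microsecond=0)
    if stop != floor:
        stop = floor + timedelta(days=1)
    return start, stop

# With a cadence, a one-day request becomes a three-day request:
s, e = round_out_to_days(datetime(2018, 6, 1), datetime(2018, 6, 2),
                         cadence=timedelta(seconds=60))
print(e - s)  # 3 days

# Without a cadence, the single-day request is used as-is:
s, e = round_out_to_days(datetime(2018, 6, 1), datetime(2018, 6, 2))
print(e - s)  # 1 day
```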
So there are two problems: one with the dependence on the cadence to extend the request to the next measurement, which is acceptable for now; and one where the single-day read is never cleared from the cache.
This is fixed. The bit of code after the data read loop had inconsistent logic, so it failed to write the buffered data out to the cache.
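The corrected pattern can be sketched as follows (illustrative Python; flush_buffer and the dictionaries are hypothetical stand-ins, not the actual HapiDataSource code):

```python
def flush_buffer(buffered, cache):
    """Sketch of the post-loop step the buggy code skipped: write each
    buffered day's records to the cache store, then clear the buffer
    so the records do not stay resident in memory."""
    for day, records in buffered.items():
        cache[day] = records   # persist to the cache (a file, in Autoplot)
    buffered.clear()           # release the in-memory copies
    return cache

# Usage: after the read loop fills the per-day buffer, flush it.
buf = {"2002-01-01": ["rec1", "rec2"], "2002-01-02": ["rec3"]}
cache = {}
flush_buffer(buf, cache)
print(buf)    # {} -- memory released
print(cache)  # both days written out
```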