
#1996 HAPI caching shows bug with one-day pngwalks

Milestone: nextrelease
Status: open-fixed
Owner: nobody
Labels: None
Priority: 5
Updated: 2018-06-06
Created: 2018-06-05
Private: No

I kept running out of memory when running pngwalks of vaps using the new Cassini HAPI servers, and I figured out that it's because data is loaded into a cache but then is never removed. Try this:

  1. start new Autoplot with HAPI caching enabled.
  2. load vap+hapi:http://planet.physics.uiowa.edu/das/das2Server/hapi?id=Cassini%2FRPWS%2FSurvey_KeyParam%2CE&timerange=2002-01-01
  3. click "scan>>" next to the x-axis twice to step forward through two days.
  4. in the console at "AP>", enter "from org.autoplot.hapi import HapiDataSource"
  5. then at "AP>", enter "HapiDataSource.printCacheStats()"

This shows that each day's data is still loaded in memory, when it should have been unloaded and written to a cache file.
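In other words, the expected behavior is write-then-evict. Here is a minimal sketch of that pattern (the `DayCache` class, file layout, and method names are all hypothetical, not Autoplot's actual HapiDataSource code): once a day's records are complete, they are written to a cache file and dropped from memory, so only the day currently being read stays resident.

```python
import os
import tempfile

class DayCache:
    """Hypothetical per-day cache illustrating write-then-evict."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self.in_memory = {}          # day -> records still buffered in memory

    def add(self, day, record):
        self.in_memory.setdefault(day, []).append(record)

    def day_complete(self, day):
        """Write the day's buffered records to disk, then evict them."""
        path = os.path.join(self.cache_dir, day + ".csv")
        with open(path, "w") as f:
            f.write("\n".join(self.in_memory[day]))
        del self.in_memory[day]      # the eviction step the bug skipped

cache = DayCache(tempfile.mkdtemp())
cache.add("2002-01-01", "00:00Z,1.0")
cache.day_complete("2002-01-01")
print(len(cache.in_memory))          # 0 -- nothing left buffered
```

With the bug, the `del` step never happens, so stepping through a pngwalk accumulates one day of data per step until memory runs out.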

Discussion

  • Jeremy Faden

    Jeremy Faden - 2018-06-06

    The bug does not show with a different HAPI source:
    vap+hapi:http://jfaden.net/HapiServerDemo/hapi?id=0B000800408DD710&timerange=2018-06-01

     
  • Jeremy Faden

    Jeremy Faden - 2018-06-06

The difference between the two is that the http://jfaden.net/HapiServerDemo/hapi?id=0B000800408DD710 dataset has a cadence parameter. The cadence is added to the start and end of the request, which are then rounded out to day boundaries, making a three-day request. (Autoplot really needs the feature where both the cached data and a small fresh read are used to satisfy the data request.) The http://planet.physics.uiowa.edu/das/das2Server/hapi?id=Cassini%2FRPWS%2FSurvey_KeyParam%2CE dataset doesn't have a cadence, so a single-day read is used, and it looks like there's nothing to remove the single-day cache entry.

So there are two problems: the dependence on the cadence to extend the request to the next measurement, which is acceptable; and the single-day read, which is never cleared from the cache.
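The rounding described above can be sketched as follows (a hypothetical illustration of the assumed behavior, not Autoplot's actual code): extend the requested interval by one cadence on each side, then round outward to whole-day boundaries.

```python
from datetime import datetime, timedelta

def round_out_to_days(start, end, cadence=None):
    """Extend [start, end) by one cadence on each side, then round
    outward to whole-day boundaries. Hypothetical sketch of the
    request-widening behavior described in this ticket."""
    if cadence is not None:
        start = start - cadence
        end = end + cadence
    # round start down to midnight
    start = start.replace(hour=0, minute=0, second=0, microsecond=0)
    # round end up to the next midnight, unless it is already midnight
    if (end.hour, end.minute, end.second, end.microsecond) != (0, 0, 0, 0):
        end = end.replace(hour=0, minute=0, second=0,
                          microsecond=0) + timedelta(days=1)
    return start, end

# With a cadence, a one-day request widens to three days:
s, e = round_out_to_days(datetime(2018, 6, 1), datetime(2018, 6, 2),
                         cadence=timedelta(seconds=60))
print(s, e)   # 2018-05-31 00:00:00 2018-06-03 00:00:00

# Without a cadence, it stays a single-day read:
s, e = round_out_to_days(datetime(2002, 1, 1), datetime(2002, 1, 2))
print(s, e)   # 2002-01-01 00:00:00 2002-01-02 00:00:00
```

This is why the HapiServerDemo dataset happened to mask the bug: its three-day reads went through a different path than the Cassini dataset's single-day reads.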

     
  • Jeremy Faden

    Jeremy Faden - 2018-06-06
    • status: open --> open-fixed
     
  • Jeremy Faden

    Jeremy Faden - 2018-06-06

    This is fixed. The bit of code after the data read loop had inconsistent logic, so it failed to write out the buffered data to the cache.
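A common shape for this kind of bug, sketched here as a hedged illustration (hypothetical names and interface, not the actual Autoplot code): the flush inside the read loop only fires when the day changes, so without a matching flush after the loop, the last buffered day is never written out.

```python
def read_and_cache(records, write_day):
    """records: iterable of (day, record) pairs in time order.
    write_day: callback that persists one day's buffered records
    (hypothetical interface for illustration)."""
    buf, current_day = [], None
    for day, rec in records:
        if day != current_day and buf:
            write_day(current_day, buf)   # flush the completed day
            buf = []
        current_day = day
        buf.append(rec)
    if buf:                               # the flush the buggy code missed
        write_day(current_day, buf)

written = {}
read_and_cache([("d1", 1), ("d1", 2), ("d2", 3)],
               lambda day, buf: written.__setitem__(day, list(buf)))
print(written)   # {'d1': [1, 2], 'd2': [3]}
```

Without the post-loop flush, "d2" would stay buffered in memory forever, which matches the symptom reported above.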

     