Hi,

I am experiencing "memory problems" when observing PVs over a long time span. Here is what I did:

Started

* custom CSS product based on 3.2.15
* NSLS-II 3.2.16a (downloaded from website)
* SNS 3.2.15 (downloaded from website)

The two pre-built products:
* clean workspace
* defaults as in the ini file
* opened the PV Table and added 20 "real" PVs (multiple current doubles) and 6 "sim" PVs

Custom build:
* same EPICS config
* a custom table with the same PVs connected, updating the table cells (at 4 Hz each)

OS: openSUSE 12.3 (x86_64)
Kernel: 3.7.10-1.28-desktop
VM: 1.7.0_51 OpenJDK 64-bit (later also tested with Oracle JDK, see below)

The observed behaviour was that after nearly five days the memory (physical and swap) was almost full.
Memory consumption according to "top" was around 30% per process (resident memory > 2 GB). Heap dumps
of these processes showed that "java.lang.ref.Finalizer" occupied > 50% (> 16 MB) of the heap. The finalizer
statistics (Top Components -> system class loader) revealed several objects of the following classes:

* java.util.zip.ZipFile$ZipFileInflaterInputStream (thousands)
* java.util.zip.ZipFile$ZipFileInputStream (thousands)
* java.util.zip.ZipFile (hundreds)
* java.util.zip.Inflater (hundreds)
* java.net.SocksSocketImpl (varying hugely between applications)
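
For illustration (not CSS code, the jar path is hypothetical): the java.util.zip classes only release their
native inflater memory in finalize() if they are never closed explicitly, which is exactly the pattern that
leaves ZipFileInflaterInputStream/ZipFile instances on the finalizer queue. A minimal sketch of the leaky
pattern versus deterministic closing:

    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class ZipFinalizerDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical jar path, only to illustrate the pattern.
            String jar = "/tmp/example.jar";

            // Leaky pattern: neither the ZipFile nor its inflater streams are
            // closed, so their native resources are only released when the
            // Finalizer thread eventually runs their finalize() methods.
            ZipFile leaky = new ZipFile(jar);
            Enumeration<? extends ZipEntry> entries = leaky.entries();
            while (entries.hasMoreElements()) {
                InputStream in = leaky.getInputStream(entries.nextElement());
                in.read(); // stream abandoned -> ZipFileInflaterInputStream piles up
            }

            // Deterministic pattern: try-with-resources closes everything
            // immediately, so nothing is left for the finalizer queue.
            try (ZipFile clean = new ZipFile(jar)) {
                Enumeration<? extends ZipEntry> e = clean.entries();
                while (e.hasMoreElements()) {
                    try (InputStream in = clean.getInputStream(e.nextElement())) {
                        in.read();
                    }
                }
            }
        }
    }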

I have found some hints as to why this behavior might occur:

Finalizers should generally be avoided (see Effective Java, 2nd Edition, Bloch, Item 7: "Avoid finalizers"):
the finalizer thread runs at low priority. If it cannot keep up with the rate of arriving objects (coming from
threads with higher priority), the queue grows -> the heap fills up.
There is possibly also a problem with allocated native memory that is not freed during finalization? And if
finalize() fails/exits with an error, it is not called again...
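
To check whether the finalizer queue really is backing up, one could poll the pending-finalization count from
inside the running VM. A small standalone sketch (my own test idea, not part of CSS) of what such a watchdog
could look like:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class FinalizerBacklogMonitor {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                // Approximate number of objects waiting for finalize() to run;
                // a steadily growing value would support the theory that the
                // Finalizer thread cannot keep up with the allocation rate.
                int pending = memory.getObjectPendingFinalizationCount();
                long heapUsed = memory.getHeapMemoryUsage().getUsed();
                System.out.printf("pending finalization: %d, heap used: %d MB%n",
                        pending, heapUsed / (1024 * 1024));
                Thread.sleep(10_000); // sample every 10 seconds
            }
        }
    }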

The question is where these zip objects come from. The profiler showed several Eclipse plugins
(which are extracted from jars) as well as several SocksSocketImpl instances...

Interesting Bug Report:
http://bugs.java.com/view_bug.do?bug_id=4797189

-> CHANGING VM

Based on these observations I decided to repeat the experiment, but switching the Java VM to Oracle JDK 1.7.0u60.
The observations were a little different:

"top" memory consumption:
custom build: started with 3.0%, was 11.0% after 40h
NSLS-II: started with 3.0%, was 3.8% after 40h
SNS: started with 3.2%, was 13.5% after 40h

I also observed that with this setup (Oracle VM) the heap usage looked more like a sawtooth. When the GC was active,
consumption temporarily dropped a bit. Immediately after these events the heap dumps temporarily looked better
(less zip stuff). So I assume the effect is just slower because the GC on the Oracle VM is more aggressive?
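
To make the sawtooth visible without taking heap dumps, one could simply log heap usage and the per-collector GC
counts over time. Another small standalone sketch (again my own test idea, not part of CSS):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcSawtoothLogger {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                long heapUsed = ManagementFactory.getMemoryMXBean()
                        .getHeapMemoryUsage().getUsed();
                StringBuilder line = new StringBuilder(
                        String.format("heap used: %d MB", heapUsed / (1024 * 1024)));
                // One bean per collector (e.g. young and old generation); rising
                // collection counts together with a "used" value that keeps coming
                // back down would match the sawtooth seen with the Oracle VM.
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    line.append(String.format(", %s: %d collections / %d ms",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime()));
                }
                System.out.println(line);
                Thread.sleep(60_000); // one sample per minute
            }
        }
    }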

Another metric I observed was the "surviving generations". During the two hours I watched it, it increased continually.
When triggering GC manually it dropped a bit. A NetBeans Profiler article states that this metric should stabilize
after a while:

https://netbeans.org/kb/articles/nb-profiler-uncoveringleaks_pt1.html
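
The manual-GC check could also be reproduced programmatically; a tiny sketch (test code only, System.gc() is of
course just a hint to the VM) that compares heap usage and the pending-finalization count before and after a
requested collection:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class ManualGcCheck {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            long before = memory.getHeapMemoryUsage().getUsed();
            int pendingBefore = memory.getObjectPendingFinalizationCount();

            // Request a full collection and a finalization pass; with a real
            // leak the amount that can be reclaimed should shrink over time.
            System.gc();
            System.runFinalization();

            long after = memory.getHeapMemoryUsage().getUsed();
            int pendingAfter = memory.getObjectPendingFinalizationCount();
            System.out.printf("heap used: %d -> %d MB, pending finalization: %d -> %d%n",
                    before / (1024 * 1024), after / (1024 * 1024),
                    pendingBefore, pendingAfter);
        }
    }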

Maybe it has something to do with the native Linux zip libraries that are used?

Has anyone experienced similar behavior? Or any hints?

Thanks
Marcus