Here is our recommendation on how to avoid "Out of Memory" exceptions. The newer JVMs (JRE 6 and 7) default to parallel garbage collectors, which do not work well with jNetPcap's DisposableGC. Because of the large amount of RAM available in today's systems, it is tempting to increase the -Xmx and -Xms parameters to something huge. However, this is counterproductive and is the main reason OOM messages occur in jNetPcap.
What I would suggest is the following:
1) Use a "serial" GC (for example: -XX:+UseSerialGC)
2) Set -Xmx to whatever you would like for Java allocations within the application; that part is fine as-is
3) Set -Xms (the initial, "soft" heap limit) to something fairly low, such as -Xms128m or -Xms256m
4) You can set jNetPcap's native memory limit high with -Dnio.mx=2gb (or whatever you want) while keeping -Dnio.ms low, e.g. -Dnio.ms=128mb
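Putting the four settings together, a launch command would look something like this (the class name and the exact sizes are placeholders; tune them for your application):

```shell
# Serial collector + small initial heap + generous native (nio) limits.
# MyCaptureApp and the specific sizes are placeholders, not recommendations.
java -XX:+UseSerialGC \
     -Xms128m -Xmx1g \
     -Dnio.ms=128mb -Dnio.mx=2gb \
     MyCaptureApp
```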
Below is a more detailed explanation of why the above JVM settings are recommended.
jNetPcap allocates native memory when using PcapPacketHandler or when copying packets manually. This memory is allocated outside the JVM's memory-management scope, so the JVM garbage collectors know nothing about it. Even allocating 100% of the reserved native memory will not, by itself, cause the JVM to start collecting Java object garbage.
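To see the heap/native split in action, here is a small self-contained demo that uses plain NIO direct buffers as a stand-in for jNetPcap's native packet buffers (direct buffers are likewise allocated outside the Java heap, so the effect on the GC is the same):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class NativeVsHeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // Allocate 64 MB *outside* the Java heap. Only the tiny
        // ByteBuffer wrapper objects live on the heap itself.
        List<ByteBuffer> buffers = new ArrayList<>();
        for (int i = 0; i < 64; i++) {
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB each
        }

        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.printf("Heap growth: ~%d KB for 64 MB of native memory%n",
                (heapAfter - heapBefore) / 1024);
        // The heap barely moves, so the GC sees no reason to run -
        // exactly the situation jNetPcap's native packet buffers create.
    }
}
```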
However, this allocated native memory is tied to Java objects. When those Java objects are collected by the JVM GC, they release any native memory they hold at the same time. This is the purpose of the DisposableGC class and its cleanup system thread.
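The general pattern can be sketched with the standard java.lang.ref machinery. This is only a minimal illustration of the idea, assuming a reference queue drained by a cleanup pass; jNetPcap's actual DisposableGC implementation differs in its details:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch of the DisposableGC idea: once the GC proves a Java
 * wrapper object unreachable, a cleanup pass frees the native memory
 * that wrapper was holding. (Not jNetPcap's real implementation.)
 */
public class DisposerSketch {
    static final AtomicLong nativeBytesInUse = new AtomicLong();
    static final ReferenceQueue<Object> queue = new ReferenceQueue<>();
    // Keep the references strongly reachable until processed,
    // and remember how much native memory each one pins.
    static final Map<PhantomReference<Object>, Long> pending = new ConcurrentHashMap<>();

    /** "Allocate" native memory tied to the given Java wrapper object. */
    static PhantomReference<Object> allocate(Object wrapper, long bytes) {
        nativeBytesInUse.addAndGet(bytes);
        PhantomReference<Object> ref = new PhantomReference<>(wrapper, queue);
        pending.put(ref, bytes);
        return ref;
    }

    /** Cleanup pass: free native memory for every wrapper the GC enqueued.
     *  DisposableGC runs the equivalent in a dedicated system thread. */
    static void drainQueue() {
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            Long bytes = pending.remove(ref);
            if (bytes != null) nativeBytesInUse.addAndGet(-bytes);
        }
    }
}
```

The crucial point is visible in the sketch: nothing frees the native bytes until the GC actually enqueues the reference, which is why GC frequency matters so much.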
So the key is to have the JVM GC kick in and actually collect the Java objects that are out of scope and unreferenced. As you might imagine, a large initial heap size (-Xms16g, for example) has the opposite effect. The JVM thinks it has plenty of empty memory, so it leaves those Java objects around, saving time by not collecting them, because it believes it doesn't have to. However, those uncollected objects are holding on to possibly large amounts of native memory and can easily exhaust what has been specifically reserved by jNetPcap.
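A back-of-the-envelope calculation makes the imbalance concrete. All sizes below are assumptions for illustration only: suppose each Java packet wrapper occupies roughly 300 bytes of heap but pins a 64 KB native buffer, with -Dnio.mx=2gb as the native limit:

```java
public class HeapPressureMath {
    public static void main(String[] args) {
        // Assumed sizes, for illustration only.
        long heapPerWrapper   = 300;                     // bytes of Java heap per packet object
        long nativePerWrapper = 64L * 1024;              // bytes of native memory it pins
        long nativeLimit      = 2L * 1024 * 1024 * 1024; // -Dnio.mx=2gb

        long wrappersToExhaustNative = nativeLimit / nativePerWrapper; // 32768
        long heapUsedAtThatPoint = wrappersToExhaustNative * heapPerWrapper;

        System.out.printf("Native space is exhausted after %d packets, while the "
                + "wrappers occupy only ~%d MB of heap%n",
                wrappersToExhaustNative, heapUsedAtThatPoint / (1024 * 1024));
        // ~9 MB of heap: on a 16 GB heap the GC has no reason to run,
        // yet jNetPcap's entire native reservation is already gone.
    }
}
```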
So the key is to set a relatively low -Xms value, which forces the JVM GC to kick in early and clean objects more often.
I know DisposableGC looks like the culprit for OOM messages, because that is where they are thrown, but the underlying cause is that the JVM GCs are not aggressive enough and leave too many (or, with large heap sizes, all) unreclaimed Java objects, thus exhausting the reserved native space. The Java-side native-reference objects are very lightweight and occupy little heap memory, so the JVM GC never gets the memory-pressure signal it needs.