The 2.0 release
Just wanted to post a little bit about the jNetPcap 2.0 feature set and the roadmap. There has been a lot of interest in the analysis feature, which was briefly introduced but then removed from the 1.2 release. We are putting analysis back into the API, and it will be released in August of this year.
The 2.0 release is going to be a major release that will be quite a bit different from the 1.x releases, although the API will not change all that much. Behind the scenes, however, things are changing dramatically.
The first thing to go is the old low-level memory model. Although you will still find a familiar JMemory class, its implementation will be significantly different. Specifically, per user requests, we are taking greater control over how native memory and peering are handled.
Two new methods will be added to JMemory. The first allows an established peer to be unpeered from a memory block. If no peers remain, the memory block may be reused for future allocations; the memory allocation algorithm takes care of the details. With proper peering and subsequent, timely unpeering, it is possible to keep a jNetPcap-based application to an extremely small memory footprint, as memory blocks are efficiently reused.
The second method, JMemory.dispose, is also part of the public API but has a much more drastic effect: it forces all peers of a memory block to be unpeered. If used incorrectly, a user may get a little surprise when a previously peered object suddenly reports a peering exception when accessed. This method will be used implicitly by some of the packet dispatcher handlers, which will forcibly unpeer any remaining peers still attached to memory that libpcap controls, especially when the user's handler method returns. This safely ensures that no references to libpcap-controlled (or other externally controlled) memory remain after return from a callback or dispatch. Of course, if a memory copy was made to a jNetPcap buffer or Java array, that copy will not be affected.
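The peering life cycle described above can be sketched in plain Java. This is a hypothetical illustration, not the actual jNetPcap 2.0 implementation; the class and method names (MemoryBlock, Peer, unpeer, access) are made up for the example. It shows the two behaviors: releasing a single peer makes the block reusable once no peers remain, while a forced dispose unpeers everything so that later access raises a peering-style exception.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the peering model described above.
class MemoryBlock {
    private final List<Peer> peers = new ArrayList<>();
    private boolean reusable = false;

    Peer peer() {                       // establish a new peer to this block
        Peer p = new Peer(this);
        peers.add(p);
        return p;
    }

    void unpeer(Peer p) {               // first method: release a single peer
        peers.remove(p);
        if (peers.isEmpty()) {
            reusable = true;            // block may now be reused by the allocator
        }
    }

    void dispose() {                    // second method: forcibly unpeer all peers
        for (Peer p : peers) {
            p.invalidate();
        }
        peers.clear();
        reusable = true;
    }

    boolean isReusable() { return reusable; }
}

class Peer {
    private MemoryBlock block;

    Peer(MemoryBlock block) { this.block = block; }

    void invalidate() { block = null; }

    MemoryBlock access() {              // fails once forcibly unpeered
        if (block == null) {
            throw new IllegalStateException("peering exception: peer was unpeered");
        }
        return block;
    }
}
```

The key design point mirrored here is that only the block with zero remaining peers becomes eligible for reuse, while dispose trades safety for immediacy by invalidating every outstanding peer at once.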
Further enhancements in terms of memory allocation are new memory model types. jNetPcap will now have the ability to allocate three types of memory for general API use:
- Plain native memory using malloc or glib's slice allocator (a more efficient memory allocation mechanism)
- java.nio.ByteBuffer (direct buffers) used as the allocation mechanism, with Java's garbage collection of ByteBuffer instances handling cleanup
- a regular Java byte[] array that is natively pinned in memory. The array itself wouldn't be very useful on its own, at least not consistently across platforms, but when used behind the scenes for allocation purposes it may be a viable alternative. Because a byte[] plays very nicely with JVM GC algorithms, this may be another way to give the JVM the greatest amount of control over memory allocation and cleanup. This will be an experimental feature, also available in August for testing.
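The three memory models above might sit behind a single allocator interface. The sketch below is purely illustrative and uses only standard Java: the real native path (malloc or glib's slice allocator) would live in JNI, so a direct ByteBuffer stands in for it here, and the byte[] model's native pinning is likewise only noted in a comment. All type names are assumptions, not jNetPcap API.

```java
import java.nio.ByteBuffer;

// Hypothetical common interface over the three allocation models.
interface MemoryModel {
    ByteBuffer allocate(int size);
}

// Model 1 stand-in: native memory. In the real implementation this would be
// malloc or g_slice_alloc via JNI; a direct buffer approximates it in pure Java.
class NativeModel implements MemoryModel {
    public ByteBuffer allocate(int size) {
        return ByteBuffer.allocateDirect(size);
    }
}

// Model 2: a java.nio ByteBuffer, cleaned up by the JVM's garbage collector.
class NioModel implements MemoryModel {
    public ByteBuffer allocate(int size) {
        return ByteBuffer.allocate(size);
    }
}

// Model 3: a plain byte[] array. It plays well with JVM GC; the native side
// would pin it (e.g. via GetPrimitiveArrayCritical) while filling packet data.
class ArrayModel implements MemoryModel {
    public ByteBuffer allocate(int size) {
        return ByteBuffer.wrap(new byte[size]);
    }
}
```

Putting the three behind one interface is what would let the rest of the API stay unchanged while the allocation strategy is swapped, which matches the statement that the public API stays the same despite the new memory model.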
Natively allocated memory is now reference counted (from both the Java and native sides). This allows extended structures to be allocated from both Java and native space and co-exist, which was a major deficit of the 1.x native memory manager. This enhancement allows analysis and extended information to be attached and tracked extensively. In other words, Java and native space are now fully integrated for advanced features, which is the reason analysis is now being fully integrated again.
The DisposableGC algorithm that performs the actual cleanup is almost unchanged in 2.0, as it was designed with the above goals in mind (in phase 1 of the redesign). Instead of actually making a deallocation call, DisposableGC decrements the native memory reference counter by one. This may or may not result in memory actually being released. In addition, the memory allocation objects hook into DisposableGC events and have the option of reusing blocks of memory instead of deallocating them.
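The decrement-and-maybe-reuse behavior can be sketched as follows. This is a minimal illustration of the reference-counting idea, not the actual DisposableGC code; RefCountedBlock, BlockPool, and onDisposed are invented names for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical reference-counted native memory block.
class RefCountedBlock {
    private int refCount;

    RefCountedBlock(int initialRefs) { this.refCount = initialRefs; }

    /** Decrement by one; returns true only when the last reference is gone. */
    boolean release() {
        return --refCount == 0;
    }
}

// Hypothetical allocator that hooks into DisposableGC-style cleanup events.
class BlockPool {
    private final Deque<RefCountedBlock> free = new ArrayDeque<>();

    // On a cleanup event the count drops by one; only when it reaches zero
    // is the block recycled into the free list instead of being deallocated.
    void onDisposed(RefCountedBlock block) {
        if (block.release()) {
            free.push(block);
        }
    }

    int freeCount() { return free.size(); }
}
```

The point of the hook is exactly what the paragraph describes: a cleanup event does not imply a free; memory is only released (or, here, recycled) once every Java-side and native-side reference has let go.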
As I stated above, although these are significant implementation changes, the public API stays pretty much exactly the same.
I've talked about this before, but jNetPcap will be split into two different parts/modules: the first a pure libpcap wrapper, the second the core-protocols module. The two modules are designed to work together or completely independently of each other. This means that upgrades to libpcap's API function calls, and the corresponding changes to the jNetPcap wrapper, will be more frequent and seamless, as changes to that part of the API have no effect on the protocols module, and vice versa.
The protocols themselves are being broken up into separate modules. The main core-protocols module will be required by all other protocol modules; it will contain all the common and implementation code that the other protocol modules rely on. All core protocols will also be provided by that core module.
Additional protocol modules will be released, split along the lines of protocol suites. For example, an SS7 protocol suite, which is currently under development, will be provided as a separate module, and so forth. This will also allow parts of the ever-expanding and complex protocol suites to be upgraded and added without affecting other protocols, which is especially important for stability: a client using one protocol suite will not be affected by changes to another. The size of these modules also comes into play, as only the necessary and required code will have to be downloaded and distributed.
Lastly, each protocol module will be made up of protocol header definitions, a packet scanner/dissector, and full analysis, all per protocol suite. In other words, a protocol suite will contain the decoder, headers, and analysis objects all in one package.
The 2.0 release is being groomed to fully integrate not only with the jNetPcap libpcap wrapper but also with the http://jnetstream.com project. jNetStream uses a higher-level and easier-to-use API than the existing jNetPcap API does. The jNetPcap wrapper, as a standalone module, will be a plugin to jNetStream for live packet capture capabilities. The protocols module will also integrate seamlessly into jNetStream, just as it does with jNetPcap, for packet and protocol decoding and full analysis of each protocol.
Performance-wise, we are shooting for 500 Kpps live captures using the stock libpcap library. Analysis will require additional threads and CPUs/cores, but on a 4-core system we are also targeting 500 Kpps with full analysis. In other words: 500 Kpps on a single CPU thread for packet decode only, and 500 Kpps with full analysis using additional CPUs/cores. Exact benchmarks will be provided at the time of the release, but this is our goal. Hardware-accelerated systems and systems with greater CPU power may achieve much higher throughputs.
These are the main and significant changes planned for the very near future. All feedback is welcome.