New JPacketBufferHandler

Mark Bednarczyk

I'm thinking about adding a new type of handler (possibly in rev 1.4). Performance is always an issue. This new handler would be more efficient than PcapPacketHandler but less efficient than JBufferHandler. JBufferHandler is the most efficient of the existing handlers because it does no packet copies or processing; it is simply peered with the libpcap-provided packet buffer. The drawback is that the packet must be processed immediately inside the loop and cannot be stored on a queue unless it is copied.

Since it is a common requirement to store packets on a queue, list or other collection so that captured packets can be processed later, possibly in another thread, the packet data must still be copied somewhere temporarily even when the fast JBufferHandler is used.
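For comparison, this is roughly what that per-packet copy looks like today with JBufferHandler (a rough sketch against the current 1.3-style API; the exact package names and the deep-copying PcapPacket(PcapHeader, JBuffer) constructor should be double-checked against the javadocs):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.jnetpcap.JBufferHandler;
import org.jnetpcap.Pcap;
import org.jnetpcap.PcapHeader;
import org.jnetpcap.nio.JBuffer;
import org.jnetpcap.packet.PcapPacket;

public class CopyPerPacketExample {

  public static void main(String[] args) {
    StringBuilder errbuf = new StringBuilder();
    Pcap pcap = Pcap.openLive("eth0", 64 * 1024, Pcap.MODE_PROMISCUOUS, 1000, errbuf); // "eth0" is just an example device
    if (pcap == null) {
      System.err.println(errbuf);
      return;
    }

    Queue<PcapPacket> queue = new ConcurrentLinkedQueue<PcapPacket>();

    JBufferHandler<Queue<PcapPacket>> handler = new JBufferHandler<Queue<PcapPacket>>() {

      public void nextPacket(PcapHeader header, JBuffer buffer, Queue<PcapPacket> queue) {
        // The peered header and buffer are only valid inside this callback,
        // so a deep copy is required before the packet can be queued.
        queue.offer(new PcapPacket(header, buffer));
      }
    };

    pcap.loop(1000, handler, queue); // one JNI round trip and one copy per packet
    pcap.close();
  }
}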

This is where the new handler comes in. It would copy both the pcap headers and packet data into a large user buffer as efficiently as possible until the buffer is filled, and only then dispatch to the user. This type of buffering would be best suited for high packet rates and non-interactive traffic, although the normal pcap timeout and buffer settings would still apply, so interactive traffic could be accommodated as well.

The user would call a specialized Pcap.loop or Pcap.dispatch method, just as is done today with the other handler types, and specify the size of the user buffer. jNetPcap would then allocate a new type of buffer object, one which stores a series of pcap headers along with their packet data, and dispatch that buffer object only when it becomes full. The buffer would be filled by the native method as fast as possible, without entering Java space or doing any behind-the-scenes peering on a per-packet basis. The handler method would be invoked only once the buffer is full, and with no per-packet peering.

Here is a mockup code example of how this might look:

Pcap pcap = ...; // Open pcap handle

JPacketBufferHandler<BlockingQueue<JPacketBuffer>> handler =
    new JPacketBufferHandler<BlockingQueue<JPacketBuffer>>() {

  public void nextBuffer(JPacketBuffer buffer, BlockingQueue<JPacketBuffer> queue) {
    queue.offer(buffer); // the buffer is suitable for storing on a queue
  }
};

int bufferSize = 1024 * 1024; // Request buffering in 1Mb buffers
BlockingQueue<JPacketBuffer> queue = ...; // Our storage queue, drained with take() below
pcap.loop(bufferSize, handler, queue);

Nothing too extraordinary here except the frequency with which the nextBuffer method would be invoked. In this example we are asking for 1Mb buffers, which means that nextBuffer would be invoked roughly once every 500 to 10,000 captured packets, depending on the average packet size, before the 1Mb buffer fills up. Also notice that we could be allocating a huge amount of memory here, 1Mb at a time, so it is very important to drain the queue quickly as well.
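One simple way to keep that memory bounded (just a usage suggestion, not part of the proposed API) is to use a bounded queue from java.util.concurrent, so the capture side can detect when consumers fall behind instead of allocating without limit:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// At most 64 buffers of 1Mb each may be queued, i.e. roughly 64Mb of captured
// data waiting to be consumed. When the queue is full, offer() in the handler
// returns false and the capture thread can count the drop instead of blocking.
BlockingQueue<JPacketBuffer> queue = new ArrayBlockingQueue<JPacketBuffer>(64);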

The second part of the application might use another thread to drain the data from the queue and release the memory we are allocating so furiously. Of course, the queue could also be consumed by multiple threads, leveraging the processing power available today in multi-core or multi-chip machines.

A queue consumer thread might look like this.

public void run() {
   BlockingQueue<JPacketBuffer> queue = ...; // Shared with the capture thread

   try {
     JPacketBuffer packets = queue.take(); // Take the first buffer containing multiple packets

     System.out.printf("packets in buffer = %d\n", packets.getPacketCount());

     Iterator<JPacket> it = packets.iterator();
     while (it.hasNext()) {
       JPacket packet = it.next();
       // Process packet
     }

     /*
      * Or more conveniently with the java Iterable form:
      */
     for (JPacket packet : packets) {
       // Process packet
     }
   } catch (InterruptedException e) {
     Thread.currentThread().interrupt(); // Shutting down
   }
}

Since the buffer in our example can contain up to 10,000 small packets, that is an awful lot of work offloaded from the capture thread onto the queue consumer thread, or multiple consumer threads. This leaves the capture thread free to pack packets into the JPacketBuffer as efficiently as possible.
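For example, several consumers could drain the same queue in parallel with a plain ExecutorService (again only a usage sketch built around the proposed JPacketBuffer and JPacket types from above):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final BlockingQueue<JPacketBuffer> queue = ...; // Shared with the capture thread

ExecutorService consumers = Executors.newFixedThreadPool(4); // e.g. one per core
for (int i = 0; i < 4; i++) {
  consumers.execute(new Runnable() {
    public void run() {
      try {
        while (true) {
          JPacketBuffer packets = queue.take(); // Blocks until a full buffer arrives
          for (JPacket packet : packets) {
            // Decode, filter or store the packet here
          }
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // Shutting down
      }
    }
  });
}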

Behind the scenes, the native method would do the following when Pcap.loop is executed (a conceptual sketch follows the list):

  1. Set up the pcap_loop or pcap_dispatch call and wait in the native packet handler
  2. On packet arrival, check whether the user packet buffer still has room for the incoming packet
  3. Copy the incoming packet header and data natively into the buffer as efficiently as possible
  4. If the user packet buffer is full, dispatch the full buffer to the user and allocate a new one
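Expressed as Java-like pseudocode, purely to illustrate the flow (the real logic would live in the native pcap callback written in C, and helper names such as hasRoomFor, append and allocateBuffer are made up for this sketch):

// Conceptual only: the actual logic runs natively inside the pcap callback.
void onNativePacket(PcapHeader header, JBuffer data) {
  if (!currentBuffer.hasRoomFor(header, data)) { // hypothetical helper
    handler.nextBuffer(currentBuffer, user);     // the only entry into Java
    currentBuffer = allocateBuffer(bufferSize);  // the only new peering
  }
  currentBuffer.append(header, data);            // a plain native memory copy
}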

The only peering happens when a new buffer is created, and the only entry into the Java world is to dispatch a full buffer.

In conclusion, I think this type of buffer would provide the most efficient way to implement a multi-threaded packet capture and consumer application. It greatly limits the Java overhead of peering Java objects with native memory and of entering the Java world from the capture thread. All the peering, Java entry and packet processing work can be offloaded onto other threads, which do not interfere with packet capturing, allowing maximum capture performance.

It would be optimal if we could make libpcap write packets directly into our user memory, such as directly into a user buffer, which would avoid any packet copies altogether. That may become possible in the future on certain platforms.

I think this will be a good addition to the existing packet handlers.

Sly Technologies, Inc.
http://slytechs.com

Mark Bednarczyk

Feature #2907504 is now officially being developed. It will be included in the next release, 1.4, and onward.

The JPacketBufferHandler and the new jNetPcap container type, JPacketBuffer, have been developed and are working. I'm testing performance and efficiency right now.

Sly Technologies, Inc.
http://slytechs.com

Mark Bednarczyk

This is checked in. I will be releasing it shortly as 1.4.dev1.

Sly Technologies, Inc.
http://slytechs.com

Mark Bednarczyk

Although this feature has been implemented, it will not be released as part of the official API. The reason is that we are moving away from dynamic memory allocation as an architectural paradigm; appropriately sized, pre-allocated ring buffers will be used instead.

The feature code will instead be moved to the tests/java1.5 directory, where it will remain available for those who still want to use it this way. Ring-buffer based algorithms are much better suited to high packet rates and enormous amounts of data. The user may ultimately choose to copy data, or even entire packets, out of the ring, but that is not something that will happen at the lower levels of the jNetPcap OS APIs.
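To illustrate the general direction (my own rough sketch, not the actual jNetPcap implementation), a pre-allocated ring of fixed-size slots avoids any per-capture allocation: the capture side claims an empty slot, fills it and publishes it, while consumers return slots when they are done.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical illustration of the pre-allocated ring-buffer idea using two
// queues of reusable byte[] slots: "free" holds empty slots, "filled" holds
// slots waiting for a consumer. Nothing is allocated after start-up.
public class PacketSlotRing {

  private final BlockingQueue<byte[]> free;
  private final BlockingQueue<byte[]> filled;

  public PacketSlotRing(int slots, int slotSize) {
    free = new ArrayBlockingQueue<byte[]>(slots);
    filled = new ArrayBlockingQueue<byte[]>(slots);
    for (int i = 0; i < slots; i++) {
      free.add(new byte[slotSize]); // all memory pre-allocated up front
    }
  }

  public byte[] claim() throws InterruptedException {
    return free.take(); // capture side: get an empty slot, blocks if consumers lag
  }

  public void publish(byte[] slot) {
    filled.add(slot); // capture side: hand a filled slot to the consumers
  }

  public byte[] poll() throws InterruptedException {
    return filled.take(); // consumer side: wait for the next filled slot
  }

  public void release(byte[] slot) {
    free.add(slot); // consumer side: return the slot for reuse
  }
}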

Sly Technologies, Inc.
http://slytechs.com
