
how can I change the maximum size of the parallel queue?

Henrique
2015-03-04
2015-03-06
  • Henrique

    Henrique - 2015-03-04

    I found the maximum size of the atomic queue (EM_QUEUE_ATOMIC_RING_SIZE) in the file em.h, but I cannot find the maximum size of the parallel queue.
    I need to change this size because I see a lot of dropped packets in the function packet_enqueue() (due to a full queue) when I use a parallel queue.
    I read that OpenEM does not have a specific queue for the parallel queue type (queue_type_parallel), because it uses the scheduling queues directly. This is a big problem for me. How can I solve it?

    /*
     * Queues/rings containing free rings for em_queue_create()/queue_init() to
     * use as q_elem->rings for atomic and parallel-ordered EM queues.
     * Parallel EM queues do not require EM queue specific rings - all events are
     * handled directly through the scheduling queues.
     */


    Last edit: Henrique 2015-03-04
  • Carl Wallén

    Carl Wallén - 2015-03-05

    Hi,

    Only atomic queues use the q_elem->ring buffer to store events. Events in a parallel queue are instead enqueued directly into the scheduling buffers/queues, since event order does not have to be maintained in that case (== a performance gain).

    You can try increasing the scheduling queue size instead, in
    misc/intel/multiring.h

    Original value:
    MRING_SIZE (4*1024)
    Try a higher value:
    MRING_SIZE (8*1024) or even (16*1024)
    This will affect the sched-queue size for all queue types (atomic, parallel, parallel-ordered).
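    The edit above can be sketched as follows. This is a hypothetical stand-alone mirror of the MRING_SIZE define, not the real misc/intel/multiring.h; it also adds a power-of-two sanity check, since ring buffers of this kind typically require power-of-two sizes so that index wrap-around can use a cheap bitmask (an assumption worth verifying against the actual multiring implementation before changing the value).

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical mirror of the define in misc/intel/multiring.h.
     * Increased from the original (4 * 1024) to (8 * 1024). */
    #define MRING_SIZE (8 * 1024)

    /* Power-of-two check: nonzero and no shared bits with (x - 1). */
    #define IS_POWER_OF_TWO(x) (((x) != 0) && (((x) & ((x) - 1)) == 0))

    int main(void)
    {
        /* A non-power-of-two size would likely break ring index masking. */
        assert(IS_POWER_OF_TWO(MRING_SIZE));
        printf("MRING_SIZE = %d events per scheduling queue\n", MRING_SIZE);
        return 0;
    }
    ```

    In the real tree you would edit the define in place and rebuild; the check here only illustrates the constraint to keep in mind when picking a new value.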

    Note that if your data rate is higher than the system can maintain then no amount of buffering will help.

    For some additional I/O performance you can enable "RX_DIRECT_DISPATCH",
    see event_machine/intel/em_packet.h:
    change the define RX_DIRECT_DISPATCH from (0) to (1).
    Be sure you read the comment about queue/event fairness though:
    /*
     * Eth Rx Direct Dispatch: if enabled (=1) will try to dispatch the input Eth Rx event
     * for processing as fast as possible by bypassing the event scheduler queues.
     * Direct dispatch will, however, preserve packet order for each flow and for atomic
     * flows/queues also the atomic context is maintained.
     * Directly dispatching an event reduces the number of enqueue+dequeue operations and keeps the
     * event processing on the same core as it was received on, thus giving better performance.
     * Event scheduling fairness is weakened by enabling direct dispatch as newer events in the system
     * can be dispatched before older events of another queue/flow that are enqueued in the scheduling
     * queues - this is the reason why it has been set to '0' by default. An application that does
     * not care about strict fairness between queues could significantly benefit from enabling this feature.
     */
    #define RX_DIRECT_DISPATCH (0)
    // 0=Off(lower performance, scheduling fairness between queues)
    // 1=On (better performance, weaker scheduling fairness between queues)
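    As a minimal sketch, the toggle looks like this. The snippet below is a hypothetical stand-alone mirror of the define, not the real event_machine/intel/em_packet.h; in practice you would flip the value in that header and rebuild.

    ```c
    #include <stdio.h>

    /* Hypothetical mirror of the define in event_machine/intel/em_packet.h.
     * Flipped from (0) to (1) to bypass the scheduler queues for Eth Rx
     * events, trading scheduling fairness for performance. */
    #define RX_DIRECT_DISPATCH (1)

    int main(void)
    {
    #if RX_DIRECT_DISPATCH
        puts("Rx direct dispatch: on (better performance, weaker fairness)");
    #else
        puts("Rx direct dispatch: off (scheduling fairness between queues)");
    #endif
        return 0;
    }
    ```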

  • Henrique

    Henrique - 2015-03-06

    Thank you so much :)

