labview-zmq is terribly slow compared to other ZeroMQ bindings.

crocket - 2013-05-03 (last updated 2019-02-21)
  • crocket

    crocket - 2013-05-03

    I submitted the test results to the mailing list.

    Please view the archives on http://lists.zeromq.org/pipermail/zeromq-dev/2013-May/thread.html

    The topic is "striking difference in performance among ZeroMQ bindings."

  • Martijn Jasperse

    I wouldn't claim that labview-zmq has been performance-optimised, and there are some fundamental reasons to expect lvzmq to be slower than other C-based bindings (the unavailability of zero-copy functionality, for one). Please attach a version of your test code exported for LV2010 so that I can look into it.

  • crocket

    crocket - 2013-05-06

    I pushed the LabVIEW 2010 implementation to https://github.com/crocket/ZeroMQThroughputTest

    Please have a look.
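
    For readers without LabVIEW handy, here is a rough C sketch of this kind of throughput test against libzmq; the PUSH/PULL pattern, port, message size, and count are assumptions, not taken from the repository:

    /* Rough throughput sketch in plain C against libzmq.
     * Assumptions: PUSH/PULL over local TCP, 40000 messages of 1 kB;
     * none of these details are taken from the linked repository. */
    #include <zmq.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const size_t msg_size  = 1024;   /* the 1 kB case */
        const int    msg_count = 40000;

        void *ctx  = zmq_ctx_new();
        void *pull = zmq_socket(ctx, ZMQ_PULL);
        void *push = zmq_socket(ctx, ZMQ_PUSH);
        zmq_bind(pull, "tcp://127.0.0.1:5555");
        zmq_connect(push, "tcp://127.0.0.1:5555");

        char *buf = malloc(msg_size);
        memset(buf, 'x', msg_size);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* Send and receive alternately in one thread, so the PUSH
         * socket never stalls against its high-water mark. */
        for (int i = 0; i < msg_count; i++) {
            zmq_send(push, buf, msg_size, 0);
            zmq_recv(pull, buf, msg_size, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d msgs in %g s (%.0f msg/s)\n", msg_count, secs, msg_count / secs);

        free(buf);
        zmq_close(push);
        zmq_close(pull);
        zmq_ctx_term(ctx);
        return 0;
    }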

  • Martijn Jasperse

    Up until now the development focus has been entirely on reliability and thread control (by no means easy) with less emphasis on efficiency. Many of my original ideas for implementation were based on incomplete understanding of zmq's internals and were changed over time.

    Now that the bindings appear stable on several platforms, it is worth revisiting efficiency. I have therefore recoded the message handling in pure C, which I thought was the most likely candidate for optimisation.

    Running your test code unmodified on my nothing-special development laptop gave the following results:

    • 10b: 12.4817 s (3204.69 msg/s) in v1.3; 0.791045 s (50566 msg/s) in v1.4
    • 1kb: 7.25742 s (5511.6 msg/s) in v1.3; 0.762043 s (52490 msg/s) in v1.4
    • 1024kb: 7.33942 s (5450.02 msg/s) in v1.3; 4.49126 s (8906.19 msg/s) in v1.4

    The results I observed varied between re-runs by up to a factor of ~3, because the slowdown is almost entirely due to the memory manager being inefficient. Repeating the same test in v1.3 took up to 30 s per test.

    Still, the speed-up in the new code is pretty decent. The LabVIEW memory manager is a twitchy beast, so the new code is possibly less stable; further testing is required.

  • crocket

    crocket - 2013-05-08

    On my company computer, the results are:

    • 155,000 messages/second (10 byte)
    • 123,000 messages/second (1 kbyte)
    • 111,000 messages/second (10 kbyte)

    But the results vary somewhat between runs.

    What is it that you refer to as the memory manager?

  • Martijn Jasperse

    The "memory manager" in this instance is LabVIEW's own memory allocation system. For historical reasons, it does not use malloc/new. Whenever data in passed between LabVIEW/C it must be reallocated. Using automatic conversion (v1.3) results in several conversions per message, because LabVIEW errs on the side of doing it more often than necessary for various reasons. Coding the conversions manually avoid unnecessary copies, but it is significantly harder to write (due to lack of documentation) and debug.

    ZMQ is designed around a "zero copy" principle to avoid reallocations, but this is not possible in LabVIEW because a conversion must always take place; I believe this conversion is the bottleneck in this "maximum throughput" scenario.
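
    For contrast, this is what the zero-copy path looks like from plain C, where libzmq can take ownership of a malloc'd buffer instead of copying it (a sketch; socket setup omitted):

    /* Zero-copy send in plain C: zmq takes ownership of the buffer and
     * calls free_fn when the message has been sent, so no copy occurs.
     * LabVIEW data lives in handles owned by its memory manager, so
     * this path is not available to the bindings. */
    #include <zmq.h>
    #include <stdlib.h>

    static void free_fn(void *data, void *hint)
    {
        (void)hint;
        free(data);  /* called by zmq once the buffer is no longer needed */
    }

    int send_zero_copy(void *socket, char *data, size_t size)
    {
        zmq_msg_t msg;
        if (zmq_msg_init_data(&msg, data, size, free_fn, NULL) != 0)
            return -1;
        return zmq_msg_send(&msg, socket, 0);  /* no memcpy of 'data' */
    }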

  • lee

    lee - 2019-02-21

    I couldn't make the zmq example work on a private network (192.168.xxx.xxx),
    though it works without a hitch on the public network.
    Does anyone else have a similar problem?
    I must be doing something wrong, or lvzmq does not fully support all features yet.
    I am using the latest version, "labview-zmq-3.5.1.109.vip", with LabVIEW 2017 64-bit.
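
    For anyone debugging a similar setup, a minimal sketch of the usual first check, binding explicitly and inspecting the return code; the address and port here are placeholders:

    /* Minimal endpoint check; "192.168.1.10" and port 5555 are
     * placeholders. A firewall on the LAN host is a common culprit
     * when the same code works publicly but not privately. */
    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *rep = zmq_socket(ctx, ZMQ_REP);

        /* Bind to a specific private address, or to "tcp://*:5555"
         * to listen on all interfaces. */
        if (zmq_bind(rep, "tcp://192.168.1.10:5555") != 0)
            printf("bind failed: %s\n", zmq_strerror(zmq_errno()));

        zmq_close(rep);
        zmq_ctx_term(ctx);
        return 0;
    }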


