The VI documentation mentions that an infinite timeout is potentially undesirable, but it doesn't say why. I would expect a poller with infinite timeout to be more desirable (on a non-RTOS) than one that strokes the queues every few ms. Is there a good reason for always setting timeout = 0 in ZMQ and implementing a timeout in the application code instead?
Poll() is blocking and cannot be interrupted. Infinite timeout would cause the LabVIEW process to hang indefinitely unless the poll receives an event. This is also problematic if an error occurs. Using a large finite timeout would have similar problems, though a small finite timeout is probably fine, and would probably work better if your data receive rate is high. I've re-enabled the timeout input so it is up to the user.
Is poll() interrupted when the socket gets destroyed (with LINGER == 0)? If so, I could see using a general-purpose error handling strategy of "kill the socket and recreate it".
Actually, that causes a C-level assertion failure that crashes the program; I wrote a test program in C because I was unsure. ZMQ is, by design, not multithread-safe: it is illegal to close a socket in use by a blocking call in another thread; that's why there's an elaborate system to handle aborts and interrupts.
It is safe to term() the context, which will result in ETERM from any blocking code, but that will destroy all sockets created from that context, and it is considered bad design to continually create/destroy contexts.
it is illegal to close a socket in use by a blocking call in another thread;
Given that LV manages thread pooling on its own by "clumping" the block diagram, how does that affect any LV code? I wrote a trivial PUB-SUB example using all static VIs, and it's hanging on the call to zmq_term.vi when I set "reap? = False". I call zmq_close.vi on both sockets before calling zmq_term.vi. Is this being caused by the socket handle getting used in multiple threads?
Last edit: Stobber 2013-10-21
Any blocking calls block at the C level, not the LabVIEW level, so LabVIEW considers that thread "busy" and continues executing the diagram in other thread(s). That's part of why LabVIEW is so nice to use, but it can easily violate the assumptions of linearly executing code.
Can you attach your basic example? Provided all sockets belonging to that context were closed before zmq_term it should work. Setting Reap? to True causes automated clean-up at the C-level, so if that works it implies something has gone astray in your diagram.
I started a new topic for it: https://sourceforge.net/p/labview-zmq/discussion/general/thread/ce7a2c76/#1cf4