It does make sense. I was indeed presuming a non-preemptive execution model - I should have mentioned that; sorry for the confusion! My aim was to make sure I could replicate the approaches I'd used in the past in a non-preemptive context, and I'm satisfied that's the case. I totally agree with your last point - with a preemptive kernel (that would be QK, correct?) there's not much value in the self-posting scheme I was considering for breaking up long tasks. Using a preemptive kernel definitely...
Thanks MMS! After reading about it, I think the Deferred Event pattern covers most of what I'm looking for. Fundamentally, what I'm trying to accomplish is event reordering - completing "important" work before handling "normal" work. In the past I've accomplished that with multiple queues, but I think event deferral (with deferred events serviced on entry to an Idle state) can get me to the same goal. My example of the long-running task already presumed that I'd broken...
I should add, I found this thread from a few years ago with what I think is essentially the same question: https://sourceforge.net/p/qpc/discussion/668726/thread/bb6384d2/ There was a suggestion in there for a workaround, but no native support in the framework. So I'm wondering whether rolling your own behavior is still the only way, or whether the framework has added some kind of support since then?
Hello! I've been studying the QP framework documentation as I evaluate it for possible use in a project I'm working on. Overall it seems like a great set of functionality for specifying system behavior as state machines. There is one concept I've not seen supported in the documentation and examples so far: multiple, prioritized message queues for an AO. At its simplest, a normal queue where most messages go, and a high-priority queue for important messages. All messages in the high-priority...
This appears to be a problem in the Python bindings, not urjtag itself. The urj_pyc_addpart()...
After a little investigation, I found this is a limitation of the Python bindings, not...
No interface to cleanly remove parts (i.e., no remove_part() or clear_parts())
No programmatic way to detect SVF failures.
I should add: Subsequent IR shift operations after the first one are the correct...
Manually specifying the JTAG chain contents leads to incorrect IR shifts.