Thread: [Embedlets-dev] Re: Device polling and timing
From: Andrzej J. T. <an...@ch...> - 2003-02-18 16:17:02
Chris said:

> Agreed, if there are time critical activities that need to have
> uninterrupted attention, such as receiving bits from a serial stream, then
> the JAPL needs to maintain timing control for the duration of the critical
> event. At a higher level, where the output of the JAPL needs to integrate
> with other processes, the embedlet(s) should have the option to control
> the timebase of the system. It is only at the embedlet level that the
> overall timing requirements are defined. This may be based on the minimum
> Nyquist sampling/update rate of other inputs or the outputs, OR client
> application reporting requirements. It could be on a timeframe of
> milliseconds to days or weeks. In this case the JAPL may be the resource
> hog if allowed to run on at a fixed, high rate.

Makes a lot of sense to me. One way to help determine where the responsibility lies is to look at whether the timing issue is hardware or application related.

If you are producing a stream of pulses on a pin, then the timing is hardware based, and likely should be handled inside of the JAPL implementation. The same applies if a device needs regular attention to function correctly (e.g. to avoid overflowing stream buffers).

If the application specification says that you need to take a temperature reading every 5 minutes, that is an application requirement, and should be handled using an Embedlet and the Timer Service. The JAPL driver should not be involved (except when its API to get the temperature gets called).

I'm sure there are some grey areas... but most cases should fall into hardware or application requirements fairly easily, which will then point to where they should be implemented.

If certain device operations are resource intensive (per Brill's comment), then this should be documented clearly for the particular JAPL/Device implementation. I'm not sure it will make sense to complicate the initial implementations by having the device driver level try to manage such restrictions.
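To make the hardware-vs-application split concrete, here is a minimal Java sketch of the temperature example: the JAPL driver stays passive (no internal timer), while the embedlet owns the 5-minute application schedule. All names here (TemperatureDevice, TemperatureEmbedlet, onTimerTick) are illustrative assumptions, and java.util.Timer merely stands in for the Timer Service; none of this is a defined JAPL API.

```java
// Assumed JAPL-style contract for a temperature device: the driver only
// handles the hardware protocol, and has no timing logic of its own.
interface TemperatureDevice {
    double readCelsius();
}

// Stand-in driver so the sketch is runnable.
class FakeThermometer implements TemperatureDevice {
    public double readCelsius() { return 21.5; }
}

// The embedlet asks the container's timer for periodic callbacks;
// the 5-minute period is application configuration, not driver logic.
class TemperatureEmbedlet {
    private final TemperatureDevice device;
    private volatile double lastReading;

    TemperatureEmbedlet(TemperatureDevice device) { this.device = device; }

    // Called by the container's timer (application-level timing).
    void onTimerTick() { lastReading = device.readCelsius(); }

    double lastReading() { return lastReading; }

    public static void main(String[] args) {
        TemperatureEmbedlet e = new TemperatureEmbedlet(new FakeThermometer());
        java.util.Timer timer = new java.util.Timer(true);
        timer.schedule(new java.util.TimerTask() {
            public void run() { e.onTimerTick(); }
        }, 0, 5 * 60 * 1000);   // period would come from embedlet config
        e.onTimerTick();        // drive one tick directly for the demo
        System.out.println("last reading: " + e.lastReading());
    }
}
```

The point of the shape: swapping in a real 1-Wire thermometer driver changes only the TemperatureDevice implementation, never the embedlet's timing code.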
Let's keep it simple at first, and then see how it evolves.

One key issue I see is that stringent device-level timing will invariably require threads to manage. That's a bit of a problem on platforms that do not provide threads. Using interrupts might be a potential workaround in some cases.

> The embedlet/JAPL timing specifications will need to be carefully
> considered in any mix-and-match design. If one embedlet is demanding
> resources or performance beyond those available, the system is likely to
> fail at some point regardless of whether a particular JAPL or embedlet
> manages all of its internal timing needs.

Yup... and this would likely produce the symptom of infinitely growing event queues, so it could be detectable in some circumstances. One of the drawbacks of event/message based systems is that the event queues are the "buffer" between processes that run at different speeds, and if one of those processes becomes a bottleneck, all hell can break loose. It's a tough problem to solve... but we can mitigate it by making the Event Manager configurable (e.g. max event queue sizes, warning levels and such). But again, let's start simple and see where we end up.

Brill said:

> In the case of a timer, of course the embedlet system is going to do the
> configuration, however it will *not* have direct control over the timer...
> it must control the timer through the JAPL contract.

The key question here is whether the device/JAPL implementation should manage a timer or not. Unless it's absolutely necessary (due to stringent hardware requirements), I would say that JAPL devices should not implement any internal timers in most cases. See comments above. If they do need an internal timer, then of course it should be managed through the JAPL API/Contract.

> Oh, I don't mean to say that the JAPL has to do this.. it only needs to know
> the contract that will allow it to configure the system, listen for events,
> etc.

Not sure what you mean by "events" in this context, Brill?
If you mean hardware events (interrupt, receive buffer full, etc.) then I have no problem with this. However, JAPL devices should not know anything about Embedlet events and should not be listening for them.

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
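The configurable Event Manager idea raised above (max queue sizes plus warning levels, so a bottlenecked consumer is detectable instead of the queue growing forever) could be sketched roughly as follows. The class name, limits, and warning mechanism are illustrative assumptions, not a defined Embedlets API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A bounded event queue: warns before it fills, drops events at the hard
// limit rather than growing without bound.
class BoundedEventQueue<E> {
    private final Deque<E> events = new ArrayDeque<>();
    private final int warnSize;
    private final int maxSize;

    BoundedEventQueue(int warnSize, int maxSize) {
        this.warnSize = warnSize;
        this.maxSize = maxSize;
    }

    // Returns false (event dropped) when the hard limit is hit.
    synchronized boolean offer(E event) {
        if (events.size() >= maxSize) return false;
        if (events.size() >= warnSize) {
            // A real Event Manager might raise a diagnostic event here.
            System.err.println("warning: event queue at " + events.size());
        }
        events.addLast(event);
        return true;
    }

    synchronized E poll() { return events.pollFirst(); }

    synchronized int size() { return events.size(); }
}
```

Whether to drop, block, or reject-with-error at the limit is a policy choice the container configuration could expose; dropping is shown only because it is the simplest to sketch.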
From: Andrzej J. T. <an...@ch...> - 2003-02-18 22:04:31
Brill points out:

> I think you have to assume that *all* devices are resource intensive... of
> course, this also depends on your definition of "resource intensive". What
> I'm talking about is that no matter how you slice it, if you're talking
> (over 1-Wire for instance) to a peripheral, the whole 1-Wire system will
> be tied up until that comm is done.. and multiple requests can only happen
> in sequence. To me, that's resource intensive.

There are a couple of ways to deal with that situation. Commonly, the Embedlet would request the start of such a transfer (1-Wire) using a method exposed by the device driver in JAPL, but then the device driver would be totally responsible for the completion of the request, and for declining or queuing other requests. This basically presumes that the driver is running in its own thread, or has interrupt callbacks registered so it will be pinged when things happen (next byte required, buffer empty, transfer complete, etc.). This might require a low-level timer if the physical device needs to be polled or is timing dependent (e.g. bit-banging various protocols).

How the Embedlet knows that the operation is complete can be handled in a few different ways. The JAPL interface could expose a status method (isDone()) that the Embedlet could poll at its convenience (note... this might mean there are two timers going: a low-level device implementation timer and a higher-level, more granular Embedlet polling timer). Nothing wrong with this approach. Or the driver could expose an interrupt-based callback where it would tap the Embedlet on the shoulder (with a callback or JAPL event propagation) when the operation completed.

What if a different embedlet tries to issue a different request while the first one is still in progress (and requests have to be serial in nature)?
There are a number of options... the JAPL interface could throw a DeviceBusyException plus provide a status method (so the Embedlet could avoid issuing the request and getting the exception), or it could queue the request for later (though queuing is probably not worth the effort in most cases).

In all cases, the Embedlet should neither care about nor see the internal implementation details that the device driver is using. It should also not care by default whether the operation is resource intensive... if it's a long-running operation that has to be handled serially, the driver should not allow the Embedlet to do anything to contravene that. This, of course, should be noted in the documentation for that particular device/driver, so that the Embedlet programmer will take it into account (they won't try to re-issue a request till the first one is done, even though the driver would prevent this anyway). At a code level, resource intensiveness is not something an Embedlet would typically care about, nor should it. So I guess I fail to understand your point, Brill.

> The Embedlet architecture should not know or care about how the underlying
> system manages its resources. It may use interrupts, or threads, or whatever
> else it has to complete the tasks requested of it by an embedlet.

We're in violent agreement again!

> the JAPL impl might manage the timer, but the embedlet system will have no
> say in how it's done, except to configure it through the JAPL. Remember, JAPL
> is simply a contract, not an API of executable code. The embedlets *can't*
> know how to manipulate some proprietary timer (or other resource) on the
> processor, unless the embedlet is tied to the specific processor (which I
> think we're trying not to do).

> Also, I don't think you can specify what internal resources the JAPL impl
> will be using... a lot of things *need* the hardware timers. For instance, a
> lot of bit-banged protocol impls use the timers and/or interrupts.
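The DeviceBusyException-plus-status-method option above could look roughly like this in Java. It is a sketch under assumptions: the names DeviceBusyException, isDone, startTransfer, and transferComplete are made up for illustration and are not part of any agreed JAPL contract.

```java
// Hypothetical exception an embedlet sees if it issues a second request
// while a serial-only bus (e.g. 1-Wire) is still busy.
class DeviceBusyException extends Exception {
    DeviceBusyException(String msg) { super(msg); }
}

// Driver that owns completion of the request and declines overlapping ones.
class OneWireDriver {
    private volatile boolean busy = false;

    // Embedlets may poll this at their own (coarser) rate to avoid
    // triggering the exception at all.
    boolean isDone() { return !busy; }

    // Starts a transfer; the driver alone is responsible for finishing it.
    synchronized void startTransfer(byte[] data) throws DeviceBusyException {
        if (busy) throw new DeviceBusyException("1-Wire bus in use");
        busy = true;
        // ... low-level, timing-critical work (own thread, interrupt
        // callbacks, or a low-level timer) would proceed from here ...
    }

    // Invoked by the driver's own timer or interrupt callback on completion.
    void transferComplete() { busy = false; }
}
```

Note the two layers of timing this implies: the driver's internal mechanism driving the transfer, and the embedlet's own (much coarser) polling of isDone(), exactly the two-timers situation described above.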
Of course... this is just good OO design with encapsulation. But it does not require that all timers be implemented in the JAPL layer. Polling timers at the Embedlet level make a lot of sense too, since some polling may be application (and not hardware) defined. I think you are preaching to the converted. ;-)

> I mean that the JAPL, and its underlying code, don't need to listen for
> Embedlet events... however, an embedlet might want to listen for JAPL events.

I thought that was what I said. ;-) Good design means that lower-level constructs do not know about higher-level ones (e.g. JAPL does not know about Embedlets), though the reverse is not the case (Embedlets can and will have to know about JAPL devices).

> From my point of view, the JAPL impl should know what it can and
> can't do, and be able to tell the embedlet to piss off if it's abusing the
> peripheral in some way. It might also simply block the call (which will be
> happening anyway in a threaded environment) or whatever other method is
> relevant in the context of the peripheral.

Yup... just like I outlined in the examples earlier in this email. I was not suggesting anything otherwise.

> I see this as essential to ensuring the processor is stable, regardless of
> what some fool (no accusations) does to the embedlet configuration... from
> the user perspective, the embedlet "server" is just a black box. They
> wouldn't have a clue about the consequences of a particular configuration,
> nor should they have to be concerned that they would take down the server
> with their own bad/incorrect code.

Well... this is a bit of a pipe dream. Due to the nature of embedded systems, some "leakage" of hardware knowledge (e.g. resource intensiveness of the devices, etc.) is probably gonna happen. When you write Servlets, you have to know something about HTTP, otherwise you'll design a web application that sucks big time. Same with devices and Embedlet Container configuration.
Users will (eventually) have to understand some of these considerations... so it behooves us to document clearly and obviously what those might be, to help them. Not everything can be hidden and encapsulated totally (nor should it). It takes a while to understand a new container paradigm and to absorb its quirks... that applies to almost any form of software development. I don't think Embedlet Containers will be any different.

On an embedded machine, without the protection of virtual memory management, it is easy to write code that takes down the container. There is no way around that, except to highlight the do's and don'ts for our environment and provide some guidelines. This is especially important in event-driven systems, where runtime behaviour is dynamic and not always deterministic. It's also trivial to write an infinite loop that sucks processor power... and in a non-threaded environment (which some of our platforms might be) you're history.

I understand your concerns, but current techniques won't prevent this kind of thing from happening. Our best bet is to do what we can (and what is simple) to prevent obvious problems (e.g. don't let a developer submit parallel device requests when only serial requests are supported), and to provide examples, tutorials, guides and conventions to help them avoid the inevitable problems.

And besides, isn't that what testing is for? Any reputable developer would test their Outpost application extensively to shake out such issues prior to production deployment. ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
From: Brill P. <bri...@ro...> - 2003-02-19 05:12:32
> How the Embedlet knows that the operation is complete can be handled in a
> few different ways. The JAPL interface could expose a status method
> (isDone()) that the Embedlet could poll at its convenience (note... this might
> mean there are two timers going: a low-level device implementation timer and
> a higher-level, more granular Embedlet polling timer). Nothing wrong with this
> approach.

I like that... let's add it to the main peripheral interface.

- Brill Pappin
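Putting isDone() on the main peripheral interface, as suggested here, might look like the following. The interface shape (Peripheral, configure) is purely hypothetical; only the idea of a shared completion-status method comes from the thread.

```java
// Assumed common JAPL peripheral contract: every device exposes its
// completion status the same way, so embedlets can poll uniformly.
interface Peripheral {
    void configure(String key, String value);
    boolean isDone();   // true when no operation is in progress
}

// Example device implementing the common contract.
class PulseGenerator implements Peripheral {
    private volatile boolean running = false;

    public void configure(String key, String value) {
        // a real driver would store pulse width, rate, pin, etc.
    }

    public void start() { running = true; }
    public void stop()  { running = false; }

    public boolean isDone() { return !running; }
}
```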
From: Brill P. <bri...@ro...> - 2003-02-18 17:32:50
> If certain device operations are resource intensive (per Brill's comment), then
> this should be documented clearly for the particular JAPL/Device
> implementation. I'm not sure it will make sense to complicate the initial
> implementations by having the device driver level try to manage such
> restrictions. Let's keep it simple at first, and then see how it evolves.

I think you have to assume that *all* devices are resource intensive... of course, this also depends on your definition of "resource intensive". What I'm talking about is that no matter how you slice it, if you're talking (over 1-Wire for instance) to a peripheral, the whole 1-Wire system will be tied up until that comm is done.. and multiple requests can only happen in sequence. To me, that's resource intensive.

> One key issue I see is that stringent device-level timing will invariably require
> threads to manage. Bit of a problem on platforms that do not provide threads.
> Using interrupts might be a potential workaround in some cases.

The Embedlet architecture should not know or care about how the underlying system manages its resources. It may use interrupts, or threads, or whatever else it has to complete the tasks requested of it by an embedlet.

> > In the case of a timer, of course the embedlet system is going to do the
> > configuration, however it will *not* have direct control over the timer...
> > it must control the timer through the JAPL contract.
>
> The key question here is whether the device/JAPL implementation should
> manage a timer or not. Unless it's absolutely necessary (due to stringent
> hardware requirements) I would say that JAPL devices should not implement
> any internal timers in most cases. See comments above. If they do need an
> internal timer then of course this should be managed through the JAPL
> API/Contract.

The JAPL impl might manage the timer, but the embedlet system will have no say in how it's done, except to configure it through the JAPL.
Remember, JAPL is simply a contract, not an API of executable code. The embedlets *can't* know how to manipulate some proprietary timer (or other resource) on the processor, unless the embedlet is tied to the specific processor (which I think we're trying not to do).

Also, I don't think you can specify what internal resources the JAPL impl will be using... a lot of things *need* the hardware timers. For instance, a lot of bit-banged protocol impls use the timers and/or interrupts.

> > Oh, I don't mean to say that the JAPL has to do this.. it only needs to know
> > the contract that will allow it to configure the system, listen for events,
> > etc.
>
> Not sure what you mean by "events" in this context, Brill? If you mean
> hardware events (interrupt, receive buffer full, etc.) then I have no problem
> with this. However, JAPL devices should not know anything about Embedlet
> events and should not be listening for such.

I mean that the JAPL, and its underlying code, don't need to listen for Embedlet events... however, an embedlet might want to listen for JAPL events.

I think it's vital to separate the low-level stuff from the embedlet stuff... a good example is a server... you may have a very robust servlet engine running on it, but the servlet engine doesn't get a whole lot of say in how the server manages its resources. We have the same situation here, and the "Peripherals" simply become Objects that the embedlet can use to do its job, based on the JAPL contract it knows about (which is common to all JAPL implementations).

- Brill
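The dependency direction both posters agree on (JAPL never knows about embedlets; embedlets may listen for JAPL events) can be sketched with a listener interface that is defined at the JAPL layer. All names here (JaplEventListener, SerialPortDriver, SerialEmbedlet) are illustrative assumptions, not part of any agreed contract.

```java
// Defined at the JAPL layer: the only type the driver ever sees.
interface JaplEventListener {
    void onDeviceEvent(String event);   // e.g. "RX_BUFFER_FULL"
}

// Lower layer: knows its own listener interface, nothing about embedlets.
class SerialPortDriver {
    private JaplEventListener listener;

    void setListener(JaplEventListener l) { listener = l; }

    // Stand-in for a hardware interrupt propagating up as a JAPL event.
    void simulateRxFull() {
        if (listener != null) listener.onDeviceEvent("RX_BUFFER_FULL");
    }
}

// Higher layer: the embedlet knows the JAPL contract, never vice versa.
class SerialEmbedlet implements JaplEventListener {
    String lastEvent;

    public void onDeviceEvent(String event) { lastEvent = event; }
}
```

This mirrors the servlet-engine analogy: the driver, like the server, publishes events through a contract it owns, and any number of embedlets can subscribe without the driver ever depending on embedlet types.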