embedlets-developer Mailing List for Outpost Embedlet Container (Page 24)
Status: Alpha
Brought to you by:
tkosan
|
From: Brill P. <bri...@ro...> - 2003-02-19 05:07:17
|
I think they are traditionally used mainly for "glue" chip tasks, and can replace a large number of discrete components quickly, and allow updating via software... similar to uCs but really for a different purpose (although the lines seem to be blurring significantly).

- Brill Pappin
Rogue Robotics
www.roguerobotics.com

----- Original Message -----
From: "Andrzej Jan Taramina" <an...@ch...>
To: <emb...@li...>
Sent: Tuesday, February 18, 2003 4:36 PM
Subject: [Embedlets-dev] [HW] HDL, FPGA and other assorted acronyms....

> Topic tags: [ARCH][JAPL][WIRING][DOCS][MGMT][STRATEGY][NEWBIE]
> _______________________________________________
>
> Took me a while to figure out what FPGA meant.
>
> Anyway... this seems to be very low-level, hardware-oriented stuff. I think that logic arrays (HDL, JHDL) are a great technology (reminds me of my youth with digital logic decades ago) but way too low a level for our needs (though it might make sense for device designers, who would then expose their hardware with a JAPL interface). There might be some good insights on Graphical Wiring techniques there. But I don't see much regarding our Embedlet Container implementation.
>
> > I think that putting Embedlets into an FPGA using this kind of technology could
> > prove to be very interesting.
>
> Wouldn't you use an FPGA to design/create/configure a processor that then could run something like Java/Embedlets? In which case it just looks like another processor (aJile, Intel, PIC, whatever) to us. Again, it seems that FPGA is much lower level (hardware design) than we are targeting.
>
> Unless I missed something. ;-)
>
> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com
>
> -------------------------------------------------------
> This sf.net email is sponsored by: ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> Embedlets-developer mailing list
> Emb...@li...
> https://lists.sourceforge.net/lists/listinfo/embedlets-developer |
|
From: Christopher S. <cs...@oo...> - 2003-02-19 03:58:05
|
> Topic tags: [ARCH][JAPL][WIRING][DOCS][MGMT][STRATEGY][NEWBIE]
> _______________________________________________
>
> I think Chris said (or was it Ted):

Chris

> > Your point is accepted. I was digressing to a more general discussion of event vs state driven systems because I have run into some issues on this front.
>
> Turing's Law would imply that all applications are both state and event based at some level. I see no clash between these two. In fact, I would suggest that applications need both to function. However, some people do not explicitly differentiate between the two, and as the lines blur, therein start the arguments.
>
> Almost all but the most trivial programs have to keep track of state of some sort, whether that is of a physical device on a production line, bank account balance, user shopping cart or whatever. The key is what causes this state to change? The receipt of an "event" (I'm using the term in a generic sense). The physical device tripped a sensor switch, a deposit was received or the user clicked on the "checkout" button.

The 'initial' state needs to be considered before events can operate! A switch may have tripped before the embedded processor was powered up, a bank account may have funds before the application was installed....

> My recent suggestion that we split the Context objects away from the Embedlet code (and not store state in the embedlet instance variables) is a manifestation of this. The Context objects are responsible for keeping track of state. The Embedlet Events (and code that produces events) are responsible for triggering state changes. One is data-oriented. The other is process-oriented. And it's typically a "good thing(tm)" to keep the two separate and distinct in application design, since it makes for clean and maintainable code.

The state (in the simplest case) is the value of a switch input or the result of a boolean operation on several inputs. It would not seem to make sense to store this in any place other than the processing component - the embedlet. The model for this is digital logic: gates are state driven, latches are event driven or synchronized by a clock. They co-exist, but the designer has to be aware of initial conditions, race conditions etc. Moving the state outside of the 'visible' embedlet realm seems artificial and unnecessary. Some embedlets will be state driven and some event driven. Or they may need to function in a mixed environment.

> I think trying to choose one paradigm or the other is nonsensical, since you typically need both in any real-world situation.

My initial point was that there ARE mixed paradigms that need to co-exist (my use of 'vs' was a little misleading). The solution that I am striving for would have to accommodate both in a way that does not create mutually exclusive requirements, indeterminate initial values or race conditions.

Maybe it is as simple as providing a mechanism for initial conditions to propagate forward on start up. This would limit the requirement for 'reverse chain logic', where an output has to 'look' all the way back into the logical tree to determine a correct value. It would only have to wait for an event and then 'look' at the immediate nodes and assume that they were current. The problem that I see with this is that if an output is dependent on several inputs, each of which generates an initial event, the output could change for each event, causing quite a bit of consternation in the chocolate factory.

Hence my earlier suggestion of an initialConditions() method prior to start(). This would allow initial conditions to propagate and settle prior to asserting any outputs. The lifecycle sequence would then be:

1. constructor() - the embedlet constructor
2. addListener() - wire them together
3. initialize() - get ready, configure hardware/JAPL
4. initialConditions() - inputs fire initial events, outputs settle
5. start() - outputs asserted, inputs start generating change events
...
6. stop() - shut down event generation
7. terminate() - clean up your mess

> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com |
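[Editor's sketch] The seven-step lifecycle proposed above can be made concrete as a small Java interface plus a driver loop. All names here are hypothetical (the actual Embedlet API was still under discussion on this list); the point is that the container finishes each phase for every embedlet before starting the next, so initial events can propagate and settle before any output is asserted:

```java
import java.util.List;

// Hypothetical embedlet lifecycle, following the proposed sequence.
// Construction and addListener() wiring happen before these phases.
interface Embedlet {
    void initialize();        // 3. get ready, configure hardware/JAPL
    void initialConditions(); // 4. inputs fire initial events, outputs settle
    void start();             // 5. outputs asserted, change events begin
    void stop();              // 6. shut down event generation
    void terminate();         // 7. clean up your mess
}

class LifecycleDriver {
    // Complete each phase for ALL embedlets before moving to the next,
    // so initial conditions settle before start() asserts any outputs.
    static void run(List<Embedlet> embedlets) {
        for (Embedlet e : embedlets) e.initialize();
        for (Embedlet e : embedlets) e.initialConditions();
        for (Embedlet e : embedlets) e.start();
        // ... normal event processing happens between start() and stop() ...
        for (Embedlet e : embedlets) e.stop();
        for (Embedlet e : embedlets) e.terminate();
    }
}
```

A container would also need the dependency ordering discussed elsewhere in this thread (consumers before producers); this sketch only shows the phase sequencing.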
|
From: Holger B. <ho...@bi...> - 2003-02-18 22:51:24
|
> Unless I missed something. ;-)

Yes, what you are missing is that this description language is an abstraction layer, regardless of the level of implementation. Much more complex things are done in descriptive languages than any embedded device should care about. For example, the aJile processor is available as a VHDL (AFAIR, otherwise it is Verilog) description. But there is no problem having a single NOR gate as an 'embedlet'. The aJile ASIC itself is developed with this language, and HDL descriptions already exist for most embedded controllers/devices. One is able to describe _any_ digital hardware device, including temperature sensors and mainframe computers. You are able to use these bricks in any other construction and combination.

HDL has inputs, outputs, events, vectors of bits, vectors of vectors etc. pp. It is _not_ a modeling language, like UML; it is a reality language. Wiring, testing, simulation and compiler tools are parts of the language.

IMHO, if I read the docs of embedlets, you are defining a second approach to HDL: your own Outpost HDL. I do not know... looks like a second invention of the wheel to me.

bax |
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 22:15:30
|
I think Chris said (or was it Ted):

> Your point is accepted. I was digressing to a more general discussion of event vs state driven systems because I have run into some issues on this front.

Turing's Law would imply that all applications are both state and event based at some level. I see no clash between these two. In fact, I would suggest that applications need both to function. However, some people do not explicitly differentiate between the two, and as the lines blur, therein start the arguments.

Almost all but the most trivial programs have to keep track of state of some sort, whether that is of a physical device on a production line, bank account balance, user shopping cart or whatever. The key is what causes this state to change? The receipt of an "event" (I'm using the term in a generic sense). The physical device tripped a sensor switch, a deposit was received or the user clicked on the "checkout" button.

My recent suggestion that we split the Context objects away from the Embedlet code (and not store state in the embedlet instance variables) is a manifestation of this. The Context objects are responsible for keeping track of state. The Embedlet Events (and code that produces events) are responsible for triggering state changes. One is data-oriented. The other is process-oriented. And it's typically a "good thing(tm)" to keep the two separate and distinct in application design, since it makes for clean and maintainable code.

I think trying to choose one paradigm or the other is nonsensical, since you typically need both in any real-world situation.

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com |
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 22:04:31
|
Brill points out:

> I think you have to assume that *all* devices are resource intensive... of course, this also depends on your definition of "resource intensive". What I'm talking about is that no matter how you slice it, if you're talking (over 1-Wire for instance) to a peripheral, the whole 1-Wire system will be tied up until that comm is done... and multiple requests can only happen in sequence. To me, that's resource intensive.

In that situation, there are a couple of ways to deal with it. What would be common is that the Embedlet would request the start of such a transfer (1-Wire) using a method exposed by the device driver in JAPL, but then the device driver would be totally responsible for the completion of the request, and for declining or queuing other requests. This basically presumes that the driver is running in its own thread, or has interrupt callbacks registered so it will be pinged when things happen (next byte required, buffer empty, transfer complete, etc.). This might require a low-level timer if the physical device needs to be polled or is timing dependent (e.g. bit banging various protocols).

How the Embedlet knows that the operation is complete can be handled in a few different ways. The JAPL interface could expose a status method (isDone()) that the Embedlet could poll at its convenience (note... this might mean there are two timers going: a low-level device implementation timer and a higher-level, more granular Embedlet polling timer). Nothing wrong with this approach. Or the driver could expose an interrupt-based callback, where it would tap the Embedlet on the shoulder (with a callback or JAPL event propagation) when the operation completed.

What if a different embedlet tries to issue a different request when the first one is still in progress (and requests have to be serial in nature)? Number of options... the JAPL interface could throw a DeviceBusyException plus provide a status method (so the Embedlet could avoid issuing the request and getting the exception), or it could queue the request for later (though queuing is probably not worth the effort in most cases).

In all cases, the Embedlet should not care about, nor see, these internal implementation details that the device driver is using. However, it should also not care if the operation is resource intensive by default... if it's a long-running operation that has to be handled serially, the driver should not allow the Embedlet to do anything to contravene that. This, of course, should be noted in the documentation for that particular device/driver, so that the Embedlet programmer will take it into account (they won't try to re-issue a request till the first one is done, even though the driver would prevent this). At a code level, resource intensiveness is not something an Embedlet would typically care about, nor should it. So I guess I fail to understand your point, Brill.

> The Embedlet architecture should not know or care about how the underlying system manages its resources. It may use interrupts, or threads, or whatever else it has to complete the tasks requested of it by an embedlet.

We're in violent agreement again!

> the JAPL impl might manage the timer, but the embedlet system will have no say in how it's done, except to configure it through the JAPL. Remember, JAPL is simply a contract, not an API of executable code. The embedlets *can't* know how to manipulate some proprietary timer (or other resource) on the processor, unless the embedlet is tied to the specific processor (which I think we're trying not to do).
> Also, I don't think you can specify what internal resources the JAPL impl will be using... a lot of things *need* the hardware timers. For instance, a lot of bit-banged protocol impls use the timers, and/or interrupts.

Of course... this is just good OO design with encapsulation. But it does not require that all timers be implemented in the JAPL layer. Polling timers at the Embedlet level make a lot of sense too, since some polling may be application (and not hardware) defined. I think you are preaching to the converted. ;-)

> I mean that the JAPL, and its underlying code, don't need to listen for Embedlet events... however an embedlet might want to listen for JAPL events.

I thought that was what I said. ;-) Good design means that lower level constructs do not know about higher level ones (e.g. JAPL does not know about Embedlets), though the reverse is not the case (Embedlets can and will have to know about JAPL devices).

> From my point of view, the JAPL impl should know about what it can, and can't do, and be able to tell the embedlet to piss-off if it's abusing the peripheral in some way. It might also simply block the call (which will be happening anyway in a threaded environment) or whatever other method is relevant in the context of the peripheral.

Yup... just like I outlined in the examples earlier in this email. I was not suggesting anything otherwise.

> I see this as essential to ensuring the processor is stable, regardless of what some fool (no accusations) does to the embedlet configuration... from the user perspective, the embedlet "server" is just a black box, they wouldn't have a clue about the consequences of a particular configuration, nor should they have to be concerned that they would take down the server with their own bad/incorrect code.

Well... this is a bit of a pipe dream. Due to the nature of embedded systems, some "leakage" of hardware knowledge (e.g. resource intensiveness of the devices, etc.) is probably gonna happen. When you do Servlets, you have to know something about HTTP, otherwise you'll design a web application that sucks big time. Same with devices and Embedlet Container configuration. They will have to (eventually) understand some of the considerations... so it behooves us to document what these might be clearly and obviously to help them. Not everything can be hidden and encapsulated totally (nor should it). It takes a while to understand a new container paradigm and to absorb its quirks and such... that applies to almost any form of software development. I don't think Embedlet Containers will be any different.

On an embedded machine, without the protection of virtual memory management, it would be easy to write code that takes down the container. There is no way around that, except to highlight the do's/don'ts for our environment and provide some guidelines. This is especially important in event-driven systems, where runtime behaviour is dynamic and not always deterministic. It's also trivial to write an infinite loop that sucks processor power... and in a non-threaded environment (which some of our platforms might be) you're history. I understand your concerns, but current techniques won't prevent this kind of thing from happening. Our best bet is to do what we can (and is simple) to prevent obvious problems (e.g. don't let a developer do something they shouldn't... submit parallel device requests when only serial requests are supported), and provide examples, tutorials, guides and conventions to help them avoid the inevitable problems.

And besides, isn't that what testing is for? Any reputable developer would test their Outpost application extensively to shake out such issues prior to production deployment. ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com |
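[Editor's sketch] The "DeviceBusyException plus status method" option discussed above could look roughly like this for a serial-only device. JAPL is a contract, not a defined API, so every name here (OneWireDriver, isDone(), startTransfer(), onTransferComplete()) is an assumption for illustration:

```java
// Overlapping requests are declined rather than queued, per the message
// above (queuing is "probably not worth the effort in most cases").
class DeviceBusyException extends Exception {}

class OneWireDriver {
    private volatile boolean busy = false;

    /** Status method the embedlet can poll at its convenience. */
    boolean isDone() { return !busy; }

    /** Start a transfer; a second request while busy is refused. */
    void startTransfer(byte[] data) throws DeviceBusyException {
        if (busy) throw new DeviceBusyException();
        busy = true;
        // The hardware now clocks the bytes out; the driver's interrupt
        // (or low-level timer) path calls onTransferComplete() when done.
    }

    /** Invoked by the low-level completion path when the bus is free again. */
    void onTransferComplete() { busy = false; }
}
```

An embedlet would either poll isDone() before calling startTransfer(), or catch DeviceBusyException and retry later; either way, the serialization rule lives entirely inside the driver.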
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 21:37:08
|
One of the challenges I see, regarding Graphical Wiring, is the fact that the Embedlet container uses a dynamic, event-driven paradigm which borrows from pub/sub messaging techniques (albeit in a lightweight manner). What this means is that there really are no fixed "connections" or "wires" between embedlets, per se. Everything is mediated by the Event Manager, which manages the queues and forwards produced events to registered consumers dynamically at runtime.

When a developer creates an embedlet (or service or adapter) they will specify what event types the embedlet consumes and/or produces in the embedlet.xml config file (see latest Arch Disc Doc v1.4 for a more detailed discussion on this, in the appendix on Lifecycle dependency management). What this means is the container (or the Wiring tool) will know what the event inputs and outputs are when you drop an Embedlet on the wiring board. Now if you drop another Embedlet on the board, it will know what it produces/consumes too. That is sufficient information (in many cases) for the Graphical Wiring tool (or the Event Manager in a container) to automatically "wire" the two together. If one embedlet produces Events of type A and another consumes Events of type A, then the tool can automagically draw the line between the two embedlets. There would be no need for the user to manually draw such a connection.

A complication arises if an Embedlet only wants to receive (or send) events of a particular type from/to a specific Embedlet (or collection of Embedlets) rather than all consumers of that particular event type. Then the user employing the Wiring Tool would have to place restrictions on which embedlets would receive the events. For instance, they could delete "wires" that they didn't want. This implies, however, that our Event Manager has some more complicated "matching" capability that can be employed by Embedlets, so that you don't just register as a consumer of a particular type of event, but that type PLUS some specific properties (sender/receiver name, value, etc.) in the event itself. This is not a big deal from an EM Service implementation perspective (though we need to walk a fine line with how "fancy" we get on the conditions, at least for the initial incarnations). But it does raise some questions on how you visually specify these additional conditions in a Graphical tool (or maybe you do it with a pulldown tab/window rather than using visual wiring metaphors). For instance... do you draw all the "connections" based only on event type (and no other conditions) and then allow the user to modify/delete those they don't want? Or do you start with no connections and only add the ones you want?

One of the beauties of pub/sub is that the publisher usually doesn't care who is getting the events (decoupled messages), which defines "filtering" from the perspective of the consumer only (all the producer does is "tag" the message with properties that consumers might want to use for filtering purposes). This provides for a very dynamic, self-configurable, and easily extensible environment (e.g. to add logging of a particular event type, you just drop in a logging Embedlet that registers interest in receiving that particular event, and the producer neither cares nor knows that a new consumer was added). But there are situations (maybe for security reasons) where the producer might want to specify what consumers are eligible to receive a particular event. Or maybe we'll need filtering at both consumer and producer ends.

I'm sure there are some other ways of doing this besides just the approaches I've listed above, but regardless, the visual metaphors to support the underlying event/messaging concepts (whichever ones we choose to implement) are not always as intuitively obvious as those for a fixed connection situation.

Get that GW thinking cap on, Ted! It'll keep ya warm till the power comes back on! ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com |
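[Editor's sketch] The "matching" capability discussed above (register for an event type PLUS conditions on the event's properties) fits in a few lines of Java. Event, EventManager, and the property names are illustrative stand-ins, not the actual EM Service API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// An event carries a type tag plus arbitrary properties the producer sets.
class Event {
    final String type;
    final Map<String, String> props;
    Event(String type, Map<String, String> props) {
        this.type = type;
        this.props = props;
    }
}

class EventManager {
    // Each registered consumer is a match predicate paired with its inbox.
    private final Map<Predicate<Event>, List<Event>> consumers = new LinkedHashMap<>();

    /** Register interest in a type PLUS extra conditions on event properties. */
    List<Event> subscribe(String type, Predicate<Event> filter) {
        List<Event> inbox = new ArrayList<>();
        consumers.put(e -> e.type.equals(type) && filter.test(e), inbox);
        return inbox;
    }

    /** Producers just publish; they neither know nor care who is listening. */
    void publish(Event e) {
        consumers.forEach((match, inbox) -> {
            if (match.test(e)) inbox.add(e);
        });
    }
}
```

Note that the producer side stays fully decoupled: adding a logging consumer is just one more subscribe() call, with no change to any publisher.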
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 21:37:03
|
Took me a while to figure out what FPGA meant.

Anyway... this seems to be very low-level, hardware-oriented stuff. I think that logic arrays (HDL, JHDL) are a great technology (reminds me of my youth with digital logic decades ago) but way too low a level for our needs (though it might make sense for device designers, who would then expose their hardware with a JAPL interface). There might be some good insights on Graphical Wiring techniques there. But I don't see much regarding our Embedlet Container implementation.

> I think that putting Embedlets into an FPGA using this kind of technology could
> prove to be very interesting.

Wouldn't you use an FPGA to design/create/configure a processor that then could run something like Java/Embedlets? In which case it just looks like another processor (aJile, Intel, PIC, whatever) to us. Again, it seems that FPGA is much lower level (hardware design) than we are targeting.

Unless I missed something. ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com |
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 21:01:13
|
Hi all embedletites!

I just posted a revised version of the Embedlet/Outpost Architecture Discussion Document (ver 1.4) in the CVS repository. Basically, I've summarized and included some of the recent discussions/design decisions that we had over the past few weeks, so that they are "preserved" in one place.

New stuff in this version includes:

- Merged Config and Context Services into a single Context Service
- Added note to Lifecycle service that it will co-operate with the Context Service
- Added a paragraph to the Event Management Service regarding event queue overflows
- Added Appendix C: Context, Lifecycle & Persistence Services design discussions (including our Serialization thoughts)
- Added Appendix D: Embedlets, Services & Adapters, Oh My!
- Added Appendix E: Lifecycle Dependency Management

The bulk of the new stuff is in the three new Appendices. Enjoy!

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com |
|
From: Christopher S. <cs...@oo...> - 2003-02-18 20:57:22
|
> Topic tags: [ARCH][JAPL][WIRING][DOCS][MGMT][STRATEGY][NEWBIE]
> _______________________________________________
>
> Chris postulates a polling scenario:
>
> > 1. An output (embedlet) controls a raw material supply function and must be off on startup.
> > 2. The output is controlled by a threshold Embedlet monitoring the raw material supply level.
> > 3. The threshold embedlet is event driven and fires when the level exceeds an upper value or drops below a lower threshold.
> > 4. When the level is low the output should come on to start the raw material feed. When the upper level is exceeded the output should turn off to stop the supply.
> >
> > The problem occurs when the supply is initially low and not changing because the output is off. The threshold embedlet sees no change so the event is not fired and the output remains off... the manufacturing process cannot start.
>
> The way I envision you would handle this scenario is as follows:
>
> 1) The Threshold embedlet would be listed as "dependent" on the Output embedlet in the application config/wiring file (this might be derived from who consumes and who produces events of a particular type). Using this dependency info, the lifecycle service will ensure that the Output embedlet is started and listening for events first. We might need to find a better word than "dependent" here... since the usage is a bit backwards. From an event/messaging paradigm, consumers should be started prior to producers, for obvious reasons.
>
> 2) The Threshold embedlet will thus be assured that when its start() lifecycle method is called, the Output embedlet is started and running (listening for the threshold events).
>
> 3) The Threshold embedlet would be responsible (in the start() method) for initializing itself. That is, it should check the raw material supply level (likely using a JAPL API call to the sensor that measures this), and if the level is low, it should post a "start material feed" threshold event.

This is what I am referring to as 'reverse chain', because if the Threshold is preceded by other embedlets, such as a calibrator, the processing flow is going in reverse, based on demand, as opposed to forward, based on events. The Threshold embedlet is 'demanding' the state of the embedlet(s) that precede it in the event chain. On startup the preceding 'producer' embedlets will not have been started, since the 'consumer' needs to start first! If every embedlet posts an initial event in its start() method, the consumers will be firing events based on incomplete information.

The OopScope OPC container does what you have proposed (minus the reverse chain):

1. Initialize and link components in event order.
2. Start the components in reverse event order. This enables the event source/listener chain.
3. Process events as they occur....
4. Stop the components in forward event order to 'dry up' the events.

This straightforward method has run into the stumbling block that I outlined, both with boolean and numeric processing. I have worked around it by making critical logic flows level sensitive, driven by a timer, so that an event is generated every timer cycle. This bypasses the efficiencies of an event-driven system, however.

I am thinking that embedlets that are 'Inputs' need to have a checkInitialConditions() method that is called prior to start(). This would allow input embedlets to stage an event to fire when started, based on the expected state of the input relative to actual. Or maybe this is just built into the start() method of inputs and is a documented, not enforced, behavior.

> 4) The Output embedlet would then receive this event at startup, and all would be well.
>
> The solution is based on the LifeCycle service controlling the sequence in which related/dependent Embedlets are started up, based on the config information.
>
> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com |
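[Editor's sketch] The threshold/output startup scenario above can be sketched in Java. SupplyOutput and ThresholdEmbedlet are hypothetical stand-ins, with a DoubleSupplier playing the role of the JAPL level-sensor call; the point is that start() posts an initial event when the supply is already low, so the already-listening consumer does not wait forever for a change that cannot happen:

```java
import java.util.function.Consumer;
import java.util.function.DoubleSupplier;

// Consumer side: controls the raw material feed, must be off on startup.
class SupplyOutput {
    private boolean on = false;
    boolean isOn() { return on; }

    void onThresholdEvent(String event) {
        if (event.equals("START_FEED")) on = true;
        else if (event.equals("STOP_FEED")) on = false;
    }
}

// Producer side: fires threshold events, including an initial one at start().
class ThresholdEmbedlet {
    private final double low, high;
    private final DoubleSupplier levelSensor;  // stand-in for a JAPL call
    private final Consumer<String> eventSink;  // stand-in for event posting

    ThresholdEmbedlet(double low, double high,
                      DoubleSupplier levelSensor, Consumer<String> eventSink) {
        this.low = low;
        this.high = high;
        this.levelSensor = levelSensor;
        this.eventSink = eventSink;
    }

    /** Called only after the consumer (SupplyOutput) is already listening. */
    void start() {
        double level = levelSensor.getAsDouble();
        if (level <= low) eventSink.accept("START_FEED");   // initial event
        else if (level >= high) eventSink.accept("STOP_FEED");
        // between thresholds: wait for change events as usual
    }
}
```

Whether this initial check lives in start() or in a separate checkInitialConditions() phase is exactly the open question in the message above; the sketch shows the start() variant.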
|
From: Brill P. <bri...@ro...> - 2003-02-18 17:43:24
|
> Your point is accepted. I was digressing to a more general discussion of event vs state driven systems because I have run into some issues on this front.

I understand, been in a few heated ones on this subject myself ;)

- Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 17:43:16
|
> The embedlet/JAPL timing specifications will need to be carefully considered in any mix-and-match design. If one embedlet is demanding resources or performance beyond those available, the system is likely to fail at some point, regardless of whether a particular JAPL or embedlet manages all of its internal timing needs.

From my point of view, the JAPL impl should know about what it can, and can't do, and be able to tell the embedlet to piss-off if it's abusing the peripheral in some way. It might also simply block the call (which will be happening anyway in a threaded environment) or whatever other method is relevant in the context of the peripheral.

I see this as essential to ensuring the processor is stable, regardless of what some fool (no accusations) does to the embedlet configuration... from the user perspective, the embedlet "server" is just a black box; they wouldn't have a clue about the consequences of a particular configuration, nor should they have to be concerned that they would take down the server with their own bad/incorrect code.

- Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 17:32:50
|
> If certain device operations are resource intensive (per Brill's comment), then this should be documented clearly for the particular JAPL/Device implementation. I'm not sure it will make sense to complicate the initial implementations by having the device driver level try to manage such restrictions. Let's keep it simple at first, and then see how it evolves.

I think you have to assume that *all* devices are resource intensive... of course, this also depends on your definition of "resource intensive". What I'm talking about is that no matter how you slice it, if you're talking (over 1-Wire for instance) to a peripheral, the whole 1-Wire system will be tied up until that comm is done... and multiple requests can only happen in sequence. To me, that's resource intensive.

> One key issue I see is that stringent device level timing will invariably require threads to manage. Bit of a problem on platforms that do not provide threads. Using interrupts might be a potential workaround in some cases.

The Embedlet architecture should not know or care about how the underlying system manages its resources. It may use interrupts, or threads, or whatever else it has to complete the tasks requested of it by an embedlet.

> > In the case of a timer, of course the embedlet system is going to do the
> > configuration, however it will *not* have direct control over the timer...
> > it must control the timer through the JAPL contract.
>
> The key question here is whether the device/JAPL implementation should manage a timer or not. Unless it's absolutely necessary (due to stringent hardware requirements) I would say that JAPL devices should not implement any internal timers in most cases. See comments above. If they do need an internal timer then of course this should be managed through the JAPL API/Contract.

The JAPL impl might manage the timer, but the embedlet system will have no say in how it's done, except to configure it through the JAPL. Remember, JAPL is simply a contract, not an API of executable code. The embedlets *can't* know how to manipulate some proprietary timer (or other resource) on the processor, unless the embedlet is tied to the specific processor (which I think we're trying not to do).

Also, I don't think you can specify what internal resources the JAPL impl will be using... a lot of things *need* the hardware timers. For instance, a lot of bit-banged protocol impls use the timers, and/or interrupts.

> > Oh, I don't mean to say that the JAPL has to do this... it only needs to know
> > the contract that will allow it to configure the system, listen for events etc.
>
> Not sure what you mean by "events" in this context, Brill? If you mean hardware events (interrupt, receive buffer full, etc.) then I have no problem with this. However, JAPL devices should not know anything about Embedlet events and should not be listening for such.

I mean that the JAPL, and its underlying code, don't need to listen for Embedlet events... however an embedlet might want to listen for JAPL events. I think it's vital to separate the low level stuff from the embedlet stuff... a good example is a server... you may have a very robust servlet engine running on it, but the servlet engine doesn't get a whole lot of say in how the server manages its resources. We have the same situation here, and the "Peripherals" simply become Objects that the embedlet can use to do its job, based on the JAPL contract it knows about (which is common to all JAPL implementations).

- Brill |
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 16:17:02
|
Chris said: > Agreed, if there are time critical activities that need to have > uninterrupted attention, such as receiving bits from a serial stream, then > the JAPL needs to maintain timing control for the duration of the critical > event. At a higher level where the output of the JAPL needs to integrate > with other processes, the embedlet(s) should have the option to control > the timebase of the system. It is only at the embedlet level that the > overall timing requirements are defined. This may be based on the minimum > Nyquist sampling/update rate of other inputs or the outputs OR client > application reporting requirements. It could be on a timeframe of > milliseconds to days or weeks. In this case the JAPL may be the resource > hog if allowed to run at a fixed, high rate. Makes a lot of sense to me. One way to help determine where the responsibility lies is to look at whether the timing issue is hardware or application related. If you are producing a stream of pulses on a pin, then the timing is hardware based, and likely should be handled inside of the JAPL implementation. Or if a device needs regular attention to function correctly (e.g. not overflowing stream buffers). If the application specifications say that you need to take a temperature reading every 5 minutes, this is an application requirement, and should be handled using an Embedlet and the Timer Service. The JAPL driver should not be involved (except when its API to get the temp gets called). I'm sure there are some grey areas.....but most should fall into hardware or application requirements fairly easily, which will then point to where they should be implemented. If certain device operations are resource intensive (per Brill's comment), then this should be documented clearly for the particular JAPL/Device implementation. I'm not sure it will make sense to complicate the initial implementations by having the device driver level try to manage such restrictions. 
Let's keep it simple at first, and then see how it evolves. One key issue I see is that stringent device level timing will invariably require threads to manage. Bit of a problem on platforms that do not provide threads. Using interrupts might be a potential workaround in some cases. > The embedlet/JAPL timing specifications will need to be carefully > considered in any mix-and-match design. If one embedlet is demanding > resources or performance beyond those available the system is likely to > fail at some point regardless of whether a particular JAPL or embedlet > manages all of its internal timing needs. Yup....and this would likely produce the symptom of infinitely growing event queues, so it could be detectable in some circumstances. One of the drawbacks of event/message based systems is that the event queues are the "buffer" between processes that can run at different speeds, and if one of those processes becomes a bottleneck, all hell can break loose. Tough problem to solve....but we can mitigate it by making the Event Manager configurable (e.g. max event queue sizes, warning levels and such). But again, let's start simple and see where we end up. Brill said: > In the case of a timer, of course the embedlet system is going to do the > configuration, however it will *not* have direct control over the timer... > it must control the timer through the JAPL contract. The key question here is whether the device/JAPL implementation should manage a timer or not. Unless it's absolutely necessary (due to stringent hardware requirements) I would say that JAPL devices should not implement any internal timers in most cases. See comments above. If they do need an internal timer then of course this should be managed through the JAPL API/Contract. > Oh, I don't mean to say that the JAPL has to do this... it only needs to know > the contract that will allow it to configure the system, listen for events > etc. Not sure what you mean by "events" in this context, Brill? 
If you mean hardware events (interrupt, receive buffer full, etc.) then I have no problem with this. However, JAPL devices should not know anything about Embedlet events and should not be listening for such. Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com |
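The "take a temperature reading every 5 minutes" case above could be sketched roughly as follows. The Timer Service and sensor interface names are invented, since neither existed as code at this point; the idea is only that the application-level period lives in the embedlet, and the device driver is touched solely when its read method is called.

```java
// Rough sketch: application timing (poll every N ms) is handled by the
// embedlet via a timer service; device access goes through a JAPL-style
// interface. All names are hypothetical.
import java.util.Timer;
import java.util.TimerTask;

interface TempSensor {           // stand-in for a JAPL device contract
    double readCelsius();
}

public class TempPollingEmbedlet {
    private final TempSensor sensor;
    private volatile double lastReading;

    public TempPollingEmbedlet(TempSensor sensor) { this.sensor = sensor; }

    // one poll; the driver is involved only here, when its API is called
    public void pollOnce() { lastReading = sensor.readCelsius(); }

    // called by the container lifecycle; the period is an application choice
    public void start(Timer timerService, long periodMillis) {
        timerService.scheduleAtFixedRate(new TimerTask() {
            public void run() { pollOnce(); }
        }, 0, periodMillis);
    }

    public double getLastReading() { return lastReading; }

    public static void main(String[] args) {
        TempPollingEmbedlet e = new TempPollingEmbedlet(() -> 21.5);
        e.pollOnce();                            // what each timer tick does
        System.out.println(e.getLastReading());  // prints 21.5
    }
}
```

Note that swapping the 5-minute requirement for a 1-second one is purely a configuration change in the embedlet; the JAPL driver is unaffected, which is exactly the hardware-versus-application split argued for above.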
|
From: Andrzej J. T. <an...@ch...> - 2003-02-18 16:16:47
|
Chris postulates a polling scenario: > 1. An output (embedlet) controls a raw material supply function and must > be off on startup. 2. The output is controlled by a threshold Embedlet > monitoring the raw material supply level. 3. The threshold embedlet is > event driven and fires when the level exceeds an upper value or drops > below a lower threshold. 4. When the level is low the output should come > on to start the raw material feed. When the upper level is exceeded the > output should turn off to stop the supply. > > The problem occurs when the supply is initially low and not changing > because the output is off. The threshold embedlet sees no change so the > event is not fired and the output remains off... the manufacturing process > cannot start. The way I envision you would handle this scenario is as follows: 1) The Threshold embedlet would be listed as "dependent" on the Output embedlet in the application config/wiring file (this might be derived from who consumes and who produces events of a particular type). Using this dependency info, the lifecycle service will ensure that the Output embedlet is started and listening for events first. We might need to find a better word than "dependent" here....since the usage is a bit backwards. From an event/messaging paradigm, consumers should be started prior to producers, for obvious reasons. 2) The Threshold embedlet will thus be assured that when its start() lifecycle method is called, the Output embedlet is started and running (listening for the threshold events). 3) The Threshold embedlet would be responsible (in the start() method) for initializing itself. That is, it should check the raw material supply level (likely using a JAPL API call to the sensor that measures this), and if the level is low, it should post a "start material feed" threshold event. 4) The Output embedlet would then receive this event at startup, and all would be well. 
The solution is based on the LifeCycle service controlling the sequence in which related/dependent Embedlets are started up, based on the config information. Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com |
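The start-order idea (consumers listening before producers fire) is essentially a dependency sort over the wiring information. Here is a minimal sketch with invented names; the LifeCycle service was only a design at this point, so this is one possible shape, not the project's actual code.

```java
// Hypothetical sketch: derive a startup order from "producer -> consumers"
// wiring so that every consumer is started (and listening) before the
// embedlets that produce its events. Names are invented for illustration.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LifecycleSketch {
    // returns a start order in which every consumer precedes its producers
    static List<String> startOrder(Map<String, List<String>> producesFor) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String producer : producesFor.keySet())
            visit(producer, producesFor, visited, order);
        return order;
    }

    private static void visit(String embedlet, Map<String, List<String>> deps,
                              Set<String> visited, List<String> order) {
        if (visited.contains(embedlet)) return;
        visited.add(embedlet);
        // start my consumers first, so they are listening when I fire
        for (String consumer : deps.getOrDefault(embedlet, Collections.emptyList()))
            visit(consumer, deps, visited, order);
        order.add(embedlet);
    }

    public static void main(String[] args) {
        Map<String, List<String>> wiring = new LinkedHashMap<>();
        wiring.put("ThresholdEmbedlet", Arrays.asList("OutputEmbedlet"));
        System.out.println(startOrder(wiring)); // [OutputEmbedlet, ThresholdEmbedlet]
    }
}
```

With this ordering in place, the Threshold embedlet's start() can safely post its initial "start material feed" event, knowing the Output embedlet is already listening.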
|
From: Holger B. <ho...@bi...> - 2003-02-18 07:05:44
|
> Holger, > > Nice to have you on board! ... I love being back ;-) >> What do you think about Hardware Description Language? > [...] >> I have a strong feeling ... > How about bypassing HDL by compiling Java directly into an FPGA!? > > http://www.xilinx.com/ise/advanced/forge.htm Yes, I am evaluating this tech too, but it has the old drawback: no reflection, no introspection, no dynamic classloading. Then I would rather stay with asm, Pascal, or at least C. > I have just signed up for the free 3 month Forge test period and I am > in the > process of trying to decide what Spartan development board to purchase. Ohh, may I help you? Take the one from Burch at http://www.burched.com.au/ - it is fairly cheap for the usable size, has many goodies available, and he is a really nice guy :) > I think that putting Embedlets into an FPGA using this kind of > technology could > prove to be very interesting. This is the direction I will definitely go. But I think it is too early to ride that wave. Meanwhile I have opened a project at www.opencores.org > I would love to hear your impressions of Forge when you get a chance. As I said above: I will not go deeper into this because of the lacking features. There are more Java-ish implementations (not affordable IP cores) on the market. But in any case, my concern was not to bind the embedlets to gate arrays or pour our PICs/AVRs/8051s into these. I will point my finger at the already existing HDLs, especially JHDL. The main features needed for the Embedlets are already implemented there, including the necessary abstraction for the tools. I would suggest: download JHDL, go through the Getting Started, have a look at the class and package structure - I am sure you will find something... We should be able to describe the individual sensors/actors and calculators/performers in a similar manner, based on standards like VHDL and EDIF, compilable from Java to a bitstream or to whatever we need. 
Not sure that my English is sufficient or that my brain is powerful at 8:00am without sleep. ... will be back bax |
|
From: Ted K. <tk...@ya...> - 2003-02-18 06:41:00
|
Holger, Nice to have you on board! > What do you think about Hardware Description Language? [...] > I have a strong feeling ... How about bypassing HDL by compiling Java directly into an FPGA!? http://www.xilinx.com/ise/advanced/forge.htm I have just signed up for the free 3 month Forge test period and I am in the process of trying to decide what Spartan development board to purchase. I think that putting Embedlets into an FPGA using this kind of technology could prove to be very interesting. I would love to hear your impressions of Forge when you get a chance. Ted __________________________________________________ Do you Yahoo!? Yahoo! Shopping - Send Flowers for Valentine's Day http://shopping.yahoo.com |
|
From: Holger B. <ho...@bi...> - 2003-02-18 06:13:21
|
hi there all, I have studied the docs on the sf.net site, as far as I am able :). And I have had a vision <seldom occurred, but mostly appreciated at least by the submitter/>. What do you think about Hardware Description Language? What do you think about HDL _implemented_, _compiled_ and _executed_ in Java? Just have a look at JHDL at: http://www.jhdl.org I have a strong feeling ... bax PS: There is something in my mind, but it's not me ... |
|
From: Christopher S. <cs...@oo...> - 2003-02-18 05:43:30
|
> > Topic tags:[ARCH][JAPL][WIRING][DOCS][MGMT][STRATEGY][NEWBIE] > _______________________________________________ > > > In regards to event vs state driven control: I see an issue > with strictly > > event driven Embedlets. The scenario for an industrial control is: > > Sorry guys... I don't think I was very clear... I wasn't saying > it should be > purely event driven (or at least that's not what I meant)... what I was > getting at is that the embedlet container should not have direct/exclusive > control over the drivers and the impl in general.. all it should > know about > anything is how to configure and read the devices, based on the contract > exposed by the JAPL. Brill, Your point is accepted. I was digressing to a more general discussion of event vs state driven systems because I have run into some issues on this front. > > - Brill Pappin |
|
From: Christopher S. <cs...@oo...> - 2003-02-18 02:20:17
|
> > Topic tags:[ARCH][JAPL][WIRING][DOCS][MGMT][STRATEGY][NEWBIE] > _______________________________________________ > > > > In this way the system can be built up from mix-and-match > components that > > may come from different vendors and no one component has > disproportionate > > 'control'. > > > > This also gives you the flexibility to have local independent loops that > run > > at different rates if that is a requirement. You (the Embedlet > user) get > to > > determine who is in control! You (the Embedlet designer) do not have to > > accommodate multiple, mutually exclusive requirements. > > Control is good... but there are situations (a lot of them) where > you really > don't want users to be able to mess with the internals... I guess the > embedlet user does need to be able to take some control, but I think it > should be limited to what the environment deems "safe" or efficient... for > instance, if the embedlet container wants to poll temperature every 500uS, > the JAPL implementer is not going to allow the container to force > me to make > a call to actually read the temperature that often (depending on the > granularity and speed of the sensors etc.) because it takes up resources I > need for other functions, and that I don't really want to give up to the > embedlet container. I was not suggesting that the container provide fixed timing control, rather that timing embedlets provide that function and get wired to the inputs and outputs as required for polling operations. This provides for multiple local timing loops that can accommodate the particular sensor or system requirements. > > So, I guess my point is that though you can delegate some control to the > embedlets, in a lot of circumstances, the underlying JAPL impl. > *must* keep > control over itself, and prevent the embedlet(s) from putting it into an > unstable or inoperative state. 
Agreed, if there are time critical activities that need to have uninterrupted attention, such as receiving bits from a serial stream, then the JAPL needs to maintain timing control for the duration of the critical event. At a higher level where the output of the JAPL needs to integrate with other processes, the embedlet(s) should have the option to control the timebase of the system. It is only at the embedlet level that the overall timing requirements are defined. This may be based on the minimum Nyquist sampling/update rate of other inputs or the outputs OR client application reporting requirements. It could be on a timeframe of milliseconds to days or weeks. In this case the JAPL may be the resource hog if allowed to run at a fixed, high rate. The embedlet/JAPL timing specifications will need to be carefully considered in any mix-and-match design. If one embedlet is demanding resources or performance beyond those available the system is likely to fail at some point regardless of whether a particular JAPL or embedlet manages all of its internal timing needs. > > - Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 01:39:56
|
> In regards to event vs state driven control: I see an issue with strictly > event driven Embedlets. The scenario for an industrial control is: Sorry guys... I don't think I was very clear... I wasn't saying it should be purely event driven (or at least that's not what I meant)... what I was getting at is that the embedlet container should not have direct/exclusive control over the drivers and the impl in general.. all it should know about anything is how to configure and read the devices, based on the contract exposed by the JAPL. - Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 01:39:49
|
> I have to side with Chris on this. Creating a JAPL interface layer that does > device-level protocol neutralization and provides a solid generic interface to > arbitrary devices is challenging enough. Then there will be the task of > providing JAPL interfaces for the large quantity of unique hardware devices > out there. KISS principle would seem to be a wise way to go for the JAPL > layer, just due to the size of the core task. Oh, I don't mean to say that the JAPL has to do this... it only needs to know the contract that will allow it to configure the system, listen for events etc. However, the JAPL can't do a darn thing without an implementation... which is going to have to take a lot of this into account anyway... luckily, because Cork is in its second generation, a lot of the infrastructure is already in place... when I finish a ref impl of the JAPL, I'll be able to simply hook it up to Cork to provide the underlying code... and Cork was designed in its second rev to be ported to other platforms (like TINI). > Let JAPL expose a common device interface. But let the Embedlets determine > how/when that interface should be invoked. Clean....simple...decoupled. That's what I mean: the JAPL is *just* a contract, it says nothing about the hardware it's running on etc... it only allows the embedlets to know how to call the underlying drivers. > This approach has some other key benefits. JAPL interfaces stay simple and > could be used by other embedded Java solutions (for example, hardcoded > solutions that don't use an Embedlet container). It also allows Embedlets to > use non-JAPL enabled devices easily, yet still leverage common services like > Timer/Events as needed. Embedlets will also be configured using XML > syntax, so that is a logical place to put things like timing requirements (how > often to poll a device). Keep in mind that Timers will also be used for other > purposes (eg. 
sending a status update to a remote enterprise system on a > periodic basis, Management services, etc), not just for device polling. Yup, that's understood... however the JAPL doesn't need to be so tied to Embedlets (and I don't think it should be) that it *requires* an XML parameter to function. In the case of a timer, of course the embedlet system is going to do the configuration, however it will *not* have direct control over the timer... it must control the timer through the JAPL contract. - Brill Pappin |
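As a concrete illustration of "the XML config is the logical place for timing requirements", a wiring entry might look something like the following. The element and attribute names are entirely hypothetical; no Embedlet configuration schema had been fixed at this point.

```xml
<!-- Hypothetical wiring entry: the polling period is application
     configuration, owned by the embedlet/container, not something
     baked into the JAPL driver itself. -->
<embedlet name="tempMonitor" class="com.example.TempPollingEmbedlet">
  <device ref="tempSensor1"/>                      <!-- JAPL device to read -->
  <timer service="timerService" period="300000"/>  <!-- 5 minutes, in ms -->
  <produces event="temperatureReading"/>
</embedlet>
```

This keeps the JAPL driver free of any XML dependency, as Brill argues: the container parses the config and drives the device purely through the contract.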
|
From: Brill P. <bri...@ro...> - 2003-02-18 01:29:12
|
> Logical functions are by nature state and not event driven. You can make > them event driven by performing the logical function in response to a > change, however this leads to initial state problems and race conditions. Or > you can use a timer to issue an event that causes the logical devices to > poll their state and assert some output. Complex systems are usually a mix > of state (polled) and event driven components. Yes, I understand what you are saying... I'm saying that there is no way a good implementation should leave the inner workings of the hardware to the enterprise developer... not if that's who we're still targeting... I tend to think it's bad practice for an API to expose everything it does... it allows too much in the way of proprietary code, security holes and simple failure from developers who don't know the caveats of a particular set of hardware. IMO the embedlets are the business logic on top of the device pool... the device pool knows what it has and what it can do, and the embedlets are the place where the information from the devices is configured, processed and distributed. - Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 01:23:52
|
Apache is fine with me ;) > I think we also need to get just a touch further down the road before we open > the floodgates and attract the level of attention that Apache partnership would > bring. A small tight group is more likely to produce a good Version 1.0 [...] > I think it would be better if we had more than just an "idea" before we approach > Apache, even informally. I agree, I'd like to have something ready and working to be able to demonstrate the ideas are sound, and work out any killer architecture "mistakes" we may have made (nothing beats practical application). I've been working on the JAPL for a bit, when I have time... once that's done, I'll re-implement Cork on TINI so there is a basis to work from, and something for the embedlet impl to do! - Brill Pappin |
|
From: Brill P. <bri...@ro...> - 2003-02-18 01:18:33
|
> In this way the system can be built up from mix-and-match components that > may come from different vendors and no one component has disproportionate > 'control'. > > This also gives you the flexibility to have local independent loops that run > at different rates if that is a requirement. You (the Embedlet user) get to > determine who is in control! You (the Embedlet designer) do not have to > accommodate multiple, mutually exclusive requirements. Control is good... but there are situations (a lot of them) where you really don't want users to be able to mess with the internals... I guess the embedlet user does need to be able to take some control, but I think it should be limited to what the environment deems "safe" or efficient... for instance, if the embedlet container wants to poll temperature every 500uS, the JAPL implementer is not going to allow the container to force me to make a call to actually read the temperature that often (depending on the granularity and speed of the sensors etc.) because it takes up resources I need for other functions, and that I don't really want to give up to the embedlet container. So, I guess my point is that though you can delegate some control to the embedlets, in a lot of circumstances, the underlying JAPL impl. *must* keep control over itself, and prevent the embedlet(s) from putting it into an unstable or inoperative state. - Brill Pappin |
|
From: Gregg G. W. <gr...@sk...> - 2003-02-18 01:00:15
|
>The best example of this was the introduction of tables, first >supported in Netscape 1.1. Tables were originally imagined to be just >that - a tool for presenting tabular data. Their subsequent adoption >by the user community as the basic method for page layout did not have >to be explicit in the design of either HTML or the browser, because >once its use was discovered and embraced, it no longer mattered what >tables were originally for, since they specified causes and effects And this is exactly the right indication that grid based layouts are much more useful than random positioning. Our eyes and minds demand order in what we see... ----- gr...@cy... (Cyte Technologies Inc) |