CAN Interface
This open source project proposes a CAN interface for embedded
applications. Although it meets many demands of real applications, it is
still considered a suggestion only; at many points it can be tailored or
conceptually changed, and a claim of this project is that its tools and
code elements remain beneficial even in these cases. It is not a fixed
solution but a framework for modelling the best suitable solution for a
particular environment.
The offered solutions capture the parts of the CAN stack from the CAN
driver (not included) up to the application interface. An interface for
sending and receiving CAN messages is required at the bottom, and a
race-condition free, signal based application code interface is offered at
the top. The application interface can be data or function based and it
offers not only the signal values but also a detailed transmission status,
including bus status, timeout information and the results of sequence
counter and checksum validation.
The basic concept of the CAN interface is the dispatcher engine. It has a
number of event ports and it delivers all arriving events to the handlers
dedicated to these events. The processed events are typically CAN message
reception, bus-on/bus-off or message send acknowledge events. However, the
meaning of the events is entirely transparent to the dispatcher. The
handlers are implemented in the application context and they alone define
the behavior of the application.
In a typical environment, an event port will be connected to a system CAN
interrupt via a queue of sufficient size. Typical events are the
notification of received messages, the acknowledgment of sent messages or
bus on/off events. The main function of the dispatcher engine reads and
dispatches all events from the ports which have arrived since the previous
invocation. This function will usually be called from a regular
application task, thus not in the context of the notifying interrupt. The
main function delegates the events to the registered handlers. All of this
is done in the context of the application task and, hence, there are no
race conditions. In this scenario, the (thread-safe) queue decouples the
interrupt context from the application task.
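The following minimal sketch illustrates this decoupling. All identifiers
in it (the sender function, the dispatcher handle and the name of the
dispatcher's main function) are assumptions for the sake of illustration;
the actual comFramework API may differ in names and signatures.

```c
#include <stdint.h>

/* Hypothetical declarations for illustration only; the actual
   comFramework API names and signatures may differ. */
extern void mySender_postEvent(unsigned int handle,
                               const uint8_t *pData, unsigned int size);
extern void ede_dispatcherMain(void *hDispatcher);
extern void *hDispatcher10ms;

/* Interrupt context: the CAN RX interrupt posts a reception event
   into the thread-safe queue that feeds a dispatcher event port. */
void bsw_onCanRxInterrupt(unsigned int idxMailbox,
                          const uint8_t payload[], unsigned int DLC)
{
    mySender_postEvent(idxMailbox, payload, DLC);
}

/* Application task context, e.g., a regular 10 ms task: the call of
   the dispatcher's main function reads all events that arrived since
   the previous invocation and delegates them to the registered
   callbacks -- all in the task's context, hence race-condition free. */
void myTask10ms(void)
{
    ede_dispatcherMain(hDispatcher10ms);
}
```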
Any number of dispatcher objects can be created. One addressed use case is
an architecture with more than one (regular) application task and a
dispatcher for each task. Each CAN message is connected to the appropriate
dispatcher. The association is based on functional aspects and is made by
the application code when initially registering the events. The message
will later be handled in the application task which owns the associated
dispatcher. It's also possible to associate dispatchers with physical CAN
buses (normally several CAN buses will share a dispatcher) or with kinds
of events, e.g., message reception events as opposed to bus state change
events as opposed to message send acknowledge events. All of this is
application design.
Each dispatcher engine is owned by a dispatcher system. Any number of
systems can be created. All dispatchers owned by a given system share the
same memory sphere, i.e., they reside in the same memory space (whatever
that is on a given platform) and share the same memory access attributes.
The addressed use case is memory protected or managed systems. In such a
system we typically see separate processes, which must not or cannot
access the memories of other processes. In such an environment, each
process would create its own system. Given that the CAN driver exists in
yet another memory sphere, owned by the operating system, the OS can
safely send its events to the different processes. Of course, this
requires a connection object between event sender and dispatcher event
port that is capable of crossing the boundaries between memory spheres.
The implementation of the queue, which is part of the comFramework
distribution, supports this for the majority of operating systems and
MCUs. Where it doesn't, one has to add a dedicated, appropriate
implementation of the connection element.
Relating dispatcher systems and connection objects to specific memory
spheres is managed with the help of memory pools. This is explained in
section Memory management concept.
The application interface of the dispatcher is callback based. During
initialization, all later processed events are registered. The
registration associates the handle of the event with a callback. (The
handle is anything which the sender may use to distinguish its events.)
At run-time, any received event is notified by invoking the registered
callback. From within this callback, a set of API functions is available
that permits querying all needed information about the event. In the first
place, the payload of a message can be fetched in case of a CAN message
reception event. The second element of this API is a set of timer
functions. Timer objects can be created, started, stopped, re-triggered,
etc., and they will create another event at due time. Using timers, a
callback can easily implement timeouts for message reception or different
simple or complex send patterns for outbound messages.
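As an illustration, here's a sketch of a reception callback that maintains
a timeout timer. All API names are assumptions for the example; consult
the Doxygen documentation for the actual timer and event query functions.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical API declarations for illustration; the real comFramework
   function names and signatures may differ. */
extern const uint8_t *ede_getEventData(const void *pContext);
extern void ede_retriggerSingleShotTimer(const void *pContext,
                                         void *hTimer);

/* Timer handle, created during event registration (not shown). */
static void *hTimeoutTimer1024 = NULL;

/* Timer callback: fires only if the message stays away too long. */
static void onTimeoutMsg1024(const void *pContext)
{
    (void)pContext;
    /* ... flag a timeout error to the functional application code ... */
}

/* Reception callback for CAN message 1024. */
static void onReceiveMsg1024(const void *pContext)
{
    const uint8_t *payload = ede_getEventData(pContext);
    (void)payload;
    /* ... unpack signals, validate checksum and sequence counter ... */

    /* Re-trigger the timeout timer: the timeout event is created only
       if the next reception doesn't come in time. */
    ede_retriggerSingleShotTimer(pContext, hTimeoutTimer1024);
}
```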
For any integration of comFramework's CAN interface, the event handle
mentioned before has a high significance. Sometimes this handle is simply
the zero-based index of the mailbox in the CAN driver, or it is a
self-defined enumeration, or it could be the CAN ID, among others. What
actually holds is defined by the platform. This weak predetermination
requires a flexible way of dealing with handles. comFramework defines the
interface of a map object, which can associate a handle with the
internally used representation of events (zero-based indexes), and it
comes along with a number of implementations of this interface, which
should cover the necessities of nearly all real platforms. When
integrating the CAN interface, one only has to choose the appropriate
implementation.
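To give an idea, here's a sketch of the simplest conceivable map
implementation, for the common case that the platform's handle already is
the zero-based mailbox index; the map then degenerates to the identity.
The function names are illustrative only, not the actual interface
definition.

```c
#include <stdbool.h>

/* Registration time: associate a sender handle with the internally
   used zero-based event index. For the identity map, registration
   succeeds if and only if both agree. */
static bool myMap_addKeyValuePair(unsigned int senderHandle,
                                  unsigned int idxEdeEvent)
{
    return senderHandle == idxEdeEvent;
}

/* Run time: look up the event index for an incoming event's handle.
   The identity map makes this an O(1), memory-less operation. */
static bool myMap_getValue(unsigned int senderHandle,
                           unsigned int *pIdxEdeEvent)
{
    *pIdxEdeEvent = senderHandle;
    return true;
}
```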
It's important to understand that the generic implementation of
comFramework ends with the invocation of the callback. What the callback
does is a matter of application design. This starts with the decision
whether there is one shared callback for all messages or for all kinds of
events, or whether individual callbacks are used. In the former case,
large switch cases and/or nested data structures will be applied; in the
latter case, many different but similar functions will be needed.
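A sketch of the former design could look as follows; the API function for
querying the handle and the index names are assumptions for illustration.

```c
/* Hypothetical API function: query the handle of the notified event. */
extern unsigned int ede_getEventHandle(const void *pContext);

/* Zero-based message indexes; in practice generated from the network
   database. Names are invented. */
enum { IDX_MSG_ENGINE_STATUS, IDX_MSG_WHEEL_SPEEDS };

/* One shared callback for all reception events: branch on the handle. */
static void onAnyMessageReceived(const void *pContext)
{
    switch(ede_getEventHandle(pContext))
    {
    case IDX_MSG_ENGINE_STATUS:
        /* ... unpack and validate message engineStatus ... */
        break;
    case IDX_MSG_WHEEL_SPEEDS:
        /* ... unpack and validate message wheelSpeeds ... */
        break;
    default:
        break;
    }
}
```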
The intended use case of our open source project is that all the code and
data structures which sit between the dispatcher's API and the signal
oriented interface of the functional application code, and which depend
on the definitions of the messages in the network cluster, are generated
by the code generator, which is the core tool of this project. The code
generator parses the network specification from a set of CAN bus related
network database files (*.dbc) and can render this information in nearly
any textual representation - in particular as C source code.
We present a number of sample integrations of our CAN interface. They let
the code generator produce the interface with the functional application
code and all the callback code needed to serve this interface. The
interface with the functional application code is data oriented (there's
a global struct per message, whose fields hold the values of the
message's signals); if desired, it's a matter of only a few lines of
additional template code to shape a functional interface instead.
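For illustration, the generated, data-oriented interface for a single
message could look like the following sketch. Message name, signals and
field names are invented; the actual shape is defined by the templates
and the network database.

```c
#include <stdint.h>

/* Sketch of a generated struct for one message: the signal values plus
   the detailed transmission status mentioned above. */
typedef struct cap_msgEngineStatus_t
{
    float   speedOfRotation;  /* Scaled signal value in rpm. */
    uint8_t sequenceCounter;  /* Raw sequence counter value. */
    uint8_t checksum;         /* Raw checksum value. */

    /* Transmission status: timeout, checksum and sequence errors. */
    uint8_t stsTransmission;
} cap_msgEngineStatus_t;

/* One global struct instance per message; written by the generated
   callback code, read by the functional application code. */
extern cap_msgEngineStatus_t cap_msgEngineStatus;
```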
The samples know different message characteristics (strictly regular,
triggered by data change, or data-change triggered plus periodic fallback
behavior), and all needed data flow code is generated in a fully
automated way; the inputs are the network databases, which evidently have
to accurately specify all relevant message and signal properties.
The samples support sequence counter and checksum validation. These
elements are integrated into the callbacks and the complete control code
is generated (whereas, e.g., the actual checksum computation is
considered an external function, which is just invoked by the generated
code). Again, this only holds if having or not having a checksum and/or
sequence counter, the kind of checksum, the counter's range, etc. are all
properly specified in the network databases.
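The following sketch indicates what such generated validation code could
look like for one inbound message. Message layout, counter range and
function names are invented for the example; only the checksum
computation is the external, hand-coded function mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>

/* External, hand-coded checksum computation; the generated code just
   invokes it (illustrative signature). */
extern uint8_t chk_computeChecksum(const uint8_t data[],
                                   unsigned int noBytes);

/* Sketch of generated validation code. The positions of checksum and
   sequence counter and the counter's range (here 0..14) would all come
   from the network database. */
static bool validateMsg(const uint8_t payload[], unsigned int DLC,
                        uint8_t *pLastSqc)
{
    /* Checksum covers the payload behind the checksum byte itself. */
    if(chk_computeChecksum(&payload[1], DLC-1u) != payload[0])
        return false;  /* Checksum error. */

    /* Sequence counter: must cyclically increment in range 0..14. */
    const uint8_t sqc = payload[1] & 0x0fu;
    const bool ok = sqc == (uint8_t)((*pLastSqc + 1u) % 15u);
    *pLastSqc = sqc;
    return ok;  /* false: sequence error. */
}
```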
The sample integrations are intended to demonstrate the capabilities of
the concept. They won't be usable out of the box in any other environment
but they will be a good starting point for customizing the offered
solution to the needs of that environment.
The main folder canInterface is split into the following components, found
as sub-folders:
code. This folder contains the implementation of the dispatcher system
and engine, the event senders, the connection elements, the handle maps
and the memory pools. The folder contains only a number of C source
files; there's neither a library nor a makefile. The intention is to
integrate this code into the target project as source files.
There's the template of a configuration header file, see
canInterface/code/ede_eventDispatcherEngine.config.h.template. This file
is copied to the target project, renamed to ede_eventDispatcherEngine.config.h,
and the contained #define's are set according to the needs of the target
platform and project (e.g., alignment or API tailoring).
With the exception of the configuration file, the code in folder
code is considered fixed; no modifications of this code need to be
made.
sampleIntegrations. This folder contains source code and build scripts
for several demo applications of the CAN interface. Reproducing their
build requires a particular build tool chain; out of the box we use
Arduino 1.8 and MinGW GCC 8 for Windows.
winSampleIntegration. This is the first sample integration. It's a
Windows application, compilable with GCC. The effort needed to port
it to Linux or Mac OS will be insignificant since no operating
system specific services are used. Particularly, no multitasking is
involved: the different "application tasks" are executed strictly
sequentially; there's no preemption.
The sample itself is structured in components. These components
can in turn be distinguished according to their usefulness and
re-usability for real applications of the CAN interface. See below.
arduinoSampleIntegration and arduinoSampleIntegrationEmbeddedCoder.
The functionality of these sample integrations is nearly the same as
for winSampleIntegration, but here we use a real hardware platform
and the application builds on an existing preemptive real-time
operating system. You will see that the code generation templates
and the integration code are modified, but only to a minor extent.
The second variant demonstrates how to model the APSW interface
with the MathWorks Embedded Coder. Auto-coded *.m scripts
configure the Embedded Coder such that the CAN interface is
represented as a non-virtual bus in the model that implements the
APSW.
winTestMT. This is a Windows application, compilable with GCC.
It's mainly intended for testing the dispatcher engine
implementation, but it is designed such that it serves as an
instructive sample integration at the same time. The added value of
this sample is the different design of the callback implementation:
a more conventional concept with a few, reused, hand-coded
callbacks has been realized, which operate on large constant data
tables; these tables are generated by the code generator and contain
all relevant message and signal related information from the network
databases.
Note, a complete, runnable real-time application for the MCU MPC5748G and
the low-cost board DEVKIT-MPC5748G, with CAN driver and integrated CAN
interface, can be found at GitHub.
The main purpose of the sample integrations is to demonstrate how the
combination of the given implementation of the dispatcher engine with the
network dependent, generated parts of the code can be managed, how this
fits into a given platform environment and how it communicates with the
functional application software.
Summarizing, the files in component code can be reused out of the box in
most embedded environments: just edit the configuration header file and
compile the sources with your project. The files from the best suited
sample integration can be reused as a starting point for your project.
Particularly the code generation templates will be an asset.
The source code of this sample is separated into the following folders:
osSimulation. We present a running Windows console application. The
connection of the CAN interface to this application should resemble the
integration into a real embedded platform as much as possible. To meet
this demand, this module defines a hypothetical Basic Software (BSW)
interface with regularly called tasks and CAN related interrupts.
The CAN interface can send and receive CAN messages: sent messages are
written into a log file and received messages are purely simulated.
The simulation has a number of configurable random elements in order
to simulate irregular timing and other errors like checksum or sequence
errors.
If you consider integrating the CAN interface into your embedded
platform, then you would probably compare the structure of the
hypothetical BSW interface with your actual BSW in order to get a feeling
for the extent to which the sample fits your specific demands.
integration. This folder contains those files which are typically
needed in every environment to connect the CAN interface to the given
platform.
For this specific sample, code integration means integration with the
Windows environment, too. Therefore, you will also find some trivial
code needed to set up a console application, beginning with the C main
function.
codeGen. This folder contains all generated C code and the script
generateCode.cmd to reproduce the code generation. (Porting this
Windows script to Linux or Mac OS is trivial.) The sample uses a single
network database file only, located in the sub-folder dbcFiles.
First have a look at this network database, then browse through the
generated C code to see how the network information was translated into
program code.
The templates that control the code generation are stored in sub-folder
templates. Once you understand the structure of the generated code, you
will find it easy and straightforward to modify the templates according
to your needed modifications of the generated code. Likely, there won't
be many modifications and very likely no structural changes. The
templates are widely platform independent; only the initialization,
where messages and buses are registered with the operating system, will
differ.
APSW. This folder contains the functional code. The working assumption
is that this code is self-contained in that it builds on a data based
API, not dealing with the mechanisms which update the information in this
API or which connect this information with the outer world. This working
assumption relates to typical work environments, where the functional
code is made with tools like MathWorks MATLAB/Simulink or ETAS
ASCET.
The functional code is kept as simple as possible; the functionality
is trivial. The only aim of its specification was to involve all the
different kinds of CAN messages which should be supported, being regular,
data-change triggered or a mixture of both.
The only value of this code is that it demonstrates how application
code can link to the CAN transmitted data.
logger. Of minor interest: the simple logging mechanism which the user
interface of the sample application builds on.
environment. Of minor interest: some basic types are defined and some
GCC specific issues are tackled; just needed to make the code compilable.
The Arduino sample integrations barely differ from the Windows
integration. The main difference is that they are the better test case in
that they use real tasks with preemption. Moreover, an existing real-time
operating system was applied, so that the code generation process had to
prove that it can be configured for a given BSW interface (as far as no
hand-written integration code is in between anyway).
The Arduino samples distribute the functional code across two regular
tasks of different cycle times. It has been said above that the concept
of the dispatcher engines shapes a race-condition free programming
environment. This statement of course no longer holds if different
application tasks access the data delivered by one and the same
dispatcher object. The Arduino code points out this fact and applies
critical sections where needed to sort this out.
The Embedded Coder variant of the Arduino sample implements the APSW as a
Simulink model. The Embedded Coder generates C code, which directly
interfaces with the CAN interface.
The files of this sample are explained in the readMe of the application.
The Arduino samples and sample winSampleIntegration let the code generator
produce individual handler functions for all the messages. Each message is
registered with its own dedicated handler. Six different kinds of
transmission patterns have been specified (regular, data-change triggered
and mixed, both for in- and outbound messages) and each message gets a
dedicated instance of one of the six related handlers. The data needed by
the handlers is defined as locally to the functions as possible. We don't
see large data structures but only a lot of similar functions.
Pros and cons:
The trade-off between memory and speed favors execution speed. This
architecture means fast code since all data is directly addressable.
However, it consumes a lot of ROM space due to the manifold duplication
of the handler code. There's no particular expense in terms of RAM.
The main advantage of this architecture is the ease of maintenance. As a
matter of fact, the handlers are developed not in C but in template code,
which means C syntax intermingled with the syntax constructs of the
template language. From the syntactical point of view, writing these
templates is less convenient and more error prone than directly writing
C; on the other hand, the developed code is structurally much simpler and
this counts for more. Only a single instance of each handler needs to be
developed, and its implementation is quite easy because it doesn't deal
with complex, nested (external) data structures. It can make use of
simple local function variables and some flat static variables.
Instantiation and addressing of individual data objects is not a matter
of arrays and nested structs plus related pointer and address operations
as in C, but left to the template expansion procedures.
A more typical architecture would have only a single instance of the
handler code. The only handler would be made reentrant and it would
operate on one or more data tables with many individual entries, which
are dedicated to the different messages. An element of such a data table
would be a struct with many fields that describe all the properties of a
message: as simple as the CAN ID and as complex as the signal layout
patterns of contained checksums or sequence counters. The handler code
would first fetch the right, message related entry from the table and
would then have a lot of conditional code, controlled by the contents of
that entry.
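A sketch of what an entry of such a table could look like follows; the
fields shown are illustrative, the real set follows from the message
properties held in the network database.

```c
#include <stdint.h>

/* Sketch of one entry of the generated, constant data table that the
   single, reentrant handler would operate on. Field names and set are
   invented for illustration. */
typedef struct msgDescriptor_t
{
    uint32_t canId;              /* CAN ID of the message. */
    uint8_t  idxBus;             /* Zero-based index of the bus. */
    uint8_t  DLC;                /* Number of payload bytes. */
    uint16_t tiCycle_ms;         /* Nominal cycle time, 0 if event based. */
    uint8_t  kindOfTransmission; /* Regular, data-change, mixed, ... */
    uint8_t  idxChecksumByte;    /* Position of the checksum, if any. */
    uint8_t  sqcRange;           /* Range of the sequence counter. */
} msgDescriptor_t;

/* The complete table is constant by nature and resides in ROM. */
extern const msgDescriptor_t msgDescriptorAry[];
```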
An implementation of this architecture is presented by the sample
integration winTestMT.
The trade-off between ROM consumption and execution speed differs: much
less ROM and an about 40% higher execution time, due to the now needed
conditional code and more complex data addressing. The RAM consumption
will not differ, as the chosen algorithms and hence the needed state
information are not affected by the choice of architecture.
This more conventional architecture is fully supported by our interface
and code generation concept. The dispatcher engines don't care at all how
many callbacks are present; whether they are all different, a few, or
always the same one. The handler code is similar in structure but will
now need array indexing and/or pointer operations. More importantly, it
becomes one-time, hand-coded C code; since it is now message independent,
there's no longer any reason to generate it from a network database.
Instead, you will let the code generator generate the large data table
holding all the details about buses, messages and signals. The data is
constant by nature and the complete table can be placed in ROM.
The handler code can be derived from the samples: take an arbitrary
instance of each of the different kinds of handlers, copy them to the
hand-coded part of your project and replace the access to local data
elements (referenced by name only) with indexed access to the data table.
The root of all data access will be an index into the data table. All
messages get a linear, zero-based index during the registration process
and this index can be fetched from within the handler by means of the
available API.
If you have distinct handlers for all the supported transmission
patterns, then you can still let the code generator generate the code
which registers the messages with the appropriate handler. New templates
for the generation of the data table will have to be written from
scratch; the effort for this is small.
If the specification in the network databases introduces more
transmission patterns, then the templates will need according
modification. One could imagine bursts of messages (a due message is sent
a configurable number of times for the sake of redundancy) or sending on
demand (by invocation of a dedicated trigger function), etc. Most of this
will be straightforward refinement of the existing template code.
Particularly for small systems (Arduino!) it's not necessarily the best
choice to integrate the dispatcher engine code. A shared buffer and simple
global suspend/resume interrupt functions are often sufficient to safely handle
the CAN data flow. What remains in every environment is the need for the
pack and unpack functions to link the signal based functional code to the
message based hardware drivers.
In such an environment, the code generator can be applied to generate the
needed shared buffers, the pack and unpack functions and the API. The API
can be modelled as global data structures, as in our sample integrations,
or as a functional API.
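For illustration, a generated pack function for a hypothetical message
could look like this sketch; the message layout and all names are
invented.

```c
#include <stdint.h>

/* Illustrative global API struct for one outbound message. */
extern struct cap_msgX_t
{
    float   speedOfRotation;
    uint8_t sequenceCounter;
} cap_msgX;

/* Sketch of a generated pack function: scale the signal values and
   place them into the payload according to the byte order and bit
   layout specified in the network database. */
void cap_packMsgX(uint8_t payload[4])
{
    /* Signal speedOfRotation: 16 Bit, little-endian, 0.5 rpm/Bit. */
    const uint16_t raw = (uint16_t)(cap_msgX.speedOfRotation / 0.5f);
    payload[0] = (uint8_t)raw;
    payload[1] = (uint8_t)(raw >> 8);

    /* Sequence counter in the low nibble of byte 2. */
    payload[2] = cap_msgX.sequenceCounter & 0x0fu;
    payload[3] = 0u;  /* E.g., space for a checksum. */
}
```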
The trade-off between ROM consumption and CPU load can be shifted further
in the direction of less ROM and higher execution time by not generating
individual pack and unpack functions for each message. Instead, a data table
can be designed, which holds an entry per message. Such an entry will hold a
list of signal descriptors with byte order and bit layout and scaling
information. A one-time hand-coded pack and unpack function pair would now
operate on this table.
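The following sketch indicates the idea, much simplified (byte-aligned,
little-endian signals only); the descriptor fields and names are
invented.

```c
#include <stdint.h>

/* Sketch of a signal descriptor as described above; a real entry would
   hold the full bit layout and byte order information. */
typedef struct signalDescriptor_t
{
    uint8_t idxFirstByte;  /* Position of the signal in the payload. */
    uint8_t noBytes;       /* Size of the signal. */
    float   factor;        /* Scaling: world value = factor*raw + offset. */
    float   offset;
} signalDescriptor_t;

/* One-time hand-coded unpack function, operating on the table. */
float sig_unpack(const signalDescriptor_t *pDesc, const uint8_t payload[])
{
    uint32_t raw = 0u;
    for(unsigned int i = 0u; i < pDesc->noBytes; ++i)
        raw |= (uint32_t)payload[pDesc->idxFirstByte + i] << (8u*i);
    return pDesc->factor * (float)raw + pDesc->offset;
}
```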
A data based API becomes unlikely in such an architecture, as the pack
and unpack functions will probably be signal related, without the
possibility of using the best fitting data type for each signal; they
could take message and signal index as input arguments. The code
generator would generate meaningful enumerations for message and signal
indexes, or it would hide all of this in signal related preprocessor
macros.
If the application code is made with a graphical data flow model based
system like MathWorks Simulink (with dSPACE TargetLink or MathWorks
Embedded Coder) or ETAS ASCET then a machine readable specification of the
API is required for import into these application code generators. Our
code generator can generate such a specification (typically an XML or
MATLAB M script file). This is just the same for any imaginable
architecture and according templates are provided with our sample
integrations.
The code generator is controlled by command line. If this is combined with
the code generation scripts of such a model based design tool and a
makefile based compilation script then even a low budget platform like
Arduino can get much of the characteristics of a rapid prototyping
system.
A dispatcher system owns a number of dispatcher engines. Each engine
registers the events it's going to process, and it owns these events. The
event dispatching process is run by regularly invoking the dispatcher's
main function, and each dispatcher's main function may be called from a
different execution context. The use case is having a dedicated
dispatcher for each application task which requires event dispatching.
This concept makes the entire dispatching and callback processing
race-condition free inside the set of events owned by a dispatcher.
Evidently, the user provided callback functions need to be reentrant
unless the architecture with distinct callbacks for all events is chosen.
Writing reentrant handlers is straightforward, as a handler will always
operate on data which directly belongs to the processed event.
The sender of events will usually belong to another context than the one
invoking the dispatcher's main function. As of writing, the comFramework
distribution contains only a single implementation of a connection object
between sender and dispatcher: the multi-tasking and multi-core safe
event queue.
Some prerequisites apply to using the queue from the distribution. If
they are not fulfilled on the targeted platform, then one would need to
implement a dedicated, appropriate connection object. A connection object
implements the sender and receiver port interfaces,
ede_eventSenderPort_t and ede_eventReceiverPort_t, and it is supposed to
provide at the latter port the events which had been posted before at the
former one.
If the queue's prerequisites are fulfilled, then the complete event
processing code can be designed and implemented race-condition free and
data integrity is granted.
The objects of the CAN interface are allocated in one or more memory
pools. For good reasons, dynamic memory management is unwanted in embedded
platforms. Therefore, a safe and most simple heap design has been chosen
for the implementation of the memory pool, which is part of the
distribution of comFramework. The implementation is compliant with usual
requirements for embedded software.
The memory pool has a fixed-size chunk of RAM, which is defined at pool
creation time by the client code (the embedding integration code). For
this purpose, it'll probably use a static array of the desired number of
bytes and with appropriate memory access attributes. Memory allocation
starts at the lowest address of the chunk. Each memory request from the
CAN interface is served by taking the next number of bytes from the
chunk. An error is returned if the pool is exhausted, and there's no
method to return allocated memory to the pool. Fragmentation is excluded
by principle.
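The described design corresponds to a simple linear allocator. A minimal
sketch follows, without the alignment handling of the real implementation
and with invented names:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the described heap design: a fixed chunk, allocation by
   advancing an offset, no free operation. */
typedef struct memPool_t
{
    uint8_t *pChunk;      /* Chunk of RAM provided by the client code. */
    size_t   size;        /* Total size of the chunk in bytes. */
    size_t   noBytesUsed; /* Offset of the next free byte. */
} memPool_t;

void *mem_malloc(memPool_t *pPool, size_t noBytes)
{
    if(pPool->noBytesUsed + noBytes > pPool->size)
        return NULL;  /* Pool exhausted: reported as an error. */

    void *p = &pPool->pChunk[pPool->noBytesUsed];
    pPool->noBytesUsed += noBytes;  /* No free: no fragmentation. */
    return p;
}
```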
Because the memory pool gets the initial chunk of memory from the client
code, which naturally has full awareness of all platform particularities,
the implementation of the pool can be kept clear of all platform
dependent constructs which may be needed to control or define memory
access attributes (e.g., type decorations, pragmas or platform specific
heap APIs).
A memory pool offers a diagnostic API, which can return the number of
used and still unused bytes of RAM. Any reasonable application of the CAN
interface has an upper bound of memory consumption; if the diagnostic
interface is queried for the used amount of RAM, then the pool size can
be set to exactly this number of bytes (i.e., in the next compilation of
the code) and there's no risk that the pool will ever be exhausted.
The core idea is the upper bound of memory consumption. This relates to
the fact that there must be no dependency of the RAM consumption on
run-time data or events. This is given for all usual applications and
easy to prove: the RAM consumption depends on the number of registered
messages, the number of dispatcher engines, the size of the events, the
number of timer objects and a few configuration details at compile time.
All of this is determined in a static way by the source code and not
affected by run-time effects. If the network database changes and code is
regenerated, the memory consumption will change, but only then. Our
concept is to use the memory pool diagnostics in DEBUG compilation to
report the actual consumption and to make the pool size accurately fit in
the production code. Of course, the accurate settings need to undergo an
adaptation after each code regeneration. (Very sophisticated code
generation templates could even anticipate the required pool size, as
they have all the knowledge about the number of messages, etc.) One has
to understand that memory allocation errors can be definitely excluded
for all the future once the compiled software has passed the very first
run-time test after code regeneration and build.
One exception needs to be mentioned. The timer API knows destruction of
timer objects. As said, there's no concept of freeing memory for reuse.
Instead, the timer implementation has a list of destroyed objects. This
list is inspected first when a(nother) timer object is created. If
there's a (destroyed) object in the list, then it is resuscitated rather
than allocating new memory in the pool. This is a bit of dynamic memory
allocation. However, even if event handlers make use of dynamic creation
and destruction of timer objects, then in by far the most cases there's
still an upper bound for the memory consumption, but it now becomes data
and event dependent when this limit will be reached. The question is:
when can we safely query the diagnostic API to get the maximum memory
consumption? If testing is too short, then we will not get the absolute
maximum yet.
To avoid this problem, the timer API has been designed such that dynamic
timer creation and deletion can be completely avoided. Instead, a timer
can be suspended and re-triggered by the application code; the object as
such exists permanently. All ever needed timer objects are created in the
initialization phase, and at run-time not a single byte will be allocated
in the pool any more. The diagnostic API can be safely queried for the
total memory consumption already at the end of the code initialization.
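A sketch of this recommended pattern follows; the timer API names are
assumptions for illustration.

```c
/* Hypothetical timer API declarations; the actual comFramework names
   and signatures may differ. */
extern void *ede_createTimer(const void *pContext,
                             void (*callback)(const void *));
extern void ede_retriggerTimer(const void *pContext, void *hTimer,
                               unsigned int ti_ms);

static void *hTimerTimeout = (void*)0;

static void onTimeout(const void *pContext)
{
    (void)pContext;
    /* ... handle the timeout ... */
}

/* Initialization: create all ever needed timers once and for all. */
static void onInit(const void *pContext)
{
    hTimerTimeout = ede_createTimer(pContext, onTimeout);
}

/* Reception callback: only re-trigger the permanently existing timer;
   no memory is allocated in the pool after initialization. */
static void onReceive(const void *pContext)
{
    ede_retriggerTimer(pContext, hTimerTimeout, /* ti_ms */ 30u);
}
```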
The implementation of the memory pool depends on the common machine
alignment. This has to be configured at compile time in the configuration
header file of the CAN interface.
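For illustration, the relevant setting could look like the following
excerpt; the macro name is an assumption, the actual #define's are found
in the distributed template file
ede_eventDispatcherEngine.config.h.template.

```c
/* Excerpt from a possible ede_eventDispatcherEngine.config.h; the macro
   name is illustrative only. */

/* Common machine alignment: the memory pool rounds each allocation up
   to a multiple of this value; typically 4 on a 32 Bit MCU. */
#define EDE_COMMON_MACHINE_ALIGNMENT 4u
```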
The constructors of the different objects ask for a memory pool to use. In
a simple system one would typically create one large pool object and use
it for all the constructor calls.
In systems where the sender of events resides in another memory sphere
than the receiver, one would use at least two pools, one from each of the
memory spheres, the sender's and the receiver's. Here's a typical
example: the CAN reception events originate from inside the operating
system and they are consumed by a dispatcher running in an application
task. The queue from the distribution should be used for delivering the
events. OS and application have different memory attributes. Two memory
pools are instantiated. The first one allows full read/write access to
the OS; the application only gets read access. The other pool grants
read/write access to the application. The queue constructor takes the OS
pool for the sender side and the application pool for the receiver side,
and it is guaranteed that the application code won't ever be able to harm
the OS memories or to make the OS code fail or get stuck, e.g., by
corrupting some link pointers in the queue implementation.
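A sketch of this construction follows. All API names are assumptions; in
a real system the two arrays would additionally be given the appropriate
memory access attributes by platform specific means (sections, pragmas).

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical constructors for illustration only. */
extern void *mem_createPool(uint8_t *pChunk, size_t noBytes);
extern void *vsq_createQueue(void *pPoolSender, void *pPoolReceiver,
                             unsigned int maxQueueLength);

static uint8_t heapOs[2048];   /* OS sphere: r/w for OS, r/o for app. */
static uint8_t heapApp[4096];  /* Application sphere: r/w for app. */

void myInitCanInterface(void)
{
    void *pPoolOs  = mem_createPool(heapOs, sizeof(heapOs));
    void *pPoolApp = mem_createPool(heapApp, sizeof(heapApp));

    /* Sender side in OS memory, receiver side in application memory:
       the application can't corrupt the OS owned queue elements. */
    void *hQueue = vsq_createQueue(pPoolOs, pPoolApp, /* maxLen */ 32u);
    (void)hQueue;
}
```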
The constructor of a dispatcher system takes a memory pool, which is
unavoidably used to also create all the dispatcher engines of that
system. Therefore, different dispatchers owned by a system will always
reside in the same memory sphere. In a memory protected system, it'll
therefore be impossible to run two dispatchers from the same system in
different processes or on different cores. (And because of cache
behavior, also not on different cores in a system without memory
protection.)
All dispatcher engines of a system share the same memory pool. If dynamic
timer creation and deletion is used -- opposed to our recommendation, see
above -- by dispatchers which run in different execution contexts, then
mutual exclusion of these contexts is required from the implementation of
the applied pool. No such code is required otherwise, as all memory
allocation is completed in the initialization phase, which is
single-threaded anyway.
The implementation from the comFramework distribution offers mutual
exclusion of memory requesting callers. Enabling it for the pool is an
argument of the constructor. Entering and leaving a critical section of
code is done via a publicly defined interface, and the integrating code
needs to provide an implementation of that interface which is useful on
the targeted platform.
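On a simple single-core platform, such an implementation could boil down
to globally suspending and resuming interrupts, as in the following
sketch; the interface shape and the BSW function names are assumptions.

```c
#include <stdint.h>

/* Hypothetical BSW services of the targeted platform. */
extern uint32_t bsw_suspendAllInterrupts(void);
extern void bsw_resumeAllInterrupts(uint32_t stateOnEntry);

static uint32_t intStateOnEntry_;

/* Possible implementation of the pool's critical section interface. */
void myPool_enterCriticalSection(void)
{
    intStateOnEntry_ = bsw_suspendAllInterrupts();
}

void myPool_leaveCriticalSection(void)
{
    bsw_resumeAllInterrupts(intStateOnEntry_);
}
```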
A memory pool implements a given public interface, so it's
straightforward to implement other, dedicated pools if the given
platform's particularities should for some reason not allow using the
implementation from the distribution described before.
The error handling concept distinguishes between run-time problems and
implementation/compile-time problems. The former are handled by the
return value of the affected function, whereas the latter are mainly
caught by assertions.
Run-time problems are those which might appear because of external data
or events. Only a few possible error states in the dispatcher engine are
of this kind. The most prominent example is the queue overrun. This event
is reported by return value to the event posting context and is available
to the functional application code via a (naturally asynchronous)
diagnostic API. Data repair is impossible. This failure is prototypic in
that it depends solely on run-time conditions, like the timing of events,
and can't be anticipated at compile time.
By far the most error states are of the implementation/compile-time kind.
These errors appear only because of the configuration of the source code
and do not depend on outer world data or events. They will
unconditionally occur as soon as the according program path is taken. The
most prominent example is the memory allocation error. As explained
above, this error will not occur because of data or timing effects; it's
either in the software at compile time or not. Errors of this
characteristic can be caught most efficiently by assertions; there must
be no CPU consuming run-time test code in the production code for this
kind of error.
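The pattern looks like in this sketch; the allocation function is
illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative allocation function (see the memory pool sketch above). */
extern void *mem_malloc(void *pPool, size_t noBytes);

void *allocObject(void *pPool, size_t noBytes)
{
    void *p = mem_malloc(pPool, noBytes);

    /* A memory allocation error can only result from a wrongly
       configured pool size. DEBUG compilation: the assertion fires in
       the very first test run after code regeneration. PRODUCTION
       compilation: no run-time test code at all. */
    assert(p != NULL);
    return p;
}
```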
Using run-time assertions makes it unavoidable to have an initial testing
phase in DEBUG compilation. If the software design doesn't make use of
dynamic timer creation and deletion (see section Memory management
concept) then this test phase could even be quite short; nearly all of
these errors and surely the memory allocation errors would be reported
immediately in the system initialization phase.
It is not prescribed what callbacks and timers are used for. That our
samples use them to implement timeout recognition for inbound CAN
messages and send time patterns for outbound messages only documents the
intended use case of these elements.
Many embedded applications use state machines as a fundamental design
pattern. The functionality is split into many distinct, small operations,
which are scheduled depending on the current state and on external data
and events. Timing events typically play an important role in embedded
applications. The implementation of an application as a single-threaded
state machine shares some characteristics with applications designed for
preemptive multi-tasking; in the first place, both can show
pseudo-parallel behavior, like preemptive and non-preemptive operating
systems.
It's well possible to use our dispatcher engine not only to control the
upper layers of the CAN communication but to host such an application at
the same time. A portion of the callbacks and timers would be used just
as shown in our samples, while further callbacks and timers control the
"tasks" of the functional application code. Furthermore, the dispatched
events need not necessarily be CAN related but may originate from other
hardware devices. All of this running on a single-threaded, RTOS free
platform can become a cheap, easy, less error-prone (entirely
race-condition free) and sufficient solution, particularly on small
platforms.
The code of the dispatcher engine as it is today is not particularly
intended for small systems; the chosen data structures support fast
execution rather than little RAM consumption. We provide an Arduino
integration, but the 8k of RAM of an Arduino board will soon become a
limiting factor.
A suitable customization decision for small systems with little RAM can
be to not use the dispatcher engine and to only benefit from the code
generator. This has been discussed above.
Another idea is to optimize the dispatcher engine code for less RAM
consumption. Some obvious things can be done easily.
The implementation makes use of the C concept of a somewhat undetermined
integer: it uses int and unsigned int where the actual range of the
integers doesn't really matter, so that the compiler will choose the
native word size of the underlying machine. This aims at portability and
optimal execution time. However, in many cases where int is used, even a
one-byte integer would suffice - it's often about the number of buses and
CAN messages, and serving 250 CAN messages is a lot. By carefully
replacing all appearances of int with a more specific type like uint8_t
from stdint.h, a significant amount of RAM can be saved. Conditional
preprocessor code as well as compile- and run-time assertions can be
applied to avoid unrecognized overruns of the limited integers.
The other obvious potential for saving RAM at the cost of execution speed
is the substitution of pointers with indexes. The needed cross references
between objects are implemented by fast, direct pointing. Most of these
code locations can be changed just like that towards the use of indexes;
messages and buses are already held in arrays.
Timer objects are allocated in the memory pool; here a structural change
would be required to introduce indexes. One would allocate a fixed table
of timers. The application code could state the table size, as it already
does for messages and buses. This change would create the demand for a
diagnostic API returning the maximum table utilization, in order to make
the table size manageable.
In most cases a one-byte integer will suffice for the indexes, which
again leads to a significant RAM saving, although at the cost of
execution speed.
Saving ROM at the cost of execution speed and maintainability can be
achieved by alternative designs of the callback code; this has been
discussed above.
The CAN interface engine is implemented in C and documented with Doxygen.
The Doxygen pages are most useful when implementing the project specific
code of the CAN interface, i.e., the initialization, the instantiation of
the required dispatchers and the callbacks to handle the notifications
from the dispatchers. The Doxygen pages can be found at
https://svn.code.sf.net/p/comframe/code/canInterface/trunk/doc/doxygen/html/index.html
or in your local installation at
comFramework/canInterface/doc/doxygen/html/index.html