A real-time inference engine for temporal logic specifications, able to acquire, process, and generate any binary or real-valued signal through POSIX IPC, files, or UNIX sockets. Specifications of signals and dynamic systems are represented as special graphs and executed in real time, with a predictable sampling time of a few milliseconds. Applications include real-time signal processing, dynamic system control, state machine modeling, and verification of logical properties. The accepted language provides timed logic and mathematical operators, conditional operators, interval operators, bounded quantifiers, and parametrization of signals.
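Since the engine's I/O runs over standard POSIX mechanisms, a producer can be written in plain C. The following is a minimal sketch of feeding a binary signal to the engine through a POSIX message queue; the queue name `/tie_input` and the one-byte-per-sample format are illustrative assumptions, not the engine's documented protocol.

```c
/* Hypothetical signal producer: sends ten samples of a binary
 * signal to a POSIX message queue assumed to be read by the
 * engine. On older glibc, link with -lrt. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Open the (assumed) input queue created by the engine. */
    mqd_t mq = mq_open("/tie_input", O_WRONLY);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return EXIT_FAILURE;
    }

    /* Send one sample per sampling period. */
    for (int i = 0; i < 10; i++) {
        char sample = (char)(i % 2);      /* alternating 0/1 signal */
        if (mq_send(mq, &sample, 1, 0) == -1) {
            perror("mq_send");
            break;
        }
        usleep(5000);                     /* ~5 ms sampling time */
    }

    mq_close(mq);
    return EXIT_SUCCESS;
}
```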
Features
- Fast Inference Engine
- Real Time or Batch Run
- Temporal Logical Networks
- Specification Execution
- Optional Multithreading
- Communication through Linux IPC, Files or Sockets (see the socket sketch after this list)
- Sampling Time of a Few Milliseconds
- Compiler Included
- Graphical Shell Included
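For socket-based communication, here is a minimal sketch of a consumer reading the engine's output over a UNIX domain socket; the socket path `/tmp/tie_output.sock` and the one-byte-per-sample framing are assumptions for illustration.

```c
/* Hypothetical output consumer: connects to a UNIX domain socket
 * assumed to be served by the engine and prints each sample. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) {
        perror("socket");
        return EXIT_FAILURE;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/tie_output.sock",
            sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        close(fd);
        return EXIT_FAILURE;
    }

    /* Print each output sample as it arrives. */
    char sample;
    while (read(fd, &sample, 1) == 1)
        printf("%d\n", sample);

    close(fd);
    return EXIT_SUCCESS;
}
```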
License
GNU Affero General Public License
User Reviews
- Unique in its genre, this inference engine provides low-level parallelism in real time when executing temporal logic specifications. It is not easy to configure and interface, but it does include a graphical shell. Its temporal logic compiler is minimal in its diagnostics, yet flexible and optimizing. Communication through message ports is very fast, although optimizing it seems impossible due to flaws in the POSIX IPC library. Automatic scheduling of timed actions inferred from the logic specification, without explicit programming, is an effective way to reproduce the behaviour of binary real-time systems. Currently, the most apparent limit of this approach is the amount of memory the inference engine uses when referring to very large intervals.