The Gaze Tracking Library is a framework for open-source eye tracking using off-the-shelf components such as webcams and video cameras. It supports both head-mounted and remote setups. The network API relies on TCP/IP and UDP (a .NET client is included).
This is a simple BACnet browser that currently only sends a Who-Is message and builds a tree of devices from the I-Am messages it receives. A list of diagnostic routines can then be run against the discovered devices.
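The Who-Is/I-Am exchange is standard BACnet/IP device discovery. As a hedged illustration (a Python sketch, not taken from this project's code, which is a .NET application), the canonical 12-byte global-broadcast Who-Is frame can be built and sent like this:

```python
import socket

BACNET_PORT = 47808  # 0xBAC0, the default BACnet/IP port

def build_whois_frame() -> bytes:
    """Build a global-broadcast Who-Is request (BACnet/IP)."""
    bvlc = bytes([0x81, 0x0B, 0x00, 0x0C])              # BVLC: Original-Broadcast-NPDU, total length 12
    npdu = bytes([0x01, 0x20, 0xFF, 0xFF, 0x00, 0xFF])  # version 1, global DNET 0xFFFF, hop count 255
    apdu = bytes([0x10, 0x08])                          # Unconfirmed-Request, service choice 8 = Who-Is
    return bvlc + npdu + apdu

def send_whois(port: int = BACNET_PORT) -> None:
    """Broadcast the Who-Is; devices answer with I-Am frames on the same port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(build_whois_frame(), ("255.255.255.255", port))
    sock.close()
```

A browser like this one would then listen on the same UDP port, decode each incoming I-Am to extract the device instance number, and add a node to its device tree.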
the intelligent predictive text entry platform
Presage (formerly Soothsayer) is an intelligent predictive text entry system. It generates predictions by modelling natural language as a combination of redundant information sources: each predictive algorithm produces its own predictions, and Presage merges them to compute the probabilities of the words most likely to be entered next. Presage's modular and extensible architecture allows its language model to be extended and customized with statistical, syntactic, and semantic predictive algorithms. Its predictive capabilities are implemented by predictive plugins, which use services provided by the platform to implement multiple prediction techniques.
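To illustrate the merging step described above (a hypothetical Python sketch, not Presage's actual C++ API), predictions from several sources can be combined by linear interpolation of their word probabilities:

```python
def merge_predictions(predictions, weights):
    """Linearly interpolate word probabilities from several predictors.

    predictions: list of dicts mapping word -> probability
    weights: one weight per predictor (should sum to 1)
    Returns words sorted by merged probability, highest first.
    """
    merged = {}
    for dist, w in zip(predictions, weights):
        for word, p in dist.items():
            merged[word] = merged.get(word, 0.0) + w * p
    return sorted(merged, key=merged.get, reverse=True)

# Two made-up information sources: an n-gram model and a recency-of-use model
ngram = {"the": 0.5, "a": 0.3, "an": 0.2}
recency = {"an": 0.6, "the": 0.4}
print(merge_predictions([ngram, recency], [0.7, 0.3]))  # → ['the', 'an', 'a']
```

Because each source only contributes its weighted share, a word favored by several sources at once ("the" above) outranks a word that any single source favors strongly.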
Haytham is an open source gaze tracker
Haytham is an open-source, video-based eye tracker suited for head-mounted or remote setups. It provides real-time gaze estimation in the user's field of view or on the computer display by analyzing eye movement, and offers gaze-based interaction with computer screens in fully mobile situations. The software is written in C#, using the Emgu and AForge image-processing libraries. LICENSE: Haytham is released under a dual license. FREE VERSION: The source code is released under GPL version 3.0. In short, this means that any distributed project that includes or links any portion of the Haytham source code must be released with its source code under a GPLv3-compatible license. COMMERCIAL VERSION: If you want to use Haytham in a closed-source commercial product, you must purchase a license. Please contact the contract adviser at the IT University of Copenhagen (email@example.com).
Blaze is an application launcher that sets itself apart from the others by being able to automate recurrent tasks performed in the file system, or even in any application, on Microsoft Windows.
Full Body Interaction Framework
FUBI is a framework for full body interaction using a depth sensor such as the Microsoft Kinect with OpenNI/NiTE or the Kinect SDK. It further supports the Leap Motion controller. FUBI is written in C++ and includes a C# wrapper. Releases are tested on Windows 8.1, but Linux Code::Blocks project files also exist. FUBI's main functionality is gesture and posture recognition according to four gesture categories: 1. Static postures: a configuration of several joints (positions or orientations). 2. Linear/angular movements: linear movement of joints with a specific direction and speed, or angular movement around an axis. 3. Combinations of postures and movements: sets of 1 and 2 combined in a sequence of states with specific time constraints. 4. Symbolic gestures: gestures with complex shapes that are defined by recorded sample data. If you use FUBI in a scientific project, please cite one of the related publications mentioned on the project website.
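As a rough sketch of category 2 (linear movements) — hypothetical Python rather than FUBI's C++ API — a recognizer can compare a joint's frame-to-frame velocity against a target direction and minimum speed:

```python
import math

def matches_linear_movement(prev_pos, cur_pos, dt, target_dir,
                            min_speed, max_angle_deg=45.0):
    """Check whether a joint moved roughly along target_dir at or above min_speed.

    prev_pos/cur_pos: (x, y, z) joint positions from two consecutive frames
    dt: time between the frames in seconds
    target_dir: required movement direction (need not be normalized)
    """
    delta = tuple(c - p for c, p in zip(cur_pos, prev_pos))
    dist = math.sqrt(sum(d * d for d in delta))
    if dt <= 0 or dist == 0:
        return False
    speed = dist / dt
    # Angle between the actual movement and the required direction
    norm_t = math.sqrt(sum(t * t for t in target_dir))
    cos_angle = sum(d * t for d, t in zip(delta, target_dir)) / (dist * norm_t)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return speed >= min_speed and angle <= max_angle_deg

# A hand moving upward 5 cm in one 30 fps frame (~1.5 m/s) vs. an "up" template
print(matches_linear_movement((0, 1.0, 2.0), (0, 1.05, 2.0),
                              1 / 30, (0, 1, 0), 1.0))  # → True
```

A combination recognizer (category 3) would then chain checks like this one into a sequence of states, each with its own time constraints.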
AIML Verbot Converter - Converts AIML files to Verbot KnowledgeBase (VKB) files. Visit http://www.verbots.com/ and http://www.alicebot.org to learn more about AIML and Verbots. Implemented in C# (.NET).
EBBA is a project aiming to develop an advanced chatbot by combining AIML, 3D facial expressions, speech synthesis, speech recognition, and IQ-test-solving functionality.
Bigram applications based on language models produced by SRILM from a Chinese Wikipedia corpus, including a Chinese word segmenter, a word-based (not character-based) Traditional-Simplified Chinese converter, and a Chinese syllable-to-word converter.
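To sketch the segmentation idea (a toy Python example with made-up unigram log-probabilities, simplified from the SRILM bigram models the project actually uses), dynamic programming over a word list picks the highest-scoring segmentation:

```python
# Toy unigram log-probabilities standing in for a trained language model
LOGPROB = {"研究": -2.0, "生命": -3.0, "研究生": -4.0, "命": -5.0, "生": -4.5}

def segment(text):
    """Best word segmentation of text by dynamic programming.

    best[i] holds (score, segmentation) for the prefix text[:i].
    """
    best = {0: (0.0, [])}
    for i in range(1, len(text) + 1):
        for j in range(i):
            word = text[j:i]
            if word in LOGPROB and j in best:
                score = best[j][0] + LOGPROB[word]
                if i not in best or score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best.get(len(text), (float("-inf"), []))[1]

print(segment("研究生命"))  # → ['研究', '生命']
```

With word-level scores, "研究 生命" ("research / life", score −5) beats "研究生 命" ("graduate student / fate", score −9); a bigram model refines this further by conditioning each word's score on the previous word.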
This library, written in C#, provides access to USB HID devices from the .NET environment. It uses the system-supplied functions in hid.dll. Features include sending and receiving input, output, and feature reports, as well as detection of device plugging and unplugging.
A Multimodal Approach to Controlling an Unmanned Aerial Vehicle
Welcome to the Multimodal Unmanned Aerial Vehicle Controller (MMAR). This project intends to develop a multimodal interface that gives a human operator the capability to control a QuadRotor UAV efficiently and more easily. During this project we use a simulated QuadRotor called AirRobot in USARSim, a simulation environment for robot development. The project involves several modalities, including touch, pose tracking, gesture recognition, and voice recognition, and we try to utilize the capabilities of devices available today, such as Android phones and the Microsoft Kinect. Using a Windows application called the MMUAVC Fusion App, the system gathers all data from the sensors and issues control commands to the QuadRotor.
This project aims to provide a demonstration application showing how to implement a multi-mouse user interface with the help of the Microsoft Multimouse SDK. The program runs a very simple Tic-Tac-Toe game for two players with two mice. (.NET Framework 3.0 required.)
A C# implementation of some simple tabletop applications that showcase the functionality of a multi-touch display. The applications support multiple simultaneous users and demonstrate the possible interactions for collaboration and play on a multi-touch surface.
Remote Touchpad is software that gives your touch-screen mobile phone the ability to act as a wireless touchpad and keyboard for your computer. It can also run in presentation remote-control mode. All you need is a Bluetooth module.
Home Automation System for a Typical Apartment
SAMI is an extensible, voice-controlled home automation system that seamlessly controls the devices in your apartment or house, without installation or monthly fees. For more information about how to use her, see the documentation tab!
An environment for the visual design and deployment of a smart house
Project moved to http://x13home.github.io/