Open Source Computer Vision Library
The Open Source Computer Vision Library (OpenCV) offers more than 2500 algorithms, extensive documentation, and sample code for real-time computer vision. It runs on Windows, Linux, Mac OS X, Android, and iOS. Homepage: opencv.org. Q&A forum: answers.opencv.org. Documentation: docs.opencv.org. Please pay special attention to the tutorials: http://docs.opencv.org/3.2.0/d9/df8/tutorial_root.html. Books about OpenCV are listed at http://opencv.org/books.html.
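As a taste of the kind of algorithm OpenCV bundles, here is a toy pure-Python Sobel gradient computation (OpenCV itself exposes this as `cv2.Sobel`); the 4x4 test image is illustrative only:

```python
# Toy Sobel edge detection in pure Python (for real work, use cv2.Sobel).

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x(image):
    """Horizontal-gradient response for an H x W grayscale image (list of lists)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical step edge: the gradient responds strongly at the boundary.
img = [[0, 0, 255, 255]] * 4
g = sobel_x(img)
```

On the flat regions the response is zero; only the brightness step produces a gradient.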
reacTIVision is a computer vision framework for the fast and robust tracking of markers attached to physical objects, and for the creation of multi-touch surfaces. It was designed for the rapid development of table-based tangible user interfaces.
Marvin is an image processing framework that provides features for image and video frame manipulation, multithreaded image processing, image filtering and analysis, unit testing, performance analysis, and the addition of new features via plug-ins.
GSVideo is a cross-platform library for the Processing programming language that provides video support (movie playback, video capture, creation of movie files) through the use of the GStreamer multimedia framework.
Why the Ogena video/film editor/maker is useful
Ogena is an all-in-one video/film editor and maker: a simple-to-use app for editing video, film, and movies and for creating animations and slideshows. Video editing is made intuitive and simple, with more than 55 special effects. Supported formats include mp4, avi, flv, mov, and more. Given a text description, it can animate an image and find audio to match. It can also visualize (as a slideshow or animation) streams (radio: asx, pls, m3u, etc.) and news feeds (RSS, XML). The program is protected by international law but comes free to use. Send questions about Ani Magix to vaks@ccwf[.]cc[.]utexas[.]edu, or activation requests to performer@df(.)lth.s(e).
Scene is a computer vision framework that performs background subtraction and object tracking, using two traditional algorithms and three more recent algorithms based on neural networks and fuzzy classification rules. For each detected object, Scene sends TUIO messages to one or more client applications. The present release features GPU-accelerated versions of all the background subtraction methods and morphological post-processing of the object blobs with dilation and erosion filters, implemented in OpenCL. The framework was mainly designed as a toolkit for the rapid development of interactive art projects that explore the dynamics of complex environments. The Scene GUI compiles and runs under Windows, Linux, and Mac OS X, and is available in both 32-bit and 64-bit versions.
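A minimal sketch of the pipeline stages described above, assuming simple frame differencing in place of Scene's actual background-subtraction algorithms and a plain 3x3 binary dilation for the morphological step (the GPU/OpenCL versions are omitted):

```python
# Sketch of a background-subtraction pipeline: difference against a background
# model, threshold into a binary mask, then dilate the resulting blob.

THRESHOLD = 30  # illustrative value, not Scene's

def subtract(background, frame):
    """Binary foreground mask: 1 where the frame differs from the background model."""
    return [[1 if abs(f - b) > THRESHOLD else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def dilate(mask):
    """3x3 binary dilation, as in the morphological post-processing step."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))))
             for x in range(w)] for y in range(h)]

bg    = [[0, 0, 0, 0]] * 4
frame = [[0, 0, 0, 0],
         [0, 200, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
mask = subtract(bg, frame)   # one foreground pixel
blob = dilate(mask)          # grown into a 3x3 blob
```

Erosion is the dual operation (`all` instead of `any`); dilation followed by erosion closes small holes in detected blobs.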
LED Matrix controller software
PixelController – a matrix control project by Michael Vogt, (c) 2010-2013. The main goal of this application is to provide easy-to-use matrix controller software that creates stunning visuals! Keywords: Arduino, Teensy, OSC, MIDI, PureData, Visuals, RealTime
The open source, multimodal interactive "Sensitive Artificial Listener" dialogue system created by the EU project SEMAINE.
LEA is a lightweight eyetracking algorithm library (hence the name) written in and for Java. LEA is able to track eye movements with an ordinary webcam and returns relative movements (up, down, ...). It is also able to compensate for slight head motions.
Simple algorithm for a realtime interactive visual cortex for painting
A paint program where the canvas is the visual cortex of a simple kind of artificial intelligence. You paint with the mouse into its dreams, and it responds by gradually changing what you painted. There will also be an API for using it with other programs as a general high-dimensional space, where each pixel's brightness is its own dimension. Bayesian nodes have exactly three children because that is all that is needed to do NAND in a fuzzy way: Bayes' rule acts like NAND at certain extremes, and NAND can be used to build any logical system. In this early version, I'm still working on edge detection and on recognizing the same shapes at different brightnesses. This will be a module of the bigger Human AI Net project and will be used to add realtime, intuitive, high-dimensional intelligence to audio and visual interactions with the user.
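The NAND claim can be sketched with a fuzzy NAND. The Zadeh (min) t-norm used here is one common choice and an assumption, not necessarily what the project uses; at the 0/1 extremes it matches boolean NAND, and NOT, AND, and OR all follow from it:

```python
# Fuzzy NAND using the Zadeh (min) t-norm -- one common choice, assumed here.

def nand(a, b):
    """Fuzzy NAND on truth values in [0, 1]; matches boolean NAND at 0 and 1."""
    return 1 - min(a, b)

# NAND is functionally complete, so the other gates follow from it alone:
def negate(a):
    return nand(a, a)                  # NOT(a)   = 1 - a

def conj(a, b):
    return negate(nand(a, b))          # AND(a, b) = min(a, b)

def disj(a, b):
    return nand(negate(a), negate(b))  # OR(a, b)  = max(a, b)
```

This is the sense in which a node computing only NAND over a few inputs can, in principle, build up any logical system.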
nemosomen is a framework for designing open source, network-based, multimedia (video/OpenGL/sound/MIDI) realtime toys: a suite of tools for distributed development and processing of media. It tries to shift most of the working process into realtime processes.
3i is an open source augmented reality system built on top of the Open Handset Alliance's "Android" smartphone platform.
ANDIAMO (ANimador DIgital Analógico MOdular = Digital-Analog Modular Animator) is a customized software tool for live audiovisual performance. Its main goal is to integrate video (live and pre-recorded) with hand drawing (entered via a graphics tablet).
Sphere surface layers of visual cortex approach maximum info density
Near the surface (event horizon) of a black hole, information density reaches its maximum: on the order of one unit per four squared Planck lengths (with some translation to qubits). Similarly, our imagination is the set of all possible things we can draw onto our densest layer of visual cortex as electricity patterns. Bigger layers have more neurons to handle those possibilities. A Black Hole Cortex is a kind of visual cortex whose neuron-layer density varies like the density at various radii from a black hole. What we think our eyes see, the imagination, is the densest and smallest layer. Sphere surfaces outside it recursively have more neurons and more surface area, but less density, since the signal must eventually be dimension-reduced to high-level ideas, much as roughly 10000 Wikipedia page names cover most parts of the world. We can think of Wikipedia as a layer above our brains: a global sphere surface of large surface area (a cortex layered on billions of minds) and small density (the 10000 most important pages).
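The bound alluded to here is the Bekenstein-Hawking entropy, roughly one bit per four squared Planck lengths of horizon area. A small script makes the scaling concrete (the 1 m radius is purely illustrative):

```python
import math

L_PLANCK = 1.616255e-35  # Planck length in meters (CODATA value)

def holographic_bits(radius_m):
    """Maximum information, in bits, storable on a sphere of the given radius:
    area / (4 * Planck_length^2 * ln 2), i.e. Bekenstein-Hawking entropy in bits."""
    area = 4.0 * math.pi * radius_m ** 2
    return area / (4.0 * L_PLANCK ** 2 * math.log(2))

# Capacity grows with surface area, not volume: doubling the radius
# quadruples the bit count, matching the layered-sphere picture above.
bits = holographic_bits(1.0)
```

The key structural point for the analogy is that the bound is a surface law: each larger sphere surface has more total capacity even though its per-area density is fixed.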
Sensors-to-MIDI management system. This project generates MIDI output from sensor input. The output can then be used in MIDI processors such as GarageBand, Resolume, and other real-time multimedia software. The input can be taken from d
With JVideoViaWeb you can access digital and web cameras over the Web through the browser.
Java API for tracking moving objects in live or pre-recorded video. Comes with a test application with adjustable motion-tracking parameters. Download maintained at http://tech.joelbecker.net/computervision/6-videomotiontracking2 .
The project aims to provide a Java library that enables OpenGL animation and rendering of mandelbrot fractals using LWJGL.
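The per-pixel computation such a renderer performs (in the LWJGL case, typically inside a shader) is the classic escape-time iteration; a plain-Python sketch:

```python
def escape_time(c, max_iter=100):
    """Iterate z = z^2 + c from z = 0; return the iteration count at which
    |z| exceeds 2 (the escape radius), or max_iter if c stays bounded
    (i.e. c is taken to be inside the Mandelbrot set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

inside  = escape_time(0 + 0j)   # the origin never escapes
outside = escape_time(2 + 2j)   # escapes on the first iteration
```

A renderer maps each pixel to a complex coordinate `c` and colors it by its escape count, which is what makes the boundary detail visible.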
Argo-Panoptes is a distributed surveillance system that allows remote control of video using a smartphone or web browser as the user interface. The system is able to alert the user about failures or motion detection by email, SMS, or MMS.
Sabbia acquires a video stream from a webcam and transforms the movements detected into MIDI messages. This way it is possible to control a MIDI device or a MIDI software by hand movements.
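One plausible shape for such a pipeline, sketched with hypothetical names and constants (Sabbia's actual motion measure and mapping are not documented here): measure the frame-to-frame difference, then clamp and scale it into the 0..127 MIDI controller range:

```python
# Hypothetical motion-to-MIDI mapping: frame differencing -> MIDI CC value.

def motion_amount(prev, frame):
    """Mean absolute pixel difference between consecutive grayscale frames."""
    total = sum(abs(a - b) for pr, fr in zip(prev, frame) for a, b in zip(pr, fr))
    return total / (len(frame) * len(frame[0]))

def to_midi_cc(amount, full_scale=255.0):
    """Clamp-and-scale a motion amount into the 0..127 MIDI control-change range."""
    return min(127, int(amount / full_scale * 127))

prev  = [[0, 0], [0, 0]]
frame = [[255, 255], [255, 255]]
cc = to_midi_cc(motion_amount(prev, frame))  # full-frame motion saturates at 127
```

The resulting value would be sent as a MIDI control-change message, whose data byte is constrained to 0..127 by the MIDI standard.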
A graphical toolkit that uses the GPU (Graphics Processing Unit) to perform general-purpose operations.
Visage is a human-computer interface that aims to replace the traditional mouse with the face. Using a webcam, Visage turns movement of the face into movement of the mouse pointer; left/right eye blinks fire left/right mouse-click events.
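One plausible way to turn face displacement into pointer motion, sketched with assumed dead-zone and gain values (these are illustrative, not Visage's documented parameters): suppress small jitter and amplify deliberate movement:

```python
# Hypothetical face-to-pointer mapping with a dead zone and linear gain.

DEAD_ZONE = 3   # ignore face movements of at most this many pixels (jitter)
GAIN = 4        # pointer pixels per face pixel beyond the dead zone

def pointer_delta(face_dx):
    """Map a one-axis face displacement to a pointer displacement."""
    if abs(face_dx) <= DEAD_ZONE:
        return 0
    sign = 1 if face_dx > 0 else -1
    return sign * (abs(face_dx) - DEAD_ZONE) * GAIN
```

Applied per axis each frame, this keeps the pointer still while the head is nearly still, which is what makes blink-triggered clicks usable.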
This project aims to mirror files related to open source development for Digital TV (ISDB-T International) in Argentina, including Ginga-J, and related open source development tools.
A free, java-based wearable computer and augmented reality system. Designed to be constructed out of common off-the-shelf items.
Java based video capture software for the inexpensive (~$80) Hawking Technologies HNC230G Wireless-G Network camera. Motion detection and archiving of captured still images supported. Catch 'em in the act!