Brief overview
CEBRA, introduced in the paper "Learnable Latent Embeddings for Joint Behavioural and Neural Analysis", is a modern machine-learning framework created to link observed behaviors with neural activity. It produces compact latent representations that reveal relationships between actions and brain signals, making it useful for answering key questions in systems neuroscience.
Core capabilities
- Reconstructs visual experiences and decodes natural movies from recordings of the visual cortex.
- Integrates both calcium-imaging and electrophysiology datasets as inputs.
- Learns high-quality latent spaces that jointly represent behavior and neural activity.
- Handles data collected within a single session as well as experiments spread over multiple sessions.
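The workflow behind these capabilities follows a familiar fit/transform pattern: train an embedding model on neural recordings, then project data into the learned latent space. As a hedged stand-in (pure NumPy linear projection via PCA, not CEBRA's actual nonlinear contrastive model), the sketch below illustrates that pattern on synthetic spike counts; the function names and data shapes are illustrative assumptions.

```python
# Hedged sketch: CEBRA's real models are nonlinear and trained contrastively;
# a plain NumPy PCA stands in here only to show the fit/transform embedding pattern.
import numpy as np

def fit_linear_embedding(neural, n_latents=3):
    """Fit a linear projection (PCA via SVD) from neural activity to a latent space."""
    mean = neural.mean(axis=0)
    # Right singular vectors of the centered data give the principal directions.
    _, _, vt = np.linalg.svd(neural - mean, full_matrices=False)
    components = vt[:n_latents]          # (n_latents, n_neurons)
    return mean, components

def transform(neural, mean, components):
    """Project recordings into the learned latent space."""
    return (neural - mean) @ components.T

rng = np.random.default_rng(0)
# Synthetic recording: 500 time bins of Poisson spike counts from 40 neurons.
spikes = rng.poisson(2.0, size=(500, 40)).astype(float)
mean, comps = fit_linear_embedding(spikes, n_latents=3)
latents = transform(spikes, mean, comps)
print(latents.shape)  # (500, 3)
```

The same two-step structure applies whether the input is calcium imaging or electrophysiology, and whether the model is fit on one session or jointly across several.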
Distinctive features
- Kinematic-feature analysis to tie movement variables to neural patterns.
- Rapid decoding pipelines for near-real-time readouts from neural embeddings.
- Spatial mapping tools that expose where in the latent space different behaviors and stimuli lie.
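Once behaviors occupy distinct regions of the latent space, a fast decoder can read labels back out of new embeddings. A k-nearest-neighbour classifier is one common, lightweight choice for such a readout; the NumPy sketch below is an illustrative assumption, not CEBRA's specific decoding pipeline, and the synthetic clusters are fabricated for the example.

```python
# Hedged sketch: decode a behavioural label from latent embeddings with a
# k-nearest-neighbour majority vote (a common readout; not CEBRA's only option).
import numpy as np

def knn_decode(train_emb, train_labels, test_emb, k=5):
    """Predict a label for each test point by majority vote over its k nearest training points."""
    preds = []
    for point in test_emb:
        dists = np.linalg.norm(train_emb - point, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        values, counts = np.unique(nearest, return_counts=True)
        preds.append(values[np.argmax(counts)])
    return np.array(preds)

rng = np.random.default_rng(1)
# Two synthetic behavioural states occupying separate regions of a 3-D latent space.
emb_a = rng.normal(loc=-1.0, scale=0.3, size=(100, 3))
emb_b = rng.normal(loc=+1.0, scale=0.3, size=(100, 3))
train_emb = np.vstack([emb_a, emb_b])
train_labels = np.array([0] * 100 + [1] * 100)
test_emb = np.array([[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]])
print(knn_decode(train_emb, train_labels, test_emb))  # [0 1]
```

Because the classifier only computes distances in the low-dimensional latent space, this kind of readout is cheap enough for near-real-time use.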
Typical use cases
CEBRA is well suited for researchers studying how behavior maps onto neural dynamics, for building decoders of sensory input from brain recordings, and for combining behavioral tracking with large-scale neural measurements. It supports experiment types ranging from sensory decoding and motor analyses to multi-day longitudinal recordings.
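Under the hood, CEBRA trains its encoder with a contrastive, InfoNCE-style objective that pulls embeddings of related samples (e.g., time points with similar behavior) together and pushes unrelated ones apart. The NumPy computation below evaluates one such loss for a single reference/positive/negatives triple; the dot-product similarity, temperature default, and toy vectors are illustrative assumptions rather than CEBRA's exact formulation.

```python
# Hedged sketch of an InfoNCE-style contrastive loss, the family of objectives
# CEBRA optimises; the similarity function and toy vectors here are illustrative.
import numpy as np

def infonce_loss(reference, positive, negatives, temperature=1.0):
    """-log softmax probability of the positive sample among all candidates,
    scored by similarity to the reference embedding."""
    candidates = np.vstack([positive[None, :], negatives])
    sims = candidates @ reference / temperature
    # Log-softmax, numerically stabilised by subtracting the max similarity.
    shifted = sims - sims.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[0]

rng = np.random.default_rng(2)
ref = np.array([1.0, 0.0, 0.0])
pos = np.array([0.9, 0.1, 0.0])   # behaviourally similar sample: high similarity
neg = rng.normal(size=(10, 3))    # dissimilar samples drawn at random
loss = infonce_loss(ref, pos, neg)
print(float(loss))
```

Minimising this loss over many sampled triples is what shapes the latent space so that behaviorally meaningful structure becomes linearly visible and decodable.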