Autonomous-Agents is a research-focused repository that collects implementations, experiments, and academic resources on autonomous multi-agent systems and intelligent robotics. The project explores how multiple agents can cooperate and interact with complex environments through machine learning, imitation learning, and multimodal sensing, and it includes frameworks that integrate visual perception, tactile sensing, and spatial reasoning to guide robotic agents during manipulation and collaborative tasks.

A central theme is the fusion of different sensory modalities with techniques such as Feature-wise Linear Modulation (FiLM) and graph-based attention. These methods let agents combine visual and geometric information while maintaining awareness of the spatial relationships between agents and objects.
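To make the Feature-wise Linear Modulation idea concrete: one modality (e.g. tactile or language input) predicts a per-channel scale and shift that modulates feature maps from another modality (e.g. vision). A minimal sketch, with all shapes and names being illustrative assumptions rather than the repository's actual code:

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift one modality's
    feature maps with parameters predicted from another modality."""
    return gamma * features + beta

# hypothetical shapes: batch of 2, 4-channel feature maps of size 3x3
feats = np.ones((2, 4, 3, 3))
gamma = np.full((2, 4, 1, 1), 2.0)   # per-channel conditioning scale
beta  = np.full((2, 4, 1, 1), -1.0)  # per-channel conditioning shift
out = film(feats, gamma, beta)       # broadcasts to (2, 4, 3, 3)
```

In practice `gamma` and `beta` would come from a small network conditioned on the second modality; the modulation itself is just this affine transform.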

Features

  • Multi-agent learning frameworks for cooperative robotic tasks
  • Integration of visual, tactile, and geometric sensor inputs
  • Graph attention mechanisms for spatial reasoning among agents
  • Diffusion-based action decoding for robotic manipulation
  • Adaptive attention models that adjust sensory weighting during tasks
  • Experimental implementations for autonomous learning and agent coordination
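The graph-attention mechanism listed above can be sketched as a single GAT-style layer over agent/object nodes, where each node attends only to its neighbours in an adjacency graph. This is a generic illustration under assumed shapes, not the repository's implementation:

```python
import numpy as np

def graph_attention(h, adj, W, a):
    """Single-head GAT-style attention over agent/object nodes.
    h: (N, F) node features, adj: (N, N) 0/1 adjacency,
    W: (F, Fp) projection, a: (2*Fp,) attention vector."""
    z = h @ W
    N = z.shape[0]
    # pairwise logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s
    e = np.where(adj > 0, e, -1e9)  # mask non-neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z                # aggregate neighbour features

# hypothetical toy graph: 3 fully connected nodes
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))
out = graph_attention(h, np.ones((3, 3)),
                      rng.standard_normal((4, 5)),
                      rng.standard_normal(10))  # -> (3, 5)
```

Masking with the adjacency matrix before the softmax is what restricts each agent's attention to spatially related agents and objects.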

License

MIT License


Additional Project Details

Registered

2026-03-09