Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster
Deep learning optimization library: makes distributed training easy
DoWhy is a Python library for causal inference
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Train a 26M-parameter GPT from scratch in just 2h
Fast, flexible and easy to use probabilistic modelling in Python
Toloka-Kit is a Python library for working with Toloka API
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
Extensible, parallel implementations of t-SNE
Run any Llama 2 locally with a Gradio UI on GPU or CPU from anywhere
Interpretable prompting and models for NLP
JAX-based neural network library
Library for training machine learning models with privacy for training data
JAX-based neural network library
MII makes low-latency and high-throughput inference possible
Alfred workflow using ChatGPT, DALL·E 2 and other models for chatting
Implementation of Recurrent Interface Network (RIN)
Renren Film and Television bot, fully connected to Renren resources
Models and examples built with TensorFlow
Operating LLMs in production
Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
Unofficial Python package that returns responses from Google Bard
Useful extra functionality for TensorFlow 2.x maintained by SIG-addons
Superduper: Integrate AI models and machine learning workflows
Implementation of Phenaki Video, which uses Mask GIT