Distribute and run LLMs with a single file
Framework and no-code GUI for fine-tuning LLMs
File parser optimised for LLM ingestion with no loss
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Low-code framework for building custom LLMs and neural networks
AI-powered CLI git wrapper, boilerplate code generator, and chat history
Open-source, high-performance Mixture-of-Experts large language model
Experimental search engine for conversational AI such as parl.ai