PygmalionAI
PygmalionAI is a community dedicated to creating open-source projects based on EleutherAI's GPT-J 6B and Meta's LLaMA models. In simple terms, Pygmalion makes AI models fine-tuned for chatting and roleplaying. The currently supported Pygmalion model is the 7B variant, based on Meta AI's LLaMA. Requiring only 18 GB of VRAM (or less), Pygmalion offers better chat capability than much larger language models while using relatively minimal resources. Our curated dataset of high-quality roleplaying data helps make your bot an optimal RP partner. Both the model weights and the training code are fully open-source, and you can modify and redistribute them for any purpose. Language models, including Pygmalion, generally run on GPUs, since they need fast memory and massive processing power to output coherent text at an acceptable speed.
Learn more
botx
Easily train intelligent AI agents on your data. Let them recognize and react to user intent, trigger internal tools, or launch scripted dialogs with forms. Remove manual work and automate repetitive tasks, such as responding to messages and filling out forms, with the power of LLMs. Let AI process your documents, draft new ones, extract key information, and perform evaluations or analyses using dozens of ready-to-use templates and examples. Pull in data through one of our 15 integrations, process it with the best-suited models, or design scripted conversations to make sure you stay on track when it matters most. Build scripted dialogs that always behave the same way. Effortlessly connect your models in a clear, intuitive visual representation. Create no-code LLM chatbots, AI agents, workflows, and automations. Seamlessly integrate GPT-4 and a myriad of other robust third-party and open-source models.
Learn more
StableVicuna
StableVicuna is the first large-scale open-source chatbot trained via reinforcement learning from human feedback (RLHF). StableVicuna is a further instruction-finetuned and RLHF-trained version of Vicuna v0 13b, which is an instruction-finetuned LLaMA 13b model.
To achieve StableVicuna's strong performance, we use Vicuna as the base model and follow the typical three-stage RLHF pipeline outlined by Stiennon et al. and Ouyang et al. Concretely, we further train the base Vicuna model with supervised finetuning (SFT) using a mixture of three datasets:
OpenAssistant Conversations Dataset (OASST1), a human-generated, human-annotated assistant-style conversation corpus comprising 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
GPT4All Prompt Generations, a dataset of 437,605 prompts and responses generated by GPT-3.5 Turbo;
And Alpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003.
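As a rough illustration of the SFT data mixture, proportional sampling weights can be derived from the dataset sizes listed above. This is a minimal sketch under an assumed size-proportional weighting; the actual mixture ratios used to train StableVicuna are not stated here.

```python
# Illustrative sketch only (not StableVicuna's training code): compute
# size-proportional sampling weights for the three SFT datasets. The
# proportional weighting itself is an assumption for illustration.

dataset_sizes = {
    "OASST1": 161_443,   # human-generated assistant conversations
    "GPT4All": 437_605,  # GPT-3.5 Turbo prompt/response pairs
    "Alpaca": 52_000,    # text-davinci-003 instruction demonstrations
}

total = sum(dataset_sizes.values())
mixture_weights = {name: n / total for name, n in dataset_sizes.items()}

for name, weight in mixture_weights.items():
    print(f"{name}: {weight:.1%} of SFT examples")
```

Under this assumption, GPT4All Prompt Generations would dominate the mixture (roughly two-thirds of examples), which is one reason real pipelines often rebalance datasets rather than sampling purely by size.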
Learn more
ChatShape
We partnered with OpenAI to bring you ChatShape, an AI chatbot builder Chrome extension. It lets you create custom ChatGPT chatbots that know your data. Just open the extension on any set of webpages you want the bot to read, click "add current site" on each, then click "generate bot", and you'll get a shareable link to your own chatbot that you can ask anything. It works on most pages tested, including private wiki pages like Quip, Confluence, Jira, and Notion. ChatShape does not work with Google Docs. "Add Current Site" grabs all the visible, copyable text on the current web page; it does not crawl other links on the page or the entire domain. That feature will be coming separately soon.
Learn more