Audience
Developers interested in a 5 million token context window large language model
About LTM-1
Magic’s LTM-1 enables context windows 50x larger than transformers. Magic has trained a Large Language Model (LLM) that can take in gigantic amounts of context when generating suggestions. For our coding assistant, this means Magic can now see your entire code repository.
Larger context windows allow AI models to reference more explicit, factual information and their own action history. We hope to use this research to improve reliability and coherence.
Popular Alternatives
Baichuan-13B
Baichuan-13B is an open-source, commercially usable large language model with 13 billion parameters, developed by Baichuan Intelligent as the successor to Baichuan-7B. It achieves the best results among models of its size on authoritative Chinese and English benchmarks. This release includes two versions: a pre-trained base model (Baichuan-13B-Base) and an aligned chat model (Baichuan-13B-Chat).
Larger size, more data: Baichuan-13B expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open-source 13B model with the largest amount of training data to date. It supports both Chinese and English, uses ALiBi positional encoding, and has a context window length of 4,096 tokens.
Learn more
DataGemma
DataGemma represents a pioneering effort by Google to enhance the accuracy and reliability of large language models (LLMs) when dealing with statistical and numerical data. Launched as a set of open models, DataGemma leverages Google's Data Commons, a vast repository of public statistical data, to ground its responses in real-world facts. This initiative employs two innovative approaches: Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG). The RIG method integrates real-time data checks during the generation process to ensure factual accuracy, while RAG retrieves relevant information before generating responses, thereby reducing the likelihood of AI hallucinations. By doing so, DataGemma aims to provide users with more trustworthy and factually grounded answers, marking a significant step towards mitigating the issue of misinformation in AI-generated content.
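DataGemma's actual retrieval pipeline is not shown here, but the core RAG idea can be sketched: retrieve relevant facts from a store before generation and prepend them to the prompt. The tiny fact list, keyword scoring, and prompt format below are purely illustrative assumptions, not the Data Commons API.

```python
# Minimal RAG sketch: retrieve facts first, then ground the prompt in them.
# The fact store and keyword-overlap scoring are toy assumptions, not
# DataGemma's real retrieval over Google's Data Commons.

FACTS = [
    "World population in 2023 was about 8.0 billion.",
    "The Eiffel Tower is 330 metres tall.",
    "Python was first released in 1991.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Score each fact by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(FACTS, key=lambda f: -len(q_words & set(f.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so a generator can ground its answer in them."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How tall is the Eiffel Tower?")
```

The grounded prompt would then be passed to the LLM, which answers using the retrieved context rather than relying solely on its parametric memory.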
Learn more
LTM-2-mini
LTM-2-mini is a model with a 100M token context window. 100M tokens equals ~10 million lines of code or ~750 novels.
For each decoded token, LTM-2-mini’s sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for a 100M token context window.
The contrast in memory requirements is even larger: running Llama 3.1 405B with a 100M token context requires 638 H100s per user just to store a single 100M token KV cache. In contrast, LTM requires a small fraction of a single H100’s HBM per user for the same context.
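The 638-GPU figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes Llama 3.1 405B's published architecture (126 layers, 8 KV heads via grouped-query attention, head dimension 128), 2-byte fp16/bf16 cache entries, and 80 GB of HBM per H100; with these assumptions the result lands within a few percent of the quoted 638 (the exact figure depends on rounding conventions such as GB vs GiB).

```python
# Back-of-the-envelope KV-cache sizing for Llama 3.1 405B at 100M tokens.
# Architecture numbers are the published ones; byte size and per-GPU HBM
# are assumptions for this estimate.

layers = 126         # transformer layers
kv_heads = 8         # grouped-query attention KV heads
head_dim = 128       # dimension per attention head
bytes_per_val = 2    # fp16/bf16
tokens = 100_000_000

# Per token: keys + values (factor of 2) across all layers and KV heads.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_val  # ~0.5 MB
total_bytes = bytes_per_token * tokens                              # ~51.6 TB

h100_hbm = 80e9  # 80 GB of HBM per H100
gpus_needed = total_bytes / h100_hbm
print(round(gpus_needed))  # prints 645 under these assumptions
```

At roughly half a megabyte of KV cache per token, a 100M token context consumes tens of terabytes, which is why hundreds of GPUs are needed just to hold the cache.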
Learn more
CodeQwen
CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based decoder-only language model pre-trained on a large amount of code data, with strong code generation capabilities and competitive performance across a series of benchmarks. It supports long-context understanding and generation with a context length of 64K tokens, covers 92 programming languages, and provides excellent performance on tasks such as text-to-SQL and bug fixing. You can chat with CodeQwen in just a few lines of code with transformers: build the tokenizer and the model from pretrained checkpoints, then use the generate method with the help of the chat template provided by the tokenizer. Following our previous practice, chat models apply the ChatML template. The model completes code snippets according to the given prompts, without any additional formatting.
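The ChatML template mentioned above can be illustrated without loading the model. The sketch below formats a conversation the way a ChatML-style chat template does; the role names and system prompt are illustrative, and in real use you would call `tokenizer.apply_chat_template` from transformers rather than formatting by hand.

```python
# Minimal ChatML formatting sketch: each turn is wrapped in
# <|im_start|>{role}\n{content}<|im_end|> markers, and the prompt ends
# with an open assistant turn for the model to complete.
# With a real CodeQwen tokenizer, use tokenizer.apply_chat_template instead.

def to_chatml(messages: list[dict], add_generation_prompt: bool = True) -> str:
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
prompt = to_chatml(messages)
```

The resulting string is then tokenized and passed to the model's generate method, which completes the open assistant turn.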
Learn more
Integrations
Company Information
Magic AI
Founded: 2022
United States
magic.dev/blog/ltm-1
Product Details
Platforms Supported
SaaS
On-Premises
Training
Documentation