Alternatives to ESMFold

Compare ESMFold alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to ESMFold in 2024. Compare features, ratings, user reviews, pricing, and more from ESMFold competitors and alternatives in order to make an informed decision for your business.

  • 1
    GPT-4o

    OpenAI

    GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially strong at vision and audio understanding compared to existing models. A minimal API usage sketch follows below.
    Starting Price: $5.00 / 1M tokens
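    The sketch below shows one way to exercise GPT-4o's mixed text-and-image input through the OpenAI Python SDK. It is a minimal sketch, assuming the openai package is installed, OPENAI_API_KEY is set in the environment, and the image URL is a placeholder to replace.

      # Minimal sketch: text + image input to GPT-4o via the OpenAI Python SDK.
      # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              {
                  "role": "user",
                  "content": [
                      {"type": "text", "text": "Describe this image in one sentence."},
                      # Placeholder URL; swap in a real, publicly reachable image.
                      {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                  ],
              }
          ],
      )
      print(response.choices[0].message.content)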
  • 2
    Cohere

    Cohere AI

    Build natural language understanding and generation into your product with a few lines of code. The Cohere API provides access to models that read billions of web pages and learn to understand the meaning, sentiment, and intent of the words we use. Use the Cohere API to write human-like text by completing a prompt or filling in blanks. You can write copy, generate code, summarize text, and more. Compute the likelihood of text and retrieve representations from the model. Use the likelihood API to filter text based on chosen categories or criteria. With representations, you can train your own downstream models on a wide variety of domain-specific natural language tasks. The Cohere API can compute the similarity between pieces of text, and make categorical predictions by comparing the likelihood of different text options. The model has multiple lenses through which to view ideas, so that it can recognize abstract similarities between concepts as distinct as DNA and computers. See the embedding-similarity sketch below.
    Starting Price: $0.40 / 1M Tokens
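    Below is a minimal sketch of the "representations" and similarity idea described above, using the cohere Python SDK and numpy; the embed model name and the input_type parameter are assumptions that may differ by SDK version.

      # Minimal sketch: semantic similarity from Cohere text embeddings.
      # Assumes `pip install cohere numpy`; model name and parameters may vary by SDK version.
      import numpy as np
      import cohere

      co = cohere.Client("YOUR_API_KEY")  # placeholder API key

      texts = ["DNA stores genetic information.", "Computers store data in binary."]
      resp = co.embed(texts=texts, model="embed-english-v3.0", input_type="search_document")
      a, b = (np.array(v) for v in resp.embeddings)

      # Cosine similarity between the two representations.
      similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
      print(f"cosine similarity: {similarity:.3f}")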
  • 3
    Gopher

    DeepMind

    Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans. As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose.
  • 4
    GPT-3.5

    OpenAI

    GPT-3.5 is the next evolution of OpenAI's GPT-3 large language model. GPT-3.5 models can understand and generate natural language. We offer four main models with different levels of power suitable for different tasks. The main GPT-3.5 models are meant to be used with the text completion endpoint; we also offer models that are specifically meant to be used with other endpoints. Davinci is the most capable model family and can perform any task the other models can perform, often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models. A minimal completion-endpoint sketch follows below.
    Starting Price: $0.0200 per 1000 tokens
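    As a minimal sketch of the text completion endpoint mentioned above, the snippet below calls a GPT-3.5 completion model through the OpenAI Python SDK; the exact model name is an assumption, since available model names change over time.

      # Minimal sketch: a GPT-3.5 call on the text completion endpoint.
      # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      completion = client.completions.create(
          model="gpt-3.5-turbo-instruct",  # assumed completion-endpoint model name
          prompt="Summarize for a general audience: photosynthesis converts sunlight into chemical energy.",
          max_tokens=60,
          temperature=0.3,
      )
      print(completion.choices[0].text.strip())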
  • 5
    Gemini Ultra
    Gemini Ultra is a powerful new language model from Google DeepMind. It is the largest and most capable model in the Gemini family, which also includes Gemini Pro and Gemini Nano. Gemini Ultra is designed for highly complex tasks, such as natural language processing, machine translation, and code generation. It is also the first language model to outperform human experts on the Massive Multitask Language Understanding (MMLU) test, obtaining a score of 90%.
  • 6
    PanGu-Σ

    Huawei

    Significant advancements in the field of natural language processing, understanding, and generation have been achieved through the expansion of large language models. This study introduces a system which utilizes Ascend 910 AI processors and the MindSpore framework to train a language model with over a trillion parameters, specifically 1.085T, named PanGu-Σ. This model, which builds upon the foundation laid by PanGu-α, takes the traditionally dense Transformer model and transforms it into a sparse one using a concept known as Random Routed Experts (RRE). The model was efficiently trained on a dataset of 329 billion tokens using a technique called Expert Computation and Storage Separation (ECSS), leading to a 6.3-fold increase in training throughput via heterogeneous computing. Experimentation indicates that PanGu-Σ sets a new standard in zero-shot learning for various downstream Chinese NLP tasks. An illustrative sketch of the random-routing idea follows below.
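    To make the Random Routed Experts idea concrete, here is an illustrative PyTorch toy, not the PanGu-Σ implementation: a feed-forward block whose tokens are dispatched to experts chosen by a fixed random routing table keyed on token id.

      # Illustrative toy only (assumed structure, not PanGu-Σ's actual code):
      # each token id is mapped once, at random, to one expert, and that fixed
      # assignment decides which feed-forward expert processes the token.
      import torch
      import torch.nn as nn


      class RandomRoutedExperts(nn.Module):
          def __init__(self, d_model, d_ff, num_experts, vocab_size):
              super().__init__()
              self.experts = nn.ModuleList(
                  nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                  for _ in range(num_experts)
              )
              # Fixed random token-to-expert table (a buffer, not a learned router).
              self.register_buffer("route", torch.randint(0, num_experts, (vocab_size,)))

          def forward(self, hidden, token_ids):
              # hidden: (batch, seq, d_model); token_ids: (batch, seq)
              out = torch.zeros_like(hidden)
              expert_ids = self.route[token_ids]
              for i, expert in enumerate(self.experts):
                  mask = expert_ids == i
                  if mask.any():
                      out[mask] = expert(hidden[mask])
              return out


      layer = RandomRoutedExperts(d_model=64, d_ff=256, num_experts=4, vocab_size=1000)
      h = torch.randn(2, 8, 64)
      ids = torch.randint(0, 1000, (2, 8))
      print(layer(h, ids).shape)  # torch.Size([2, 8, 64])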
  • 7
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web text, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. Features of the Qwen-7B series include high-quality pretraining data: Qwen-7B is pretrained on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens covering plain text and code across general and professional domains. It also offers strong performance: compared with models of similar size, Qwen-7B outperforms competitors on a series of benchmarks evaluating natural language understanding, mathematics, coding, and more. A minimal loading sketch using Hugging Face transformers follows below.
    Starting Price: Free
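    Since the Qwen-7B weights are openly released, a minimal sketch of loading the chat variant with Hugging Face transformers is shown below; it assumes the "Qwen/Qwen-7B-Chat" repository id is still current, that trust_remote_code is acceptable, and that a GPU with enough memory is available.

      # Minimal sketch: generating text with Qwen-7B-Chat via transformers.
      # Assumes `pip install transformers accelerate` plus the model's own extra
      # dependencies, and a GPU with sufficient memory.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen-7B-Chat"  # assumed repo id on the Hugging Face Hub
      tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, device_map="auto", trust_remote_code=True
      ).eval()

      prompt = "Briefly explain what a transformer language model is."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=128)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))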
  • 8
    ChatGPT

    OpenAI

    ChatGPT is a language model developed by OpenAI. Trained on a diverse range of internet text, it generates human-like responses to a wide variety of prompts and can be used for natural language processing tasks such as question answering, conversation, and text generation. The model has a transformer architecture, which has been shown to be effective in many NLP tasks. In addition to generating text, ChatGPT can be fine-tuned for specific NLP tasks such as question answering, text classification, and language translation, allowing developers to build powerful NLP applications that perform specific tasks more accurately. ChatGPT can also process and generate code.
  • 9
    Giga ML

    Giga ML

    We just launched the X1 Large series of models. Giga ML's most powerful model is available for pre-training and fine-tuning with on-prem deployment. Since we are OpenAI-compatible, your existing integrations with LangChain, LlamaIndex, and all others work seamlessly. You can continue pre-training LLMs with domain-specific data such as books or company docs. The world of large language models (LLMs) is rapidly expanding, offering unprecedented opportunities for natural language processing across various domains. However, some critical challenges have remained unaddressed. At Giga ML, we proudly introduce the X1 Large 32k model, a pioneering on-premise LLM solution that addresses these critical issues. A sketch of the OpenAI-compatible calling pattern follows below.
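    The OpenAI-compatibility claim is the key integration detail, so the sketch below points the standard OpenAI Python SDK at a self-hosted endpoint; the base_url, API key, and model name are hypothetical placeholders, not real Giga ML values.

      # Minimal sketch of the "OpenAI-compatible" pattern: reuse the OpenAI SDK
      # against an on-prem endpoint. All endpoint and model names below are
      # hypothetical placeholders.
      from openai import OpenAI

      client = OpenAI(
          base_url="https://llm.internal.example.com/v1",  # hypothetical on-prem endpoint
          api_key="not-needed-on-prem",                     # placeholder credential
      )

      response = client.chat.completions.create(
          model="x1-large-32k",  # hypothetical model identifier
          messages=[{"role": "user", "content": "Summarize our incident runbook for new hires."}],
      )
      print(response.choices[0].message.content)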
  • 10
    GPT-4

    OpenAI

    GPT-4 (Generative Pre-trained Transformer 4) is a large-scale language model released by OpenAI in March 2023. GPT-4 is the successor to GPT-3 and part of the GPT-n series of natural language processing models, trained on a large corpus of text to produce human-like text generation and understanding capabilities. Unlike most other NLP models, GPT-4 does not require additional training data for specific tasks. Instead, it can generate text or answer questions using only its own internally generated context as input. GPT-4 has been shown to be able to perform a wide variety of tasks without any task-specific training data, such as translation, summarization, question answering, sentiment analysis, and more.
    Starting Price: $0.0200 per 1000 tokens
  • 11
    GPT-3

    OpenAI

    Our GPT-3 models can understand and generate natural language. We offer four main models with different levels of power suitable for different tasks. Davinci is the most capable model, and Ada is the fastest. The main GPT-3 models are meant to be used with the text completion endpoint. We also offer models that are specifically meant to be used with other endpoints. Davinci is the most capable model family and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models.
    Starting Price: $0.0200 per 1000 tokens
  • 12
    Adept

    Adept

    Adept is an ML research and product lab building general intelligence by enabling humans and computers to work together creatively. Designed and trained specifically for taking actions on computers in response to your natural language commands. ACT-1 is our first step towards a foundation model that can use every software tool, API and website that exists. Adept is building an entirely new way to get things done. It takes your goals, in plain language, and turns them into actions on the software you use every day. We believe that AI systems should be built with users at the center — where machines work together with people in the driver's seat, discovering new solutions, enabling more informed decisions, and giving us more time for the work we love.
  • 13
    Partek Flow
    Partek bioinformatics software delivers powerful statistical and visualization tools in an easy-to-use interface. Researchers of all skill levels are empowered to explore genomic data quicker and easier than ever before. We turn data into discovery®. Pre-installed workflows and pipelines in our intuitive point-and-click interface make sophisticated NGS and array analysis attainable for any scientist. Custom and public statistical algorithms work in concert to easily and precisely distill NGS data into biological insights. Genome browser, Venn diagrams, heat maps, and other interactive visualizations reveal the biology of your next-generation sequencing and array data in brilliant color. Our Ph.D. scientists are always just a phone call away and ready to help with your NGS analysis any time you have questions. Designed specifically for the compute-intensive needs of next-generation sequencing applications with flexible installation and user management options.
  • 14
    Cellenics

    Biomage

    Turn your single-cell RNA sequencing data into meaningful insight with Cellenics software. Biomage hosts a community instance of Cellenics, an open source analytics tool for single-cell RNA sequencing data that has been developed at Harvard Medical School. It enables biologists to explore single-cell datasets without writing code and helps scientists and bioinformaticians to work together more effectively. It takes you from count matrices to publication-ready figures in just a few hours and can be integrated seamlessly with your workflow. It’s fast, interactive, and user-friendly. And it’s cloud-based, secure, and scalable. The Biomage-hosted community instance of Cellenics is free for academic researchers with small/medium-sized datasets (up to 500,000 cells). It’s used by 3000+ academic researchers studying cancer, cardiovascular health, and developmental biology.
    Starting Price: Free
  • 15
    OpenAI

    OpenAI

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. Apply our API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English. One simple integration gives you access to our constantly-improving AI technology. Explore how you integrate with the API with these sample completions.
  • 16
    OPT

    Meta

    Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
  • 17
    Healthcare Data Analytics
    With more than 70% of healthcare data stored in clinical documents, reports, patient charts, clinician notes and discharge letters, our healthcare-specific Natural Language Processing and AI Engine identifies the concepts, attributes and context needed to deliver business insights, optimize billing, identify and stratify patient risks, compute quality metrics or collect patient sentiment and outcome data. Leverage difficult-to-surface or entirely untapped data sources to enhance your clinical research or business intelligence. Leverage our database of thousands of clinical concepts such as genomic biomarkers, symptoms, side effects, and medications. Identify disease characteristics, medications, or risk factors from clinical documents to stratify patients and improve the quality of care. Protect the identity of data subjects while maintaining data utility through document de-identification.
  • 18
    InstructGPT
    InstructGPT is a family of language models from OpenAI, fine-tuned from GPT-3 to follow natural language instructions. It is trained with supervised fine-tuning on human-written demonstrations and with reinforcement learning from human feedback (RLHF), in which human rankings of model outputs are used to optimize the model. The result is a model that follows user instructions more helpfully and truthfully than the base GPT-3 models while producing fewer toxic or fabricated outputs, and the same alignment approach underpins later OpenAI assistants such as ChatGPT.
    Starting Price: $0.0200 per 1000 tokens
  • 19
    Genome Analysis Toolkit (GATK)
    Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools with a primary focus on variant discovery and genotyping. Its powerful processing engine and high-performance computing features make it capable of taking on projects of any size. The GATK is the industry standard for identifying SNPs and indels in germline DNA and RNAseq data. Its scope is now expanding to include somatic short variant calling and to tackle copy number (CNV) and structural variation (SV). In addition to the variant callers themselves, the GATK also includes many utilities to perform related tasks such as processing and quality control of high-throughput sequencing data and bundles the popular Picard toolkit. These tools were primarily designed to process exomes and whole genomes generated with Illumina sequencing technology, but they can be adapted to handle a variety of other technologies and experimental designs.
    Starting Price: Free
  • 20
    BioTuring Browser

    BioTuring Browser

    Explore hundreds of curated single-cell transcriptome datasets, along with your own data, through interactive visualizations and analytics. The software also supports multimodal omics, CITE-seq, TCR-seq, and spatial transcriptomics. Interactively explore the world's largest single-cell expression database. Access and query insights from a single-cell database of millions of cells, fully annotated with cell type labels and experimental metadata. Not just a gateway to published works, BioTuring Browser is an end-to-end solution for your own single-cell data. Import your fastq files, count matrices, Seurat, or Scanpy objects, and reveal the biological stories inside them. Get a rich package of visualizations and analyses in an intuitive interface, making insight mining from any curated or in-house single-cell dataset a breeze. Import single-cell CRISPR screening or Perturb-seq data, and query guide RNA sequences.
    Starting Price: Free
  • 21
    GeoMx Digital Spatial Profiler (DSP)
    Quickly resolve tissue heterogeneity and the complexity of microenvironments with the GeoMx Digital Spatial Profiler (DSP), the most flexible and robust spatial multi-omic platform for analysis of FFPE and fresh frozen tissue sections. GeoMx is the only spatial biology platform that non-destructively profiles the expression of RNA and protein from distinct tissue compartments and cell populations with an automated and scalable workflow that integrates with standard histology staining. Spatially profile the whole transcriptome and 570+ protein targets separately or simultaneously from your choice of sample inputs: whole tissue sections, tissue microarrays (TMAs), or organoids. Make GeoMx DSP your spatial biology platform of choice for biomarker discovery and hypothesis testing. Decide where to draw the line and let the tissue be your guide with biology-driven profiling that empowers you to choose the tissue microenvironments and cell types that matter most to you.
  • 22
    GenomeBrowse

    Golden Helix

    This free tool delivers stunning visualizations of your genomic data that give you the power to see what is occurring at each base pair in your samples. GenomeBrowse runs as a native desktop application on your computer. No longer do you have to sacrifice speed and interface quality to obtain a consistent cross-platform experience. It was developed with performance in mind to deliver a faster and more fluid browsing experience than any other genome browser available. GenomeBrowse is also integrated into the powerful Golden Helix VarSeq variant annotation and interpretation platform. If you love the visualization experience of GenomeBrowse, check out VarSeq for filtering, annotating, and analyzing your data before utilizing the same visualization interface. GB can display all your alignment data. Looking at all your samples in one view can help you spot contextually relevant findings.
    Starting Price: Free
  • 23
    Medical LLM

    John Snow Labs

    John Snow Labs' Medical LLM is an advanced, domain-specific large language model (LLM) designed to revolutionize the way healthcare organizations harness the power of artificial intelligence. This innovative platform is tailored specifically for the healthcare industry, combining cutting-edge natural language processing (NLP) capabilities with a deep understanding of medical terminology, clinical workflows, and regulatory requirements. The result is a powerful tool that enables healthcare providers, researchers, and administrators to unlock new insights, improve patient outcomes, and drive operational efficiency. At the heart of the Healthcare LLM is its comprehensive training on vast amounts of healthcare data, including clinical notes, research papers, and regulatory documents. This specialized training allows the model to accurately interpret and generate medical text, making it an invaluable asset for tasks such as clinical documentation, automated coding, and medical research.
  • 24
    Recursion

    Recursion

    We are a clinical-stage biotechnology company decoding biology by integrating technological innovations across biology, chemistry, automation, machine learning and engineering to industrialize drug discovery. Increased control over biology with tools such as CRISPR genome editing and synthetic biology. Reliable automation of complex laboratory research at an unprecedented scale using advanced robotics. Iterative analysis of, and inference from, large, complex in-house datasets using neural network architectures. Increasing elasticity of high-performance computation using cloud solutions. We are leveraging new technology to create virtuous cycles of learning around datasets to build a next-generation biopharmaceutical company. A synchronized combination of hardware, software and data used to industrialize drug discovery. Reshaping the traditional drug discovery funnel. One of the largest, broadest and deepest pipelines of any technology-enabled drug discovery company.
  • 25
    Qwen

    Alibaba

    Qwen LLM refers to a family of large language models (LLMs) developed by Alibaba Cloud's Damo Academy. These models are trained on a massive dataset of text and code, allowing them to understand and generate human-like text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Here are some key features of Qwen LLMs: Variety of sizes: The Qwen series ranges from 1.8 billion to 72 billion parameters, offering options for different needs and performance levels. Open source: Some versions of Qwen are open-source, which means their code is publicly available for anyone to use and modify. Multilingual support: Qwen can understand and translate multiple languages, including English, Chinese, and French. Diverse capabilities: Besides generation and translation, Qwen models can be used for tasks like question answering, text summarization, and code generation.
    Starting Price: Free
  • 26
    CodeGemma
    CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks like fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following. CodeGemma has three model variants: a 7B pre-trained variant that specializes in code completion and generation from code prefixes and/or suffixes; a 7B instruction-tuned variant for natural language-to-code chat and instruction following; and a state-of-the-art 2B pre-trained variant that provides up to 2x faster code completion. Complete lines and functions, and even generate entire blocks of code, whether you're working locally or using Google Cloud resources. Trained on 500 billion tokens of primarily English language data from web documents, mathematics, and code, CodeGemma models generate code that's not only more syntactically correct but also semantically meaningful, reducing errors and debugging time.
  • 27
    BLOOM

    BigScience

    BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks.
  • 28
    ChatGLM

    Zhipu AI

    ChatGLM-6B is an open-source, Chinese-English bilingual dialogue language model based on the General Language Model (GLM) architecture with 6.2 billion parameters. Combined with model quantization technology, users can deploy it locally on consumer-grade graphics cards (only 6GB of video memory is required at the INT4 quantization level). ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese Q&A and dialogue. After training on roughly 1T tokens of bilingual Chinese and English text, supplemented by supervised fine-tuning, feedback bootstrapping, reinforcement learning from human feedback, and other techniques, the 6.2-billion-parameter ChatGLM-6B is able to generate answers that are well aligned with human preferences. A local deployment sketch follows below.
    Starting Price: Free
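    The local-deployment claim above is the distinctive feature, so here is a minimal sketch of running the INT4-quantized ChatGLM-6B with transformers, following the pattern from the project's README; the repository id and the chat() helper come from the model's custom remote code and should be checked against the current release.

      # Minimal sketch: local INT4 ChatGLM-6B inference via transformers.
      # Assumes `pip install transformers` plus the model's listed dependencies
      # and a CUDA GPU with roughly 6 GB of memory.
      from transformers import AutoModel, AutoTokenizer

      model_id = "THUDM/chatglm-6b-int4"  # assumed INT4-quantized repo id
      tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
      model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda().eval()

      # The model's custom code exposes a chat() helper for multi-turn dialogue.
      response, history = model.chat(tokenizer, "Hello! Please introduce yourself in one sentence.", history=[])
      print(response)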
  • 29
    EXAONE
    EXAONE is a large language model developed by LG AI Research with the goal of nurturing "Expert AI" in multiple domains. The Expert AI Alliance was formed as a collaborative effort among leading companies in various fields to advance the capabilities of EXAONE. Partner companies within the alliance will serve as mentors, providing skills, knowledge, and data to help EXAONE gain expertise in relevant domains. EXAONE, described as being akin to a college student who has completed general elective courses, requires additional intensive training to become an expert in specific areas. LG AI Research has already demonstrated EXAONE's abilities through real-world applications, such as Tilda, an AI human artist that debuted at New York Fashion Week, as well as AI applications for summarizing customer service conversations and extracting information from complex academic papers.
  • 30
    Flip AI

    Flip AI

    Our large language model (LLM) can understand and reason through any and all observability data, including unstructured data, so that you can rapidly restore software and systems to health. Our LLM has been trained to understand and mitigate thousands of critical incidents, across every type of architecture imaginable – giving enterprise developers access to the world’s best debugging expert. Our LLM was built to solve the hardest part of the software engineering process – debugging production incidents. Our model requires no training and works on any observability data system. It can learn based on feedback and finetune based on past incidents and patterns in your environment while keeping your data in your boundaries. This means you are resolving critical incidents using Flip in seconds.
  • 31
    Med-PaLM 2

    Google Cloud

    Healthcare breakthroughs change the world and bring hope to humanity through scientific rigor, human insight, and compassion. We believe AI can contribute to this, with thoughtful collaboration between researchers, healthcare organizations, and the broader ecosystem. Today, we're sharing exciting progress on these initiatives, with the announcement of limited access to Google’s medical large language model, or LLM, called Med-PaLM 2. It will be available in the coming weeks to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback as we investigate safe, responsible, and meaningful ways to use this technology. Med-PaLM 2 harnesses the power of Google’s LLMs, aligned to the medical domain to more accurately and safely answer medical questions. As a result, Med-PaLM 2 was the first LLM to perform at an “expert” test-taker level on the MedQA dataset of US Medical Licensing Examination (USMLE)-style questions.
  • 32
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 33
    Mistral 7B

    Mistral AI

    We tackle the hardest problems to make AI models compute efficient, helpful and trustworthy. We spearhead the family of open models; we give them to our users and empower them to contribute their ideas. Mistral-7B-v0.1 is a small, yet powerful model adaptable to many use-cases. Mistral 7B is better than Llama 2 13B on all benchmarks, has natural coding abilities, and an 8k sequence length. It’s released under the Apache 2.0 license, and we made it easy to deploy on any cloud.
  • 34
    Smaug-72B
    Smaug-72B is a powerful open-source large language model (LLM) known for several key features: High Performance: It currently holds the top spot on the Hugging Face Open LLM leaderboard, surpassing models like GPT-3.5 in various benchmarks. This means it excels at tasks like understanding, responding to, and generating human-like text. Open Source: Unlike many other advanced LLMs, Smaug-72B is freely available for anyone to use and modify, fostering collaboration and innovation in the AI community. Focus on Reasoning and Math: It specifically shines in handling reasoning and mathematical tasks, attributing this strength to unique fine-tuning techniques developed by Abacus AI, the creators of Smaug-72B. Based on Qwen-72B: It's technically a fine-tuned version of another powerful LLM called Qwen-72B, released by Alibaba, further improving upon its capabilities. Overall, Smaug-72B represents a significant step forward in open-source AI.
    Starting Price: Free
  • 35
    LLaMA

    Meta

    LLaMA (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making LLaMA available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.
  • 36
    RedPajama

    RedPajama

    Foundation models such as GPT-4 have driven rapid improvement in AI. However, the most powerful models are closed commercial models or only partially open. RedPajama is a project to create a set of leading, fully open-source models. Today, we are excited to announce the completion of the first step of this project: the reproduction of the LLaMA training dataset of over 1.2 trillion tokens. The most capable foundation models today are closed behind commercial APIs, which limits research, customization, and their use with sensitive data. Fully open-source models hold the promise of removing these limitations, if the open community can close the quality gap between open and closed models. Recently, there has been much progress along this front. In many ways, AI is having its Linux moment. Stable Diffusion showed that open-source can not only rival the quality of commercial offerings like DALL-E but can also lead to incredible creativity from broad participation by communities.
    Starting Price: Free
  • 37
    Cufflinks

    Cole Trapnell

    Cufflinks assembles transcripts, estimates their abundances, and tests for differential expression and regulation in RNA-Seq samples. It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts. Cufflinks then estimates the relative abundances of these transcripts based on how many reads support each one, taking into account biases in library preparation protocols. Cufflinks was originally developed as part of a collaborative effort involving the Laboratory for Mathematical and Computational Biology. To make it easy to install Cufflinks, we provide a few binary packages to save users from the occasionally frustrating process of building Cufflinks, which requires that you install its library dependencies. Cufflinks includes a number of tools for analyzing RNA-Seq experiments. Some of these tools can be run on their own, while others are pieces of a larger workflow.
    Starting Price: Free
  • 38
    Code Llama
    Code Llama is a large language model (LLM) that can use text prompts to generate code. Code Llama is state-of-the-art for publicly available LLMs on code tasks, and has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts. Code Llama is free for research and commercial use. Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned for understanding natural language instructions.
    Starting Price: Free
  • 39
    MEGA

    MEGA

    MEGA (Molecular Evolutionary Genetics Analysis) is a powerful and user-friendly software suite designed for analyzing DNA and protein sequence data from species and populations. It facilitates both automatic and manual sequence alignment, phylogenetic tree inference, and evolutionary hypothesis testing. MEGA supports a variety of statistical methods including maximum likelihood, Bayesian inference, and ordinary least squares, making it an essential tool for comparative sequence analysis and understanding molecular evolution. MEGA offers advanced features such as real-time caption generation to help explain the results and methods used in analysis and the maximum composite likelihood method for estimating evolutionary distances. The software is equipped with robust visual tools like the alignment/trace editor and tree explorer and supports multi-threading for efficient processing. MEGA can be run on multiple operating systems, including Windows, Linux, and macOS.
    Starting Price: Free
  • 40
    LUIS

    LUIS

    Microsoft

    Language Understanding (LUIS): A machine learning-based service to build natural language into apps, bots, and IoT devices. Quickly create enterprise-ready, custom models that continuously improve. Add natural language to your apps. Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills valuable information from sentences (entities), for a high quality, nuanced language model. LUIS integrates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot. Powerful developer tools are combined with customizable pre-built apps and entity dictionaries, such as Calendar, Music, and Devices, so you can build and deploy a solution more quickly. Dictionaries are mined from the collective knowledge of the web and supply billions of entries, helping your model to correctly identify valuable information from user conversations. Active learning is used to continuously improve the quality of the models.
  • 41
    Eidogen-Sertanty Target Informatics Platform (TIP)
    Eidogen-Sertanty's Target Informatics Platform (TIP) is the world's first structural informatics system and knowledgebase that enables researchers with the ability to interrogate the druggable genome from a structural perspective. TIP amplifies the rapidly expanding body of experimental protein structure information and transforms structure-based drug discovery from a low-throughput, data-scarce discipline into a high-throughput, data-rich science. Designed to help bridge the knowledge gap between bioinformatics and cheminformatics, TIP supplies drug discovery researchers with a knowledge base of information that is both distinct from and highly complementary to information furnished by existing bio- and cheminformatics platforms. TIP's seamless integration of structural data management technology with unique target-to-lead calculation and analysis capabilities enhances all stages of the discovery pipeline.
  • 42
    Qwen2

    Alibaba

    Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
    Starting Price: Free
  • 43
    OpenELM

    OpenELM

    Apple

    OpenELM is an open-source language model family developed by Apple. It uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy compared to existing open language models of similar size. OpenELM is trained on publicly available datasets and achieves state-of-the-art performance for its size.
  • 44
    Inflection AI

    Inflection AI

    Inflection AI is a cutting-edge artificial intelligence research and development company focused on creating advanced AI systems designed to interact with humans in more natural, intuitive ways. Founded in 2022 by entrepreneurs such as Mustafa Suleyman, one of the co-founders of DeepMind, and Reid Hoffman, co-founder of LinkedIn, the company's mission is to make powerful AI more accessible and aligned with human values. Inflection AI specializes in building large-scale language models that enhance human-AI communication, aiming to transform industries ranging from customer service to personal productivity through intelligent, responsive, and ethically designed AI systems. The company's focus on safety, transparency, and user control ensures that their innovations contribute positively to society while addressing potential risks associated with AI technology.
    Starting Price: Free
  • 45
    DeepSeek Coder
    DeepSeek Coder is a cutting-edge software tool designed to revolutionize the landscape of data analysis and coding. By leveraging advanced machine learning algorithms and natural language processing capabilities, it empowers users to seamlessly integrate data querying, analysis, and visualization into their workflow. The intuitive interface of DeepSeek Coder enables both novice and experienced programmers to efficiently write, test, and optimize code. Its robust set of features includes real-time syntax checking, intelligent code completion, and comprehensive debugging tools, all designed to streamline the coding process. Additionally, DeepSeek Coder's ability to understand and interpret complex data sets ensures that users can derive meaningful insights and create sophisticated data-driven applications with ease.
    Starting Price: Free
  • 46
    Qwen2-VL

    Qwen2-VL

    Alibaba

    Qwen2-VL is the latest version of the vision-language models based on Qwen2 in the Qwen model family. Compared with Qwen-VL, Qwen2-VL has the capabilities of: SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. Understanding videos of 20 min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. Multilingual support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images.
    Starting Price: Free
  • 47
    Bioconductor

    Bioconductor

    The Bioconductor project aims to develop and share open source software for precise and repeatable analysis of biological data. We foster an inclusive and collaborative community of developers and data scientists. Resources to maximize the potential of Bioconductor. From basic functionalities to advanced features, our tutorials, guides, and documentation have you covered. Bioconductor uses the R statistical programming language and is open source and open development. It has two releases each year and an active user community. Bioconductor provides Docker images for every release and provides support for Bioconductor use in AnVIL. Founded in 2001, Bioconductor is an open-source software project widely used in bioinformatics and biomedical research. It hosts over 2,000 R packages contributed by over 1,000 developers, with over 40 million downloads per year. Bioconductor has been cited in more than 60,000 scientific publications.
    Starting Price: Free
  • 48
    AI21 Studio

    AI21 Studio

    AI21 Studio provides API access to its Jurassic-1 large language models. Our models power text generation and comprehension features in thousands of live applications. Take on any language task. Our Jurassic-1 models are trained to follow natural language instructions and require just a few examples to adapt to new tasks. Use our specialized APIs for common tasks like summarization, paraphrasing and more. Access superior results at a lower cost without reinventing the wheel. Need to fine-tune your own custom model? You're just 3 clicks away. Training is fast, affordable, and trained models are deployed immediately. Give your users superpowers by embedding an AI co-writer in your app. Drive user engagement and success with features like long-form draft generation, paraphrasing, repurposing and custom auto-complete.
    Starting Price: $29 per month
  • 49
    Jinni

    Jinni

    Jinni

    Jinni's taste-based content-to-audience platform provides revolutionary personalization solutions for video content discovery and targeted digital advertising for entertainment brands. Through its unique Entertainment Genome™, consisting of thousands of distinct content attributes or "genes", Jinni not only understands the most subtle differences in TV and movie entertainment content but also understands each individual's unique entertainment tastes, thereby providing the perfect match between individual and content titles! Our mission is to be the best-in-class content-to-audience platform for entertainment brands, using one platform to match & promote entertainment content to the right audiences, dramatically increasing profitability for platform operators and entertainment advertisers. Jinni's semantic algorithms that match content to users' personal tastes have been setting the direction for the next generation of content discovery & recommendations for the industry.
  • 50
    CZ CELLxGENE Discover
    Select two custom cell groups based on metadata to find their top differentially expressed genes. Leverage millions of cells from the integrated CZ CELLxGENE corpus for powerful analysis. Execute interactive analyses on a dataset to explore how patterns of gene expression are determined by spatial, environmental, and genetic factors using a fast, interactive, no-code UI. Understand published datasets or use them as a launchpad to identify new cell sub-types and states. Census provides access to any custom slice of standardized cell data available on CZ CELLxGENE Discover in R and Python; a minimal Python query sketch follows below. Explore an interactive encyclopedia of 700+ cell types that provides detailed definitions, marker genes, lineage, and relevant datasets in one place. Browse and download hundreds of standardized data collections and 1,000+ datasets characterizing the functionality of healthy mouse and human tissues.
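    Since Census access from Python is called out above, here is a minimal sketch using the cellxgene_census package to pull a custom slice into an AnnData object; the value filter and column names follow the package's documented patterns but should be checked against the current Census schema.

      # Minimal sketch: query a slice of the Census into AnnData.
      # Assumes `pip install cellxgene-census`; filter fields and values are
      # examples to adapt to the current Census schema.
      import cellxgene_census

      with cellxgene_census.open_soma() as census:
          adata = cellxgene_census.get_anndata(
              census,
              organism="Homo sapiens",
              obs_value_filter="cell_type == 'B cell' and tissue_general == 'lung'",
              column_names={"obs": ["assay", "cell_type", "tissue_general", "disease"]},
          )

      print(adata)  # AnnData holding expression values plus the requested cell metadata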