Business Software for Clojure - Page 2

Top Software that integrates with Clojure as of July 2025 - Page 2

  • 1
    OpenAI o1-mini
    OpenAI o1-mini is a new, cost-effective AI model designed for enhanced reasoning, particularly excelling in STEM fields like mathematics and coding. It's part of the o1 series, which focuses on solving complex problems by spending more time "thinking" through solutions. Despite being smaller and 80% cheaper than its sibling, the o1-preview, o1-mini performs competitively in coding tasks and mathematical reasoning, making it an accessible option for developers and enterprises looking for efficient AI solutions.
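    For Clojure developers, o1-mini is reachable over OpenAI's standard HTTP API. Below is a minimal, hedged sketch using clj-http and cheshire; the endpoint follows OpenAI's chat completions API, while the exact model identifier and response shape should be verified against OpenAI's current documentation.

      (ns example.openai
        (:require [clj-http.client :as http]
                  [cheshire.core :as json]))

      (defn ask-o1-mini
        "Sends a single user prompt to the o1-mini model and returns the reply text."
        [api-key prompt]
        (-> (http/post "https://api.openai.com/v1/chat/completions"
                       {:headers      {"Authorization" (str "Bearer " api-key)}
                        :content-type :json
                        :body         (json/generate-string
                                       {:model    "o1-mini" ; model id per OpenAI docs; confirm before use
                                        :messages [{:role "user" :content prompt}]})})
            :body
            (json/parse-string true)
            (get-in [:choices 0 :message :content])))

      ;; Usage (assumes OPENAI_API_KEY is set in the environment):
      ;; (ask-o1-mini (System/getenv "OPENAI_API_KEY") "Prove that sqrt(2) is irrational.")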
  • 2
    ChatGPT Pro
    As AI becomes more advanced, it will solve increasingly complex and critical problems. It also takes significantly more compute to power these capabilities. ChatGPT Pro is a $200 monthly plan that enables scaled access to the best of OpenAI’s models and tools. This plan includes unlimited access to our smartest model, OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice. It also includes o1 pro mode, a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems. In the future, we expect to add more powerful, compute-intensive productivity features to this plan. ChatGPT Pro provides access to a version of our most intelligent model that thinks longer for the most reliable responses. In evaluations from external expert testers, o1 pro mode produces more reliably accurate and comprehensive responses, especially in areas like data science, programming, and case law analysis.
    Starting Price: $200/month
  • 3
    Gemini-Exp-1206
    Gemini-Exp-1206 is an experimental AI model now available for preview to Gemini Advanced subscribers. This model significantly enhances performance in complex tasks such as coding, mathematics, reasoning, and following detailed instructions. It's designed to assist users in navigating intricate challenges with greater ease. As an early preview, some features may not function as expected, and it currently lacks access to real-time information. Users can access Gemini-Exp-1206 through the Gemini model drop-down on desktop and mobile web platforms.
  • 4
    Grok 4
    Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
  • 5
    Gemini Pro
    Gemini is natively multimodal, which gives you the potential to transform any type of input into any type of output. We've built Gemini responsibly from the start, incorporating safeguards and working together with partners to make it safer and more inclusive. Integrate Gemini models into your applications with Google AI Studio and Google Cloud Vertex AI.
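    As a rough illustration of that integration from Clojure, the sketch below calls the Gemini REST API directly with clj-http and cheshire. The model name shown is an assumption, and the endpoint and request shape should be checked against Google's current API reference (or swapped for the Vertex AI endpoint if that is your deployment path).

      (ns example.gemini
        (:require [clj-http.client :as http]
                  [cheshire.core :as json]))

      (defn generate-content
        "Sends a single text prompt to a Gemini model and returns the parsed response."
        [api-key model prompt]
        (let [url  (str "https://generativelanguage.googleapis.com/v1beta/models/"
                        model ":generateContent?key=" api-key)
              body {:contents [{:parts [{:text prompt}]}]}]
          (-> (http/post url {:content-type :json
                              :body         (json/generate-string body)})
              :body
              (json/parse-string true))))

      ;; Usage (hypothetical model id):
      ;; (generate-content (System/getenv "GEMINI_API_KEY")
      ;;                   "gemini-1.5-pro"
      ;;                   "Summarize the benefits of persistent data structures.")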
  • 6
    Gemini 2.0 Flash
    The Gemini 2.0 Flash AI model represents the next generation of high-speed, intelligent computing, designed to set new benchmarks in real-time language processing and decision-making. Building on the robust foundation of its predecessor, it incorporates enhanced neural architecture and breakthrough advancements in optimization, enabling even faster and more accurate responses. Gemini 2.0 Flash is designed for applications requiring instantaneous processing and adaptability, such as live virtual assistants, automated trading systems, and real-time analytics. Its lightweight, efficient design ensures seamless deployment across cloud, edge, and hybrid environments, while its improved contextual understanding and multitasking capabilities make it a versatile tool for tackling complex, dynamic workflows with precision and speed.
  • 7
    Gemini 1.5 Pro
    The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation.
  • 8
    Gemini 1.5 Flash
    The Gemini 1.5 Flash AI model is an advanced, high-speed language model engineered for lightning-fast processing and real-time responsiveness. Designed to excel in dynamic and time-sensitive applications, it combines streamlined neural architecture with cutting-edge optimization techniques to deliver exceptional performance without compromising on accuracy. Gemini 1.5 Flash is tailored for scenarios requiring rapid data processing, instant decision-making, and seamless multitasking, making it ideal for chatbots, customer support systems, and interactive applications. Its lightweight yet powerful design ensures it can be deployed efficiently across a range of platforms, from cloud-based environments to edge devices, enabling businesses to scale their operations with unmatched agility.
  • 9
    Replit
    Use our free, collaborative, in-browser IDE to code in 50+ languages — without spending a second on setup. Start coding with your favorite language on any platform, OS, and device. Invite your friends, teammates, and colleagues right into your code with Google Docs-like editing. Import, run, and collaborate on millions of GitHub repos with zero manual setup. From Python, to C++, to HTML and CSS, stay in one platform to learn and code in any language you want. The second you create a new repl, it's instantly live and shareable with the world. Learn how to code from 3 million+ passionate programmers, technologists, creatives, and learners of all kinds. Make your team more productive with interactive docs, real-time collaboration, and zero-hassle remote interviewing. Create apps programmatically, spin up bots, and customize the IDE with plugins to fit your needs.
    Starting Price: $7 per month
  • 10
    AnyChart
    AnyChart is an award-winning, flexible JavaScript (HTML5) charting library designed to cover all your needs in data visualization across platforms. Create interactive, beautiful charts, maps, and dashboards for any web, mobile, or standalone project. Designed for developers and businesses alike, AnyChart offers massive out-of-the-box capabilities, supporting 90+ chart types — from line and bar charts to Gantt charts, stock charts, and geospatial visualizations. It easily integrates with any technology stack and connects to any data source. Whether enhancing reports, embedding dashboards into SaaS or on-premises systems, or building entirely new solutions, AnyChart delivers flexibility, simplicity, and powerful results fast. Fully customizable and responsive, it ensures your visuals look great on any device. Trusted by Fortune 500 companies and thousands of developers worldwide. Start creating professional charts, maps, and dashboards with ease — download AnyChart JS today!
    Starting Price: $49
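    For projects that reach the browser through ClojureScript, AnyChart can be driven via plain JavaScript interop. The snippet below is a hypothetical sketch following AnyChart's documented quick-start calls (anychart.line, container, draw); it assumes the AnyChart JS bundle is loaded on the page and a <div id="container"> exists, and the data is placeholder.

      (ns example.chart)

      (defn draw-sales-chart []
        ;; anychart.line builds a line chart from a sequence of y-values
        (let [chart (js/anychart.line (clj->js [10 12 9 14 17]))]
          (.title chart "Weekly sales (sample data)")
          (.container chart "container") ; target DOM element id
          (.draw chart)))

      ;; Run once the AnyChart runtime has initialized
      (js/anychart.onDocumentReady draw-sales-chart)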
  • 11
    Ably
    Ably is the definitive realtime experience platform. We power more WebSocket connections than any other pub/sub platform, serving over a billion devices monthly. Businesses like HubSpot, NASCAR and Webflow trust us to power their critical applications - reliably, securely and at serious scale. Ably’s products place composable realtime in the hands of developers. Simple APIs and SDKs for every tech stack, enable the creation of a host of live experiences - including chat, collaboration, notifications, broadcast and fan engagement. All powered by our scalable infrastructure.
    Starting Price: $49.99/month
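    A hedged sketch of publishing a message over Ably's REST API from Clojure using clj-http follows. The channel and event names are made up for illustration; the endpoint and basic-auth-with-API-key scheme follow Ably's REST documentation, but verify both before production use (Ably also offers native realtime SDKs).

      (ns example.ably
        (:require [clj-http.client :as http]
                  [cheshire.core :as json]))

      (defn publish-message
        "Publishes a single message to an Ably channel via the REST API."
        [api-key channel event data]
        (http/post (str "https://rest.ably.io/channels/" channel "/messages")
                   {:basic-auth   api-key ; full Ably key, "keyName:keySecret"
                    :content-type :json
                    :body         (json/generate-string {:name event :data data})}))

      ;; Usage (hypothetical channel):
      ;; (publish-message (System/getenv "ABLY_API_KEY") "build-status" "deploy" "ok")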
  • 12
    CodeScene
    CodeScene is a code analysis, visualization, and reporting tool. Cross-reference contextual factors such as code quality, team dynamics, and delivery output to get actionable insights that effectively reduce technical debt and deliver better code quality. We enable software development teams to make confident, data-driven decisions that fuel performance and developer productivity. Supporting 28+ programming languages, CodeScene also offers an automated integration with GitHub, Bitbucket, Azure DevOps, or GitLab pull requests to incorporate the analysis results into existing delivery workflows. Automate your code reviews, get early warnings and recommendations about complex code before merging it to the main branch, and set quality gates that trigger if your code health declines.
    Starting Price: €18 per active author/month
  • 13
    Qwen-7B
    Alibaba
    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, and code. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The features of the Qwen-7B series include: trained on high-quality pretraining data — we have pretrained Qwen-7B on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens covering plain text and code across a wide range of general and professional domains; strong performance — compared with models of similar size, it outperforms competitors on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more.
    Starting Price: Free
  • 14
    Mistral 7B
    Mistral AI
    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
    Starting Price: Free
  • 15
    Codestral Mamba
    As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialized in code generation, available under an Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use, modification, and distribution, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear-time inference and the theoretical ability to model sequences of infinite length, allowing users to engage with the model extensively and get quick responses irrespective of the input length. This efficiency is especially relevant for code productivity use cases, which is why we trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA transformer-based models.
    Starting Price: Free
  • 16
    Mistral NeMo
    Mistral AI
    Mistral NeMo is our new best small model: a state-of-the-art 12B model with a 128k-token context length, released under the Apache 2.0 license and built in collaboration with NVIDIA. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications, is trained on function calling, and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
  • 17
    Mixtral 8x22B
    Mistral AI
    Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish, and has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale (a minimal Clojure sketch follows this entry). Its 64K-token context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio within models provided by the community. Mixtral 8x22B is a natural continuation of our open model family. Its sparse activation patterns make it faster than any dense 70B model.
    Starting Price: Free
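    As a hedged sketch of that function-calling workflow from Clojure, the snippet below sends a chat completion request with a tool definition to Mistral's hosted API (la Plateforme) using clj-http. The model id "open-mixtral-8x22b", the OpenAI-style "tools" schema, and the example tool are assumptions drawn from Mistral's public docs; verify the request shape against the current API reference.

      (ns example.mixtral
        (:require [clj-http.client :as http]
                  [cheshire.core :as json]))

      (def weather-tool
        ;; Hypothetical tool definition the model may choose to call
        {:type "function"
         :function {:name        "get_weather"
                    :description "Look up current weather for a city"
                    :parameters  {:type       "object"
                                  :properties {:city {:type "string"}}
                                  :required   ["city"]}}})

      (defn chat-with-tools
        "Sends a prompt plus tool definitions; returns the parsed API response."
        [api-key prompt]
        (-> (http/post "https://api.mistral.ai/v1/chat/completions"
                       {:headers      {"Authorization" (str "Bearer " api-key)}
                        :content-type :json
                        :body         (json/generate-string
                                       {:model       "open-mixtral-8x22b"
                                        :messages    [{:role "user" :content prompt}]
                                        :tools       [weather-tool]
                                        :tool_choice "auto"})})
            :body
            (json/parse-string true)))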
  • 18
    Zed
    Zed Industries
    Zed is a next-generation code editor designed for high-performance collaboration with humans and AI. Written from scratch in Rust to efficiently leverage multiple CPU cores and your GPU. Integrate upcoming LLMs into your workflow to generate, transform, and analyze code. Chat with teammates, write notes together, and share your screen and project. Multibuffers compose excerpts from across the codebase in one editable surface. Evaluate code inline via Jupyter runtimes and collaboratively edit notebooks. Support for many languages via Tree-sitter, WebAssembly, and the Language Server Protocol. Fast native terminal tightly integrates with Zed's language-aware task runner and AI capabilities. First-class modal editing via Vim bindings, including features like text objects and marks. Zed is built by a global community of thousands of developers. Boost your Zed experience by choosing from hundreds of extensions that broaden language support, offer different themes, and more.
    Starting Price: Free
  • 19
    Qwen2.5
    Alibaba
    Qwen2.5 is an advanced multimodal AI model designed to provide highly accurate and context-aware responses across a wide range of applications. It builds on the capabilities of its predecessors, integrating cutting-edge natural language understanding with enhanced reasoning, creativity, and multimodal processing. Qwen2.5 can seamlessly analyze and generate text, interpret images, and interact with complex data to deliver precise solutions in real time. Optimized for adaptability, it excels in personalized assistance, data analysis, creative content generation, and academic research, making it a versatile tool for professionals and everyday users alike. Its user-centric design emphasizes transparency, efficiency, and alignment with ethical AI practices.
    Starting Price: Free
  • 20
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
    Starting Price: Free
  • 21
    FOSSA
    Scalable, end-to-end management for third-party code and license compliance. Open source has become the critical supplier for modern software companies, changing everything about how people think about their code. FOSSA builds the infrastructure for modern teams to be successful with open source. FOSSA's flagship product helps teams track the open source used in their code and automate license scanning and compliance. Today, over 7,000 open source projects (Kubernetes, Webpack, Terraform, ESLint) and companies (Uber, Ford, Zendesk, Motorola) rely on FOSSA's tools to ship software. If you are in the software industry today, you're already using code that runs through FOSSA. FOSSA is a venture-funded company backed by Cosanoa Ventures, Bain Capital Ventures, and others, with affiliate angels including Marc Benioff (Salesforce), Steve Chen (YouTube), Amr Awadallah (Cloudera), Jaan Tallinn (Skype), and Justin Mateen (Tinder).
    Starting Price: $230 per month
  • 22
    Codecov
    Develop healthier code. Improve your code review workflow and quality. Codecov provides highly integrated tools to group, merge, archive, and compare coverage reports. Free for open source; plans starting at $10/user per month. Ruby, Python, C++, JavaScript, and more. Plug and play into any CI product and workflow, with no setup required. Automatic report merging for all CIs and languages into a single report. Get custom statuses on any group of coverage metrics. Review coverage reports by project, folder, and test type (unit tests vs. integration tests). Detailed reports are commented directly into your pull request. Codecov is SOC 2 Type II certified, which means a third party audits and attests to our practices to secure our systems and your data.
    Starting Price: $10 per user per month
  • 23
    cloverage
    Cloverage uses clojure.test by default. If you prefer to use Midje, pass the --runner :midje flag. (In older versions of Cloverage, you had to wrap your Midje tests in clojure.test's deftest; this is no longer necessary.) To use eftest, pass the --runner :eftest flag. Optionally, you can configure the runner by passing :runner-opts with a map in your project settings, as sketched after this entry. Other test libraries may ship with their own support for Cloverage external to this library; see their documentation for details.
    Starting Price: Free
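    A minimal Leiningen sketch of the runner configuration mentioned above. The :cloverage, :runner, and :runner-opts keys mirror Cloverage's documented project-level settings; the plugin version, project coordinates, and the eftest option shown are placeholders to adjust for your project.

      (defproject example-app "0.1.0-SNAPSHOT"
        :dependencies [[org.clojure/clojure "1.11.1"]]
        :profiles {:dev {:plugins [[lein-cloverage "1.2.4"]]}} ; version is an assumption
        ;; Equivalent to `lein cloverage --runner :eftest` on the command line
        :cloverage {:runner      :eftest
                    :runner-opts {:multithread? false}})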
  • 24
    Refraction
    Refraction is a code-generation tool for developers. It uses AI to generate code for you. You can use it to generate unit tests, documentation, refactor code, and more. Generate code using AI in 34 languages — Assembly, C#, C++, CoffeeScript, CSS, Dart, Elixir, Erlang, Go, GraphQL, Groovy, Haskell, HTML, Java, JavaScript, Kotlin, LaTeX, Less, Lua, MatLab, Objective-C, OCaml, Perl, PHP, Python, R Lang, Ruby, Rust, Sass / SCSS, Scala, Shell, SQL, Swift, and TypeScript. Join thousands of developers around the world using Refraction to generate documentation, create unit tests, refactor code, and more using AI. Use the power of AI to automate the tedious parts of software development like testing, documentation, and refactoring, so you can focus on what matters. Refactor, optimize, fix and style-check your code. Generate unit tests for your code with various test frameworks. Explain the purpose of your code to make it easier to understand.
    Starting Price: $8 per month
  • 25
    Falcon-40B
    Technology Innovation Institute (TII)
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model currently available; Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. (see the OpenLLM Leaderboard). It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing commercial use, without any royalties or restrictions. ⚠️ This is a raw, pretrained model, which should be further fine-tuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-40B-Instruct.
    Starting Price: Free
  • 26
    Falcon-7B
    Technology Innovation Institute (TII)
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama, etc.), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora (see the OpenLLM Leaderboard). It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing commercial use, without any royalties or restrictions.
    Starting Price: Free
  • 27
    Baichuan-13B
    Baichuan Intelligent Technology
    Baichuan-13B is an open source, commercially available large language model containing 13 billion parameters, developed by Baichuan Intelligent Technology as the successor to Baichuan-7B. It has achieved the best results among models of the same size on authoritative Chinese and English benchmarks. This release contains two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Larger size, more data: Baichuan-13B expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open source model with the largest amount of training data at the 13B size to date. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4,096.
    Starting Price: Free
  • 28
    Koyeb
    Push code to production, everywhere, in minutes with Koyeb. Accelerate backend apps at the edge with high-performance hardware. Connect your GitHub account to Koyeb, choose a repository to deploy, and leave the infrastructure to us. We build, deploy, run, and scale your application with zero configuration. Simply git push, and we build and deploy your app with blazing-fast built-in continuous deployment. Develop fearlessly with native versioning of all deployments. Build Docker containers, host them on any registry, and atomically deploy your new version worldwide in a single API call. Invite your team to build together and enjoy a live preview after each push with built-in CI/CD. The Koyeb platform lets you combine the languages, frameworks, and technologies you use. Deploy any application without modifications thanks to native support for popular languages and Docker containers. Koyeb detects and builds apps in Node.js, Python, Go, Ruby, Java, PHP, Scala, Clojure, and more.
    Starting Price: $2.7 per month
  • 29
    Mixtral 8x7B
    Mistral AI
    Mixtral 8x7B is a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
    Starting Price: Free
  • 30
    Polar Signals
    Polar Signals Cloud is an always-on, zero-instrumentation continuous profiling product that helps improve performance, understand incidents, and lower infrastructure costs. With just one command and the easiest onboarding guide you’ll ever see, you can start saving costs and optimizing performance in your infrastructure. Travel back in time to pinpoint incidents and issues; profiling data provides unique insight and depth into what a process executed over time. Utilize profiling data collected over time to confidently and statistically identify hot paths for optimization. Many organizations waste 20-30% of resources on code paths that could easily be optimized. Polar Signals Cloud employs an exceptional blend of technologies, purpose-built to deliver the profiling toolset essential for today's evolving infrastructure and applications. With a zero-instrumentation setup, deploy immediately and reap the benefits of actionable observability data.
    Starting Price: $50 per month