1242 Integrations with Python
View a list of Python integrations and software that integrates with Python below. Compare the best Python integrations as well as features, ratings, user reviews, and pricing of software that integrates with Python. Here are the current Python integrations in 2025:
-
1
Azure DevOps Labs
Microsoft
Azure DevOps Labs is a free, community-driven collection of self-paced, hands-on tutorials designed to teach every aspect of the Azure DevOps toolchain and related DevOps practices. From configuring Agile planning with Azure Boards and version control in Azure Repos to defining build and release pipelines as code with YAML, enabling CI/CD in Azure Pipelines, managing packages in Azure Artifacts, and orchestrating tests with Azure Test Plans, each lab provides step-by-step exercises and sample code repositories. You can spin up ready-made projects using the Azure DevOps Demo Generator, explore end-to-end scenarios like deploying Docker-based web applications, integrating Terraform for infrastructure-as-code, scanning for security vulnerabilities, monitoring performance with Application Insights, and automating database changes with Redgate. Prerequisites include an Azure DevOps organization and an Azure subscription, but no prior experience is required. -
2
gpt-oss-20b
OpenAI
gpt-oss-20b is a 20-billion-parameter, text-only reasoning model released under the Apache 2.0 license and governed by OpenAI’s gpt-oss usage policy, built to enable seamless integration into custom AI workflows via the Responses API without reliance on proprietary infrastructure. Trained for robust instruction following, it supports adjustable reasoning effort, full chain-of-thought outputs, and native tool use (including web search and Python execution), producing structured, explainable answers. Developers must implement their own deployment safeguards, such as input filtering, output monitoring, and usage policies, to match the system-level protections of hosted offerings and mitigate risks from malicious or unintended behaviors. Its open-weight design makes it ideal for on-premises or edge deployments where control, customization, and transparency are paramount. -
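Since the model is open-weight and typically self-hosted, a quick illustration of how it might be queried can help; the sketch below assumes a local server (such as vLLM or Ollama) exposing an OpenAI-compatible endpoint, and the base URL, API key, and model identifier are placeholders to adapt to your serving stack.

```python
# Minimal sketch: querying a self-hosted gpt-oss-20b through an
# OpenAI-compatible endpoint. The base_url, api_key, and model name
# are assumptions; adjust them to match your serving setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local server (e.g., vLLM)
    api_key="not-needed-locally",         # placeholder; local servers often ignore it
)

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed model identifier on the local server
    messages=[{"role": "user", "content": "Summarize what a Python context manager does."}],
)
print(response.choices[0].message.content)
```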
3
gpt-oss-120b
OpenAI
gpt-oss-120b is a reasoning model engineered for deep, transparent thinking, delivering full chain-of-thought explanations, adjustable reasoning depth, and structured outputs, while natively invoking tools like web search and Python execution via the API. Built to slot seamlessly into self-hosted or edge deployments, it eliminates dependence on proprietary infrastructure. Although it includes default safety guardrails, its open-weight architecture allows fine-tuning that could override built-in controls, so implementers are responsible for adding input filtering, output monitoring, and governance measures to achieve enterprise-grade security. As a community-driven model card rather than a managed service spec, it emphasizes transparency, customization, and the need for downstream safety practices. -
4
Claude Opus 4.1
Anthropic
Claude Opus 4.1 is an incremental upgrade to Claude Opus 4 that boosts coding, agentic reasoning, and data-analysis performance without changing deployment complexity. It raises coding accuracy to 74.5 percent on SWE-bench Verified and sharpens in-depth research and detailed tracking for agentic search tasks. GitHub reports notable gains in multi-file code refactoring, while Rakuten Group highlights its precision in pinpointing exact corrections within large codebases without introducing bugs. Independent benchmarks show about a one-standard-deviation improvement on junior developer tests compared to Opus 4, mirroring major leaps seen in prior Claude releases. Opus 4.1 is available now to paid Claude users, in Claude Code, and via the Anthropic API (model ID claude-opus-4-1-20250805), as well as through Amazon Bedrock and Google Cloud Vertex AI, and integrates seamlessly into existing workflows with no additional setup beyond selecting the new model. -
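For readers who want to see how the model ID above is used in practice, here is a minimal sketch with the Anthropic Python SDK; the prompt and max_tokens value are illustrative, and an ANTHROPIC_API_KEY is assumed to be set in the environment.

```python
# Minimal sketch: calling Claude Opus 4.1 through the Anthropic Python SDK,
# using the model ID mentioned above. The prompt and token limit are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-1-20250805",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Suggest a safe refactoring plan for a function with duplicated loops."}
    ],
)
print(message.content[0].text)
```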
5
GPT-5 pro
OpenAI
GPT-5 Pro is OpenAI’s most advanced AI model, designed to tackle the most complex and challenging tasks with extended reasoning capabilities. It builds on GPT-5’s unified architecture, using scaled, efficient parallel compute to provide highly comprehensive and accurate responses. GPT-5 Pro achieves state-of-the-art performance on difficult benchmarks like GPQA, excelling in areas such as health, science, math, and coding. It makes significantly fewer errors than earlier models and delivers responses that experts find more relevant and useful. The model automatically balances quick answers and deep thinking, allowing users to get expert-level insights efficiently. GPT-5 Pro is available to Pro subscribers and powers some of the most demanding applications requiring advanced intelligence. -
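As a rough illustration of accessing the model programmatically, the sketch below uses the OpenAI Responses API from Python; the "gpt-5-pro" model identifier is an assumption, so check the model list available to your account, and OPENAI_API_KEY is assumed to be set.

```python
# Minimal sketch: calling a Pro-tier model through the OpenAI Responses API.
# The model identifier is an assumption; substitute one available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-pro",  # assumed identifier for the Pro-tier model
    input="Walk through the trade-offs of B-trees versus LSM-trees for write-heavy workloads.",
)
print(response.output_text)
```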
6
GPT-5 thinking
OpenAI
GPT-5 Thinking is the deeper reasoning mode within the GPT-5 unified AI system, designed to tackle complex, open-ended problems that require extended cognitive effort. It works alongside the faster GPT-5 model, dynamically engaging when queries demand more detailed analysis and thoughtful responses. This mode significantly reduces hallucinations and improves factual accuracy, producing more reliable answers on challenging topics like science, math, coding, and health. GPT-5 Thinking is also better at recognizing its own limitations, communicating clearly when tasks are impossible or underspecified. It incorporates advanced safety features to minimize harmful outputs and provide nuanced, helpful answers even in ambiguous or sensitive contexts. Available to all users, it helps bring expert-level intelligence to everyday and advanced use cases alike. -
7
Lucidic AI
Lucidic AI
Lucidic AI is a specialized analytics and simulation platform built for AI agent development that brings much-needed transparency, interpretability, and efficiency to often opaque workflows. It provides developers with visual, interactive insights (searchable workflow replays, step-by-step video and graph-based replays of agent decisions, decision-tree visualizations, and side-by-side simulation comparisons) that let you observe exactly how your agent reasons and why it succeeds or fails. The tool dramatically reduces iteration time from weeks or days to mere minutes by streamlining debugging and optimization through instant feedback loops, real-time "time-travel" editing, mass simulations, trajectory clustering, customizable evaluation rubrics, and prompt versioning. Lucidic AI integrates seamlessly with major LLMs and frameworks and offers advanced QA/QC mechanisms such as alerts, workflow sandboxing, and more. -
8
LangMem
LangChain
LangMem is a lightweight, flexible Python SDK from LangChain that equips AI agents with long-term memory capabilities, enabling them to extract, store, update, and retrieve meaningful information from past interactions to become smarter and more personalized over time. It supports three memory types and offers both hot-path tools for real-time memory management and background consolidation for efficient updates beyond active sessions. Through a storage-agnostic core API, LangMem integrates seamlessly with any backend and offers native compatibility with LangGraph’s long-term memory store, while also allowing type-safe memory consolidation using schemas defined in Pydantic. Developers can incorporate memory tools into agents using simple primitives to enable seamless memory creation, retrieval, and prompt optimization within conversational flows. -
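A minimal sketch of the hot-path pattern described above, modeled on LangMem's documented quickstart, is shown below; the embedding settings and the model string passed to the LangGraph agent are assumptions to adapt to your stack.

```python
# Minimal sketch: a LangGraph ReAct agent equipped with LangMem's hot-path
# memory tools, backed by LangGraph's in-memory store. Embedding config and
# the model string are assumptions; swap in your own.
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool, create_search_memory_tool

store = InMemoryStore(
    index={"dims": 1536, "embed": "openai:text-embedding-3-small"}  # assumed embedder
)

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",  # assumed chat model
    tools=[
        create_manage_memory_tool(namespace=("memories",)),  # create/update memories
        create_search_memory_tool(namespace=("memories",)),  # retrieve memories
    ],
    store=store,
)

agent.invoke({"messages": [{"role": "user", "content": "Remember that I prefer dark mode."}]})
```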
9
Paid.ai
Paid.ai
Paid.ai is a purpose-built platform that enables AI agent developers to seamlessly monetize, track costs, and automate billing for their autonomous agents. By capturing usage signals via lightweight SDKs, it provides real-time monitoring of LLM/API costs, margin visibility per agent, and alerts for cost spikes. Its flexible workflows facilitate multiple billing models, including per-agent, per-action, per-workflow, and outcome-based pricing, aligned with the way AI agents deliver business value. Paid.ai supports comprehensive revenue operations by automating invoice generation, offering pricing simulation tools, managing orders and payments, and embedding live value dashboards through its “Blocks” feature. Developers can integrate Paid.ai quickly into their systems using Node.js, Python, Go, or Ruby SDKs, enabling fast deployment of both cost tracking (free for the first year) and billing automation. -
10
Google Cloud Universal Ledger (GCUL)
Google
Google Cloud Universal Ledger (GCUL) is a next-generation, permissioned layer-1 blockchain platform designed for financial institutions to manage commercial bank money and tokenized assets with unprecedented simplicity, flexibility, and security. It offers a programmable, multi-currency distributed ledger accessible via a unified API, eliminates the complexity of traditional payment infrastructure, and supports atomic settlement for near-instant transfers. Built with compliance in mind, the platform enforces KYC-verified accounts, transparent transaction fees, and private, auditable governance, while also fostering automation through programmatic workflows and integration with familiar developer tools like Python-based smart contracts. Institutional testing underscores its real-world applicability; CME Group is piloting GCUL for tokenized settlement workflows in areas like collateral and margin processing.
-
11
PyMuPDF
Artifex
PyMuPDF is a high-performance, Python-centric library for reading, extracting, and manipulating PDFs with ease and precision. It enables developers to access text, images, fonts, annotations, metadata, and structural layout of PDF documents, and to perform tasks such as extracting content, editing objects, rendering pages, searching text, modifying page content, and manipulating PDF components like links and annotations. PyMuPDF also supports advanced operations like splitting, merging, inserting, or deleting pages; drawing and filling shapes; handling color spaces; and converting between formats. The library is lightweight but robust, optimized for speed and low memory overhead. On top of the base PyMuPDF, PyMuPDF Pro adds support for reading and writing Microsoft Office-format documents and enhanced functionality for integrating Large Language Model (LLM) pipelines and Retrieval Augmented Generation (RAG). -
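A short example of the core extraction and rendering workflow helps make this concrete; the sketch below uses the documented PyMuPDF API, with "example.pdf" as a placeholder path.

```python
# Minimal sketch: extracting text and rendering a page with PyMuPDF.
# "example.pdf" and the output filename are placeholders.
import pymupdf  # imported as "fitz" in older releases

doc = pymupdf.open("example.pdf")
for page in doc:
    print(page.get_text())          # plain-text extraction for each page

pix = doc[0].get_pixmap(dpi=150)    # render the first page at 150 DPI
pix.save("page-1.png")
doc.close()
```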
12
Ghostscript
Artifex
Ghostscript is a powerful PostScript and PDF interpreter developed by Artifex, offering a rendering engine and comprehensive graphics library for high-quality document processing. It handles interpreting, processing, and rendering PostScript files and PDFs, supports complex page description language features, and includes utilities for converting, rasterizing, and manipulating documents. Ghostscript also has .NET bindings (Ghostscript.NET) so it can be integrated into .NET applications, and there’s an enterprise version (Ghostscript Enterprise) that extends capabilities to reading and processing common office documents like Word, PowerPoint, and Excel. The product is designed for precision rendering, color space management, and reliable output, making it suitable for both programmatic document workflows and production environments. -
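Since this listing focuses on Python, a common pattern is to drive Ghostscript from a Python script; the sketch below shells out to the gs executable to rasterize a PDF into PNG pages, assuming gs is on the PATH and using placeholder file names.

```python
# Minimal sketch: driving Ghostscript from Python via subprocess to rasterize
# a PDF to PNG pages. Assumes the "gs" executable is installed and on PATH.
import subprocess

subprocess.run(
    [
        "gs",
        "-dBATCH", "-dNOPAUSE",        # run non-interactively and exit when done
        "-sDEVICE=png16m",             # 24-bit RGB PNG output device
        "-r150",                       # render at 150 DPI
        "-sOutputFile=page-%03d.png",  # one numbered PNG per page (placeholder name)
        "input.pdf",                   # placeholder input path
    ],
    check=True,
)
```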
13
Sudo
Sudo
Sudo offers “one API for all models”, a unified interface so developers can integrate multiple large language models and generative AI tools (for text, image, audio) through a single endpoint. It handles routing between different models to optimize for criteria such as latency, throughput, cost, or whatever else you choose. The platform supports flexible billing and monetization options: subscription tiers, usage-based metered billing, or hybrids. It also supports in-context, AI-native ads (you can insert context-aware ads into AI outputs, controlling relevance and frequency). Onboarding is quick: you create an API key, install their SDK (Python or TypeScript), and start making calls to the AI endpoints. They emphasize low latency (“optimized for real-time AI”), better throughput compared with some alternatives, and avoiding vendor lock-in. -
14
Claude Sonnet 4.5
Anthropic
Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection. -
15
Agent Builder
OpenAI
Agent Builder is part of OpenAI’s tooling for constructing agentic applications, systems that use large language models to perform multi-step tasks autonomously, with governance, tool integration, memory, orchestration, and observability baked in. The platform offers a composable set of primitives (models, tools, memory/state, guardrails, and workflow orchestration) that developers assemble into agents capable of deciding when to call a tool, when to act, and when to halt and hand off control. OpenAI provides a new Responses API that combines chat capabilities with built-in tool use, along with an Agents SDK (Python, JS/TS) that abstracts the control loop, supports guardrail enforcement (validations on inputs/outputs), handoffs between agents, session management, and tracing of agent executions. Agents can be augmented with built-in tools like web search, file search, or computer use, or custom function-calling tools. -
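To show roughly how these primitives fit together, here is a minimal sketch using the openai-agents Python SDK; the weather tool is a hypothetical stand-in, and guardrails, handoffs, and tracing are omitted for brevity.

```python
# Minimal sketch with the openai-agents package: one agent, one function tool.
# The weather lookup is placeholder logic, not a real integration.
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder logic)."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Assistant",
    instructions="Answer briefly, using tools when they help.",
    tools=[get_weather],
)

result = Runner.run_sync(agent, "What's the weather in Oslo?")
print(result.final_output)
```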
16
ChatKit
OpenAI
ChatKit is a conversational AI toolkit that lets developers embed and manage chat agents across apps and websites. It provides capabilities such as chatting over external documents, text-to-speech, prompt templates, and shortcut triggers. Users can operate ChatKit either using their own OpenAI API key (paying according to OpenAI’s token pricing) or via ChatKit’s credit system (which requires a ChatKit license). ChatKit supports integrations with diverse model backends (including OpenAI, Azure OpenAI, Google Gemini, Ollama) and routing frameworks (e.g., OpenRouter). Feature offerings include cloud sync, team collaboration, web access, launcher widgets, shortcuts, and structured conversation flows over documents. In sum, ChatKit simplifies deploying intelligent chat agents without building the full chat infrastructure from scratch. -
17
PromptCompose
PromptCompose
PromptCompose is a prompt infrastructure platform designed to bring software engineering rigor to prompt workflows. It offers version control for prompts, automatically tracking every change with deployment logs, side-by-side comparisons, and rollback capability. It also integrates A/B testing, so multiple prompt variants can run concurrently, traffic can be split, performance tracked, and winners deployed confidently. Developers can integrate seamlessly via SDKs (JavaScript/TypeScript) or REST APIs so prompts and experiments can be part of production systems. Projects are organized in a hub structure so teams can manage resources (prompts, templates, variable groups, tests) per project, with proper isolation and collaboration. PromptCompose supports prompt blueprints (templates) and variable groups so prompts can be parameterized with dynamic inputs in a consistent, reusable way. The editor includes features like syntax highlighting, autocomplete for variables, and error detection. -
18
ZeusDB
ZeusDB
ZeusDB is a next-generation, high-performance data platform designed to handle the demands of modern analytics, machine learning, real-time insights, and hybrid data workloads. It supports vector, structured, and time-series data in one unified engine, allowing recommendation systems, semantic search, retrieval-augmented generation pipelines, live dashboards, and ML model serving to operate from a single store. The platform delivers ultra-low latency querying and real-time analytics, eliminating the need for separate databases or caching layers. Developers and data engineers can extend functionality with Rust or Python logic, deploy on-premises, hybrid, or cloud, and operate under GitOps/CI-CD patterns with observability built in. With built-in vector indexing (e.g., HNSW), metadata filtering, and powerful query semantics, ZeusDB enables similarity search, hybrid retrieval, filtering, and rapid application iteration. -
19
Ultralytics
Ultralytics
Ultralytics offers a full-stack vision-AI platform built around its flagship YOLO model suite that enables teams to train, validate, and deploy computer-vision models with minimal friction. The platform allows you to drag and drop datasets, select from pre-built templates or fine-tune custom models, then export to a wide variety of formats for cloud, edge or mobile deployment. With support for tasks including object detection, instance segmentation, image classification, pose estimation and oriented bounding-box detection, Ultralytics’ models deliver high accuracy and efficiency and are optimized for both embedded devices and large-scale inference. The product also includes Ultralytics HUB, a web-based tool where users can upload their images/videos, train models online, preview results (even on a phone), collaborate with team members, and deploy via an inference API. -
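A brief sketch of the Python workflow clarifies how little code is involved; it uses the ultralytics package's documented YOLO interface, with the weights file and image path as placeholders.

```python
# Minimal sketch: load a pretrained YOLO detection model, run inference on an
# image, and export for deployment. Weights and image names are placeholders.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")            # small pretrained detection model
results = model("bus.jpg")            # run inference on a single image
for result in results:
    print(result.boxes)               # detected boxes, classes, confidences

model.export(format="onnx")           # export the model for deployment targets
```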
20
Viduli
Viduli
Viduli empowers developers to deploy production-ready applications in minutes without DevOps expertise. Supporting 40+ languages and frameworks, from Python and Node.js to Go, Ruby, Java, and beyond, our platform eliminates complex configurations and steep learning curves. Core services: Ignite deploys any application with zero configuration, featuring automatic CI/CD from GitHub, auto-scaling, load balancing, health checks, and multi-region deployment; every push triggers an instant deployment. Orbit provides enterprise-grade managed PostgreSQL databases, with built-in automated backups, point-in-time recovery, and read replicas to keep your data protected and performant. Flash offers high-performance caching with Redis; sub-millisecond latency, automatic failover, and data persistence accelerate your applications. Starting Price: $5/month -
21
RKTracer
RKVALIDATE
RKTracer is a code-coverage and test-analysis tool that enables teams to assess the quality and completeness of their testing across unit, integration, functional, and system-level testing, without altering a single line of application code or build workflow. It supports instrumentation across host machines, simulators, emulators, embedded devices, and servers, and covers a broad array of programming languages, including C, C++, CUDA, C#, Java, Kotlin, JavaScript/TypeScript, Golang, Python, and Swift. It provides detailed coverage metrics such as function, statement, branch/decision, condition, MC/DC, and multi-condition coverage, and even supports delta-coverage reports to show which newly added or modified portions of code are already covered. Integration is seamless; simply prefix your build or test command with “rktracer”, run your tests, then generate HTML or XML reports (for CI/CD systems or dashboards like SonarQube). -
22
GPT-5.1 Instant
OpenAI
GPT-5.1 Instant is a high-performance AI model designed for everyday users that combines speed, responsiveness, and improved conversational warmth. The model uses adaptive reasoning to instantly select how much computation is required for a task, allowing it to deliver fast answers without sacrificing understanding. It emphasizes stronger instruction-following, enabling users to give precise directions and expect consistent compliance. The model also introduces richer personality controls so chat tone can be set to Default, Friendly, Professional, Candid, Quirky, or Efficient, with experiments in deeper voice modulation. Its core value is to make interactions feel more natural and less robotic while preserving high intelligence across writing, coding, analysis, and reasoning. GPT-5.1 Instant routes user requests automatically from the base interface, with the system choosing whether this variant or the deeper “Thinking” model is applied. -
23
GPT-5.1 Thinking
OpenAI
GPT-5.1 Thinking is the advanced reasoning model variant in the GPT-5.1 series, designed to more precisely allocate “thinking time” based on prompt complexity, responding faster to simpler requests and spending more effort on difficult problems. On a representative task distribution, it is roughly twice as fast on the fastest tasks and twice as slow on the slowest compared with its predecessor. Its responses are crafted to be clearer, with less jargon and fewer undefined terms, making deep analytical work more accessible and understandable. The model dynamically adjusts its reasoning depth, achieving a better balance between speed and thoroughness, particularly when dealing with technical concepts or multi-step questions. By combining high reasoning capacity with improved clarity, GPT-5.1 Thinking offers a powerful tool for tackling complex tasks, such as detailed analysis, coding, research, or technical explanations, while reducing unnecessary latency for routine queries. -
24
Automata LINQ
Automata
LINQ is a fully integrated lab automation platform that empowers teams to build, run, and manage automated workcells and workflows with unmatched power and simplicity. Users can build workcells tailored to their needs using the modular hardware platform (LINQ Bench) that supports any instrument, fits any space, and scales without limitation. They can then develop workflows quickly and easily through a node-based workflow canvas or a fully featured Python SDK, enabling both no-code drag-and-drop workflow creation and code-based customization, simulation, testing, and iteration. With LINQ, you can start and manage runs using the intuitive run manager, monitor and control workcells remotely from anywhere, and benefit from robust error-handling and centralized management of multiple workcells through its cloud-native architecture. -
25
Gemini 3 Deep Think
Google
The most advanced model from Google DeepMind, Gemini 3, sets a new bar for model intelligence by delivering state-of-the-art reasoning and multimodal understanding across text, image, and video. It surpasses its predecessor on key AI benchmarks and excels at deeper problems such as scientific reasoning, complex coding, spatial logic, and visual and video-based understanding. The new “Deep Think” mode pushes the boundaries even further, offering enhanced reasoning for very challenging tasks, outperforming Gemini 3 Pro on benchmarks like Humanity’s Last Exam and ARC-AGI. Gemini 3 is now available across Google’s ecosystem, enabling users to learn, build, and plan at new levels of sophistication. With context windows up to one million tokens, more granular media-processing options, and specialized configurations for tool use, the model brings better precision, depth, and flexibility for real-world workflows. -
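For a sense of how the model is reached programmatically, the sketch below uses the google-genai Python SDK; the model identifier is an assumption, so substitute whichever Gemini 3 model name your project has access to, and GOOGLE_API_KEY is assumed to be set.

```python
# Minimal sketch with the google-genai SDK. The model name is an assumption;
# replace it with a Gemini 3 model available to your project.
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model identifier
    contents="Explain the difference between a list and a tuple in Python.",
)
print(response.text)
```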
26
Neurotechnology AI SDK
Neurotechnology
Neurotechnology AI SDK is a multilingual toolkit for creating speech-to-text and voice processing applications. It combines a proprietary ASR engine for accurate transcription with a Speaker Diarization engine that separates and labels individual speakers in an audio stream. Supporting English, Lithuanian, Latvian and Estonian, it delivers fast performance on CPUs and GPUs for real-time or batch processing. Designed for on-premises use, all audio is processed locally, ensuring full data privacy and control. Its modular architecture lets developers use each component independently or integrate them into stand-alone or client-server systems. Optional speaker recognition through voice biometrics can be added for stronger identity confirmation. The SDK supports Windows and Linux and provides native libraries for Python, C++, Java and .NET, making it suitable for transcription workflows, analytics platforms or voice-driven applications across a wide range of industries. Starting Price: €2500 -
27
Cegal Prizm
Cegal
Cegal Prizm is a modular solution designed to allow easy integration of data from different geo-applications, data sources and platforms into a Python environment. The modules allow you to combine geo-data sources for advanced analysis, visualization, data-science workflows, and machine-learning techniques. You can begin to solve problems that were not previously possible with legacy applications. Integrate modern Python technologies to extend, accelerate and augment standard workflows; create and securely distribute customized code, services and technology to a user community for consumption. Connect into the E&P software platform Petrel, OSDU, and other third-party applications and domains to access and retrieve energy data. Seamlessly transfer data locally or across hybrid and cloud deployments to a common Python environment to generate more insight and value. Prizm allows you to enrich datasets with additional application metadata to add more value and context to your analysis. -
28
Grok 4.1 Fast
xAI
Grok 4.1 Fast is the newest xAI model designed to deliver advanced tool-calling capabilities with a massive 2-million-token context window. It excels at complex real-world tasks such as customer support, finance, troubleshooting, and dynamic agent workflows. The model pairs seamlessly with the new Agent Tools API, which enables real-time web search, X search, file retrieval, and secure code execution. This combination gives developers the power to build fully autonomous, production-grade agents that plan, reason, and use tools effectively. Grok 4.1 Fast is trained with long-horizon reinforcement learning, ensuring stable multi-turn accuracy even across extremely long prompts. With its speed, cost-efficiency, and high benchmark scores, it sets a new standard for scalable enterprise-grade AI agents. -
29
AMD Developer Cloud
AMD
AMD Developer Cloud provides developers and open-source contributors with immediate access to high-performance AMD Instinct MI300X GPUs through a cloud interface, offering a pre-configured environment with Docker containers, Jupyter notebooks, and no local setup required. Developers can run AI, machine-learning, and high-performance-computing workloads on either a small configuration (1 GPU with 192 GB GPU memory, 20 vCPUs, 240 GB system memory, 5 TB NVMe) or a large configuration (8 GPUs, 1536 GB GPU memory, 160 vCPUs, 1920 GB system memory, 40 TB NVMe scratch disk). It supports pay-as-you-go access via linked payment method and offers complimentary hours (e.g., 25 initial hours for eligible developers) to help prototype on the hardware. Users retain ownership of their work and can upload code, data, and software without giving up rights.
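As a quick sanity check after launching an instance, a notebook cell like the sketch below can confirm the GPUs are visible; it assumes the environment ships a ROCm build of PyTorch, where torch.cuda is the supported device API.

```python
# Minimal sketch: verify the Instinct GPUs are visible from a notebook in a
# ROCm-enabled PyTorch container (assumed to be preinstalled in the image).
import torch

print(torch.cuda.is_available())      # True when the ROCm runtime sees a GPU
print(torch.cuda.device_count())      # e.g., 1 on the small instance, 8 on the large
print(torch.cuda.get_device_name(0))  # reports the accelerator name
```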
-
30
Dive
Dive
Dive CAE is a cloud-native computational fluid dynamics software platform that enables engineers to simulate complex fluid behaviors, such as free-surface flow, multiphase interactions, heat transfer, and moving machinery, using a mesh-free Smoothed Particle Hydrodynamics method. It runs entirely in the browser and on high-performance computing infrastructure, so users don’t need local hardware or installation. The mesh-free approach allows for modeling of complex geometry, surface tension, non-Newtonian fluids, and transient flows without the time-consuming meshing and tuning required by conventional CFD. Onboarding is fast (typically under one day), and the software supports parallel design-of-experiment workflows that deliver multiple iterations in hours rather than days. Dive CAE emphasizes collaboration, license simplicity (one licence for all users), transparent cost control, data usage governance, and scalability via cloud infrastructure. -
31
Checkmarx
Checkmarx
The Checkmarx Software Security Platform provides a centralized foundation for operating your suite of software security solutions for Static Application Security Testing (SAST), Interactive Application Security Testing (IAST), Software Composition Analysis (SCA), and application security training and skills development. Built to address every organization’s needs, the Checkmarx Software Security Platform provides the full scope of options, including private cloud and on-premises solutions. Allowing a range of implementation options ensures customers can start securing their code immediately, rather than going through long processes of adapting their infrastructure to a single implementation method. The Checkmarx Software Security Platform transforms the standard for secure application development, providing one powerful resource with industry-leading capabilities. -
32
gedit
The GNOME Project
gedit is the text editor of the GNOME desktop environment. The first goal of gedit is to be easy to use, with a simple interface by default. More advanced features are available by enabling plugins; its flexible plugin system can be used to dynamically add new advanced features. -
33
CodePatrol
Claranet
Automated code reviews driven by security. CodePatrol performs powerful SAST scans on your project source code and identifies security flaws early. Powered by Claranet and Checkmarx. CodePatrol provides support for a wide variety of languages and scans your code with multiple SAST engines for better results. Stay up-to-date with the latest code flaws in your project using automated alerting and user-defined filter rules. CodePatrol uses industry-leading SAST software provided by Checkmarx and expertise from Claranet Cyber Security to identify the latest threat vectors. Multiple code scanning engines are frequently triggered on your code base and perform in-depth analysis on your project. You may access CodePatrol anytime and retrieve the aggregated scan results in order to fix your project security flaws. -
34
CodePeer
AdaCore
The Most Comprehensive Static Analysis Toolsuite for Ada. CodePeer helps developers gain a deep understanding of their code and build more reliable and secure software systems. CodePeer is an Ada source code analyzer that detects run-time and logic errors. It assesses potential bugs before program execution, serving as an automated peer reviewer, helping to find errors easily at any stage of the development life-cycle. CodePeer helps you improve the quality of your code and makes it easier for you to perform safety and/or security analysis. CodePeer is a stand-alone tool that runs on Windows and Linux platforms and may be used with any standard Ada compiler or fully integrated into the GNAT Pro development environment. It can detect several of the “Top 25 Most Dangerous Software Errors” in the Common Weakness Enumeration. CodePeer supports all versions of Ada (83, 95, 2005, 2012). CodePeer has been qualified as a Verification Tool under the DO-178B and EN 50128 software standards. -
35
Jtest
Parasoft
Meet Agile development cycles while maintaining high-quality code. Use Jtest’s comprehensive set of Java testing tools to ensure defect-free coding through every stage of software development in the Java environment. Streamline Compliance With Security Standards. Ensure your Java code complies with industry security standards. Have compliance verification documentation automatically generated. Release Quality Software, Faster. Integrate Java testing tools to find defects faster and earlier. Save time and money by mitigating complicated and expensive problems down the line. Increase Your Return From Unit Testing. Achieve code coverage targets by creating a maintainable and optimized suite of JUnit tests. Get faster feedback from CI and within your IDE using smart test execution. Parasoft Jtest integrates tightly into your development ecosystem and CI/CD pipeline for real-time, intelligent feedback on your testing and compliance progress. -
36
CodeSonar
CodeSecure
CodeSonar employs a unified dataflow and symbolic execution analysis that examines the computation of the complete application. By not relying on pattern matching or similar approximations, CodeSonar's static analysis engine is extraordinarily deep, finding 3-5 times more defects on average than other static analysis tools. Unlike many software development tools, such as testing tools, compilers, configuration management, etc., SAST tools can be integrated into a team's development process at any time with ease. SAST technologies like CodeSonar simply attach to your existing build environments to add analysis information to your verification process. Like a compiler, CodeSonar does a build of your code using your existing build environment, but instead of creating object code, CodeSonar creates an abstract model of your entire program. From the derived model, CodeSonar’s symbolic execution engine explores program paths, reasoning about program variables and how they relate. -
37
Codepad
Codepad
Codepad is a place for developers to share and save code snippets. It's a remarkable community of developers that can help you with your code snippets to save time on your projects. Share snippets with the entire community. You can choose the programming language and the type of snippet: public, private, or part private. Organize your code snippets in a beautiful way; easily add and categorize them into collections. You can follow and control snippet versions, so you never lose previously written code. If you are a freelancer or a company, you can receive job or collaboration offers directly to your registered email. Find the best developers on Codepad and follow their profiles. You will see their new code snippets directly in your timeline. -
38
Jedi
Jedi
Jedi is a static analysis tool for Python that is typically used in IDE/editor plugins. Jedi has a focus on autocompletion and goto functionality. Other features include refactoring, code search, and finding references. Jedi has a simple API to work with. There is a reference implementation as a VIM plugin. Autocompletion in your REPL is also possible; IPython uses it natively, and for the CPython REPL you can install it. Jedi is well tested and bugs should be rare. A Script is the base for completions, goto, or whatever you want to do with Jedi. The counterpart of this class is Interpreter, which works with actual dictionaries and can work with a REPL. This class should be used when a user edits code in an editor. Most methods have a line and a column parameter. Lines in Jedi are always 1-based and columns are always zero-based. To avoid repetition they are not always documented. -
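The line/column convention mentioned above is easiest to see in a short example; the sketch below uses Jedi's Script API to request completions at a 1-based line and 0-based column.

```python
# Minimal sketch of Jedi's Script API: completions at a 1-based line and
# 0-based column, matching the convention described above.
import jedi

source = "import json\njson.lo"
script = jedi.Script(code=source, path="example.py")

# Ask for completions at the end of line 2 (1-based), column 7 (0-based).
for completion in script.complete(line=2, column=7):
    print(completion.name)   # e.g., "load", "loads"
```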
39
CudaText
CudaText
CudaText is a cross-platform text editor, written in Object Pascal. It is an open source project and can be used free of charge, even for business. It starts quickly on Linux (on an Intel Core i3 3 GHz CPU). It is extensible by Python add-ons, plugins, linters, code tree parsers, and external tools. The syntax parser, from the EControl engine, is feature-rich, with syntax highlighting for a large number of languages (270+ lexers). It offers a code tree structure of functions/classes/etc. if the lexer allows it, code folding, multi-carets, and multi-selections. Find/Replace supports regular expressions. Configs are in JSON format, including lexer-specific configs. The tabbed UI provides a split view to primary/secondary and a split window to 2/3/4/6 groups of tabs, plus a command palette with fuzzy matching, a minimap, and a micromap. It shows unprinted whitespace, supports many encodings, offers customizable hotkeys, and includes a binary/hex viewer for files of unlimited size (it can show 10 GB logs). -
40
Routefusion
Routefusion
Through our global banking APIs, fintech expertise, and customer support, decrease your time to market, reduce the cost of global payments and expand your products and services internationally. Opening up international bank accounts is challenging. We'll do it for you - making direct debiting, global reconciliation, and a unified global banking experience simple. Do international business like a local. Pay international vendors and customers with ACH, SPEI, SEPA, SWIFT wires, and more. Our global payment and FX institutions network gives your customers access to rates that have traditionally only been reserved for the largest corporations. With our new approach to cross-border payments, no one gets left behind. We believe in tearing down financial barriers to growth so that every business has the opportunity to thrive. -
41
Exceptionly
Exceptionly
We find, test, and deliver software talent for direct hire. Exceptionly is built to revolutionize the software talent industry by leveraging its unique big data set of 2 million hands-on tested software engineers from 175 countries. Exceptionly invests in its enterprise-level talent acquisition engine and offers a platform as a service for providing both quality and volume of tested remote software engineers for businesses around the world. Exceptionly's mission is to unlock the full capital of highly skilled remote technology talent around the world. We help businesses go beyond their zip codes to hire the best of the best and leverage their talent budget in full. -
42
Zenlytic
Zenlytic
Your data lives in multiple Excel files, ad platforms, and SaaS apps; they never agree, and it's impossible to make sense of them. Your team wastes 30+ hours a week combing through data across multiple instances without arriving at any insights you can trust. Zenlytic is the first enterprise-grade BI tool designed for emerging commerce brands like yours. We help you understand your data so you can acquire more efficiently, reduce churn, and power growth. Any BI tool can tell you churn increased by 5% last month. Only Zenlytic can tell you why. Our tech quickly identifies the friction points in your user journeys, the promotions that aren't converting, and the acquisition channels that yield low LTV/CAC scores. When you know what's working, and what's not, all you have to do is act. Business intelligence tools have always been built for technical users who understand SQL. Not anymore. Our powerful natural language interface empowers everybody to be data-driven. -
43
AtomicJar
AtomicJar
Shift testing to the left and find issues earlier, when they are easier and cheaper to fix. Enable developers to do better integration testing, shorten dev cycles, and increase productivity. Shorter and more thorough integration feedback cycles mean more reliable products. Testcontainers Cloud makes it easy for developers to run reliable integration tests, with real dependencies defined in code, from their laptops to their team's CI. Testcontainers is an open-source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container. No more need for mocks or complicated environment configurations. Define your test dependencies as code, then simply run your tests and containers will be created and then deleted. -
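A minimal sketch of this pattern with the testcontainers Python package is shown below; it spins up a throwaway PostgreSQL container for a test and assumes Docker plus a PostgreSQL driver (e.g., psycopg2) are available locally.

```python
# Minimal sketch: a throwaway PostgreSQL container for an integration test.
# The container is created on entry to the "with" block and removed on exit.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

with PostgresContainer("postgres:16-alpine") as postgres:
    engine = sqlalchemy.create_engine(postgres.get_connection_url())
    with engine.connect() as conn:
        value = conn.execute(sqlalchemy.text("SELECT 1")).scalar()
        assert value == 1  # the real database answered, no mocks involved
```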
44
ConTEXT Editor
ConTEXT Editor
ConTEXT is a small, fast, and powerful text editor for software developers. It offers unlimited open files, unlimited file size, powerful syntax highlighting for C/C++, Delphi/Pascal, 80x86 assembler, Java, JavaScript, Visual Basic, Perl/CGI, HTML, SQL, Python, PHP, and Tcl/Tk, a user-definable syntax highlighter, project workspaces, compiler integration, multi-language support, and many more features. Starting Price: $0 -
45
Stenography
Stenography
No need to Google it. Hydrate responses with Stack Overflow Suggestions and documentation from across the web. Extensions, extensions, extensions. Wherever code can be found, Stenography integrates. Stenography uses a passthrough API and does not store code. Your code stays on your system. -
46
CodeCollab
CodeCollab
Real-time code collaboration. CodeCollab is an online real-time collaborative code editor and compiler. Our web-based application allows users to collaborate in real-time over the internet. CodeCollab allows for seamless sharing across multiple platforms and devices. Perfect for keeping code up to date with your team. -
47
gProfiler
Granulate
gProfiler combines multiple sampling profilers to produce a unified visualization of what your CPU is spending time on, displaying stack traces of your processes across native programs (including Golang) and Java and Python runtimes. gProfiler can upload its results to the Granulate Performance Studio, which aggregates the results from different instances over different periods of time and can give you a holistic view of what is happening on your entire cluster. To upload results, you will have to register and generate a token on the website. -
48
CubicWeb
CubicWeb
Modeling your data is the first step, as it always should be, because applications fade away but data is here to stay. Once your model is implemented, your CubicWeb application runs and you can incrementally add high-value functionalities for your users. Based on the application model, RQL is a compact query language focused on the attributes and relationships of the data. It is similar to SPARQL but more readable by human beings. After an RQL request has selected a graph of data, several views can be applied to display the information in the most relevant way. All of CubicWeb's architecture is designed along this pattern. Permissions are defined directly in the data model with limitless precision, and security checks are automatically added to any RQL request submitted to the engine. CubicWeb relies on a standard SQL database for storing and managing data; PostgreSQL is the preferred database of CubicWeb. -
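To give a flavor of how a CubicWeb data model is expressed, here is a heavily simplified sketch using yams, the schema library CubicWeb builds on; the entity and relation names are illustrative, not part of any real application.

```python
# Minimal sketch of a CubicWeb data model defined with yams. The entity type,
# attributes, and relation below are illustrative placeholders.
from yams.buildobjs import EntityType, String, Date, SubjectRelation

class Person(EntityType):
    name = String(required=True, maxsize=128)
    birthday = Date()
    knows = SubjectRelation("Person")  # a Person may know other Persons

# A matching RQL query over this model might look like:
#   Any P WHERE P is Person, P name LIKE "A%"
```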
49
CodeT5
Salesforce
Code for CodeT5, a new code-aware pre-trained encoder-decoder model. CodeT5 provides identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. This is the official PyTorch implementation for the EMNLP 2021 paper from Salesforce Research. CodeT5-large-ntp-py is specially optimized for Python code generation tasks and is employed as the foundation model for our CodeRL, yielding new SOTA results on the APPS Python competition-level program synthesis benchmark. This repo provides the code for reproducing the experiments in CodeT5. CodeT5 is a new pre-trained encoder-decoder model for programming languages, pre-trained on 8.35M functions in 8 programming languages (Python, Java, JavaScript, PHP, Ruby, Go, C, and C#). In total, it achieves state-of-the-art results on 14 sub-tasks in the CodeXGLUE code intelligence benchmark, and it can generate code from a natural language description. -
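The sketch below mirrors the usual Hugging Face loading pattern for the released checkpoints; the masked-span prompt follows the style of the published examples, and output quality will vary by task.

```python
# Minimal sketch: load a CodeT5 checkpoint from the Hugging Face Hub with
# transformers and fill a masked span in a code snippet.
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

text = "def greet(user): print(f'hello <extra_id_0>!')"  # <extra_id_0> marks the span to fill
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated = model.generate(input_ids, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```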
50
Codey
Google
Codey accelerates software development with real-time code completion and generation, customizable to a customer’s own codebase. This code generation model supports 20+ coding languages, including Go, Google Standard SQL, Java, JavaScript, Python, and TypeScript. It enables a wide variety of coding tasks, helping developers work faster and close skills gaps through code completion (Codey suggests the next few lines based on the context of code entered into the prompt), code generation (Codey generates code based on natural language prompts from a developer), and code chat (Codey lets developers converse with a bot to get help with debugging, documentation, learning new concepts, and other code-related questions).
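As an illustration of programmatic access, the sketch below calls a Codey code-generation model through the Vertex AI Python SDK; the project, location, and model name are placeholders, and the language_models interface may vary across SDK versions.

```python
# Minimal sketch: calling a Codey code-generation model via the Vertex AI
# Python SDK. Project, location, and model name are assumptions; the
# language_models interface may differ between SDK releases.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = CodeGenerationModel.from_pretrained("code-bison")  # assumed Codey model name
response = model.predict(
    prefix="Write a Python function that reverses the words in a sentence.",
    max_output_tokens=256,
)
print(response.text)
```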