Best Data Management Software for Docker - Page 4

Compare the Top Data Management Software that integrates with Docker as of October 2025 - Page 4

This is a list of Data Management software that integrates with Docker. Use the filters on the left to narrow the list of products that integrate with Docker. View the products that work with Docker in the table below.

  • 1
    Bitfount

    Bitfount is a platform for distributed data science. We power deep data collaborations without data sharing. Distributed data science sends algorithms to data, instead of the other way around. Set up a federated privacy-preserving analytics and machine learning network in minutes, and let your team focus on insights and innovation instead of bureaucracy. Your data team has the skills to solve your biggest challenges and innovate, but they are held back by barriers to data access. Is complex data pipeline infrastructure messing with your plans? Are compliance processes taking too long? Bitfount has a better way to unleash your data experts. Connect siloed and multi-cloud datasets while preserving privacy and respecting commercial sensitivity. No expensive, time-consuming data lift-and-shift. Usage-based access controls to ensure teams only perform the analysis you want, on the data you want. Transfer management of access controls to the teams who control the data.
  • 2
    QueryPie

    QueryPie is a centralized platform to manage scattered data sources and security policies all in one place. Put your company on the fast track to success without changing the existing data environment. Data governance is vital to today's data-driven world. Ensure you're on the right side of data governance standards while giving many users access to growing amounts of critical information. Establish data access policies by including key attributes such as IP address and access time. Privilege types can be created based on SQL commands classified as DML, DCL, and DDL to secure data analysis and editing. Manage details of SQL events at a glance and discover user behavior and potential security concerns by browsing logs based on permissions. All histories can be exported as a file and used for reporting purposes.
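QueryPie's privilege types hinge on classifying SQL commands as DML, DCL, or DDL. The command categories below are standard SQL groupings, and the classifier is a minimal sketch of the idea, not QueryPie's internal implementation:

```python
# Classify a SQL statement by its leading keyword into the standard
# command categories that privilege types are built around.
DML = {"SELECT", "INSERT", "UPDATE", "DELETE", "MERGE"}   # data manipulation
DDL = {"CREATE", "ALTER", "DROP", "TRUNCATE"}             # data definition
DCL = {"GRANT", "REVOKE"}                                 # data control

def classify(statement: str) -> str:
    keyword = statement.lstrip().split(None, 1)[0].upper()
    if keyword in DML:
        return "DML"
    if keyword in DDL:
        return "DDL"
    if keyword in DCL:
        return "DCL"
    return "OTHER"

print(classify("SELECT * FROM users"))       # DML
print(classify("GRANT SELECT ON t TO bob"))  # DCL
```

A policy engine can then allow or deny a session based on the category returned, rather than inspecting each statement individually.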
  • 3
    NocoDB

NocoDB is an open-source platform that turns any database into a smart spreadsheet. Create unlimited grid, gallery, and form views from your own data. Search, sort, and filter columns and rows with ease. Share views publicly or protect them with a password. Turn software consumers into software producers within each organization.
  • 4
    Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
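A minimal Kestra flow, shown here as the Python dict a parsed YAML definition would yield. The field names follow Kestra's documented flow schema (`id`, `namespace`, `tasks`); the task `type` string is an illustrative assumption and may differ between Kestra versions:

```python
# A minimal Kestra flow as it would appear after parsing the YAML
# definition. The task type plugin path is assumed, not verified.
flow = {
    "id": "hello_world",
    "namespace": "company.team",
    "tasks": [
        {
            "id": "say_hello",
            "type": "io.kestra.plugin.core.log.Log",  # assumed plugin path
            "message": "Hello from a declarative pipeline",
        }
    ],
}

# The equivalent YAML a user would write in the Kestra editor:
# id: hello_world
# namespace: company.team
# tasks:
#   - id: say_hello
#     type: io.kestra.plugin.core.log.Log
#     message: Hello from a declarative pipeline
print(flow["tasks"][0]["id"])
```

Because the definition is plain data, edits made in the UI or via the API can be serialized back to the same YAML, which is how Kestra keeps the declarative definition authoritative.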
  • 5
    Celantur

Automatically anonymize faces, license plates, bodies, and vehicles with a tool that is easy to use and integrate on all platforms. Solve privacy challenges for a wide range of commercial and industrial use cases. Global industry leaders trust our products and expertise. We solve anonymization challenges so you can focus on your core business, and our team is on your side throughout your privacy journey. Data protection is our core business, which is why we have strong measures in place to comply with the GDPR and other data protection laws. You can use our cloud service, where all processing is done on our infrastructure, or use our Docker container to deploy on-premises or in your private or public cloud environment. We charge a fee per image or video hour, and you can create a demo account and test it for free. Blur faces, license plates, persons, and vehicles in images in seconds with a simple REST call.
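A sketch of what such a REST call could look like from Python's standard library. The endpoint URL, header name, and payload fields are illustrative assumptions, not Celantur's documented API; the request is built but not sent:

```python
import json
import urllib.request

# Build a POST request to an image-anonymization endpoint in the style
# described above. URL, API-key header, and field names are placeholders.
def build_anonymize_request(api_key, image_url):
    payload = json.dumps({
        "url": image_url,
        "anonymization": ["face", "license-plate", "person", "vehicle"],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.example.com/v1/anonymize",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )

req = build_anonymize_request("demo-key", "https://example.com/street.jpg")
print(req.get_method())  # POST
```

In practice the response would carry either the blurred image or a job ID to poll, depending on how the service handles long-running video work.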
  • 6
    Undb

Undb is delighted to meet you all; we hope you will try it out and share your feedback and suggestions. Undb is currently in the early stages of open-source development and is not recommended for production use. Please note that once the demo website is redeployed, the previous table data is cleared. The Undb open-source spreadsheet platform provides a visual drag-and-drop interface and configurable operations, allowing anyone to quickly build their business systems, process various kinds of data, and optimize business logic. Undb can render tens of thousands of records in just a few seconds. Undb is private-first: you can deploy your own instance of undb and handle your data locally. Using SQLite and local object storage by default, only one file is needed to persist your data. It ships with many built-in, configurable field types.
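The single-file persistence model Undb describes is SQLite's default behavior: the whole database lives in one file on disk. A minimal demonstration with Python's stdlib `sqlite3` (the table layout is illustrative, not Undb's actual schema):

```python
import os
import sqlite3
import tempfile

# All table data lives in one SQLite file; copying or backing up the
# deployment means copying this single file. Schema is illustrative.
db_path = os.path.join(tempfile.gettempdir(), "undb_demo.sqlite")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO records (name) VALUES (?)", ("first row",))
conn.commit()
conn.close()

# Reopening the same file recovers the data: the file *is* the database.
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT name FROM records ORDER BY id").fetchall()
conn.close()
print(rows[0][0])
```

This is why a private Undb deployment needs no separate database server: persistence is just a file alongside local object storage.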
  • 7
    Ndustrial Contxt
We deliver an open platform that enables companies across multiple industries to digitally transform and gain a new level of insight into their business for a sustained competitive advantage. Our software solution comprises Contxt, a scalable, real-time industrial platform that serves as the core data engine, and Nsight, our data integration and intelligent insights application. Along the way, we provide extensive service and support. At the foundation of our software solution is Contxt, our scalable data management engine for industrial optimization. Contxt is built on our industry-leading ETLT technology, which enables sub-15-second data availability for any transaction across a variety of disparate data sources. Contxt allows developers to create a real-time digital twin that delivers live data to applications, optimizations, and analyses across the organization, enabling meaningful business impact.
  • 8
    Arroyo

Scale from zero to millions of events per second. Arroyo ships as a single, compact binary. Run locally on macOS or Linux for development, and deploy to production with Docker or Kubernetes. Arroyo is a new kind of stream processing engine, built from the ground up to make real-time easier than batch. Arroyo was designed from the start so that anyone with SQL experience can build reliable, efficient, and correct streaming pipelines. Data scientists and engineers can build end-to-end real-time applications, models, and dashboards without a separate team of streaming experts. Transform, filter, aggregate, and join data streams by writing SQL, with sub-second results. Your streaming pipelines shouldn't page someone just because Kubernetes decided to reschedule your pods. Arroyo is built to run in modern, elastic cloud environments, from simple container runtimes like Fargate to large, distributed deployments on Kubernetes.
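A streaming pipeline of the kind described above is just a SQL query that runs continuously. The sketch below (held in a Python string) shows a windowed aggregation in the general style of SQL stream processors; the `tumble()` window function is an assumption about the engine's syntax, so consult the Arroyo documentation for the exact form:

```python
# An illustrative continuous-aggregation query: count events per user
# over fixed one-minute windows. Window-function syntax is assumed.
pipeline_sql = """
SELECT
    user_id,
    count(*) AS events_per_minute
FROM events
GROUP BY user_id, tumble(interval '1 minute')
"""

print(pipeline_sql.strip().startswith("SELECT"))
```

The engine emits a result row per user per window with sub-second latency, rather than waiting for a batch job to scan the data later.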
  • 9
    InformationGrid

Most businesses are aware that data can help them understand their customers better, improve processes, and solve problems. We have developed a safe solution for sharing and aggregating data: InformationGrid. Get valuable insights with InformationGrid, for example by allowing hospitals and general practitioners to share data in order to prescribe better medications. Whether you have a clear idea of your data strategy or need guidance defining it, we're here to help you put your data strategy into action. Our SaaS platform enables you to share and aggregate data in a way that's secure and cost-efficient. To keep time to value as short as possible, we can build an application that handles large amounts of data, and we can build it fast using cloud-native techniques. This lets you reap the short-term benefits of your data and get ready to use AI to advance your business.
  • 10
    rqlite

    The lightweight, user-friendly, distributed relational database built on SQLite. Fault tolerance and high availability with zero hassle. rqlite is a distributed relational database that combines the simplicity of SQLite with the robustness of a fault-tolerant, highly available system. It's developer-friendly, its operation is straightforward, and it's designed for reliability with minimal complexity. Deploy in seconds, with no complex configurations. Seamlessly integrates with modern cloud infrastructures. Built on SQLite, the world’s most popular database. Supports full-text search, Vector Search, and JSON documents. Access controls and encryption for secure deployments. Rigorous, automated testing ensures high quality. Clustering provides high availability and fault tolerance. Automatic node discovery simplifies clustering.
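rqlite is operated over an HTTP API: SQL statements are POSTed as a JSON array to the `/db/execute` endpoint of any node, and the cluster replicates them. A sketch that builds such a request with the standard library (the request is constructed but not sent, and the host is a placeholder for a real node):

```python
import json
import urllib.request

# Build a POST to rqlite's /db/execute endpoint: the body is a JSON
# array of SQL statements, applied atomically-per-statement by the node.
def build_execute_request(host, statements):
    return urllib.request.Request(
        f"http://{host}/db/execute",
        data=json.dumps(statements).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_execute_request("localhost:4001", [
    "CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)",
    "INSERT INTO foo(name) VALUES('fiona')",
])
print(req.full_url)
```

Reads go through a corresponding query endpoint, and because any node accepts requests, clients need no special driver beyond an HTTP library.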
  • 11
    FalkorDB

FalkorDB is an ultra-fast, multi-tenant graph database optimized for GraphRAG. It leverages sparse matrix representations and linear algebra to efficiently handle complex, interconnected data in real time, resulting in fewer hallucinations and more accurate responses from large language models. FalkorDB supports the OpenCypher query language with proprietary enhancements, enabling expressive and efficient querying of graph data. It offers built-in vector indexing and full-text search, allowing complex searches and similarity matching within the same database environment. FalkorDB's architecture includes multi-graph support, enabling multiple isolated graphs within a single instance and ensuring security and performance across tenants. It also provides high availability with live replication, ensuring data is always accessible.
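OpenCypher queries against such a graph look like the following. These are query strings only, shown in standard OpenCypher; running them requires a FalkorDB instance and a client library, and the labels and property names are illustrative:

```python
# Illustrative OpenCypher: create two nodes with a relationship, then
# traverse it. Labels (Person) and properties (name) are examples only.
create_query = (
    "CREATE (:Person {name: 'Ada'})-[:KNOWS]->(:Person {name: 'Grace'})"
)
match_query = (
    "MATCH (a:Person)-[:KNOWS]->(b:Person) "
    "WHERE a.name = 'Ada' "
    "RETURN b.name"
)

print(match_query.startswith("MATCH"))
```

In a multi-graph deployment, each tenant's queries run against a separately named graph, which is how isolation between tenants is maintained within one instance.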
  • 12
    Commvault HyperScale X
    Accelerate hybrid cloud adoption, scale-out as needed, and manage data workloads from a single intuitive platform. An intuitive scale-out solution that’s fully integrated with Commvault’s Intelligent Data Management platform. Accelerate your digital transformation journey with unmatched scalability, security, and resiliency. Simple, flexible data protection for all workloads including containers, virtual, and databases. Built-in resiliency ensures data availability during concurrent hardware failures. Data reuse via copy data management that provides instant recovery of VMs and live production copies for DevOps and testing. High-performance backup and recovery with automatic load balancing, enhanced RPO, and reduced RTO. Cost-optimized cloud data mobility to move data to, from, within, and between clouds. Disaster recovery testing of replicas directly from the hardware.
  • 13
    GenRocket

Enterprise synthetic test data solutions. To generate test data that accurately reflects the structure of your application or database, each test data project must be easy to model and maintain as the data model changes throughout the lifecycle of the application. Maintain referential integrity of parent/child/sibling relationships across the data domains within an application database or across multiple databases used by multiple applications. Ensure the consistency and integrity of synthetic data attributes across applications, data sources, and targets. For example, a customer name must always match the same customer ID across multiple transactions simulated by real-time synthetic data generation. Customers want to quickly and accurately create their data model as a test data project, so GenRocket offers ten methods for data model setup: XTS, DDL, Scratchpad, Presets, XSD, CSV, YAML, JSON, Spark Schema, and Salesforce.
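The customer-name/customer-ID example above boils down to generating attributes once per entity and looking them up for every simulated transaction. A minimal sketch of that referential-consistency idea (in no way GenRocket's implementation):

```python
import random

# Generate each synthetic customer once, then reference it from every
# simulated transaction so name and ID always stay paired.
random.seed(7)  # deterministic generation for repeatable test runs
names = ["Avery", "Blake", "Casey", "Drew"]
customers = {cid: {"id": cid, "name": random.choice(names)} for cid in range(1, 4)}

def make_transaction(cid, amount):
    customer = customers[cid]  # lookup preserves name<->ID consistency
    return {
        "customer_id": customer["id"],
        "customer_name": customer["name"],
        "amount": amount,
    }

t1 = make_transaction(2, 19.99)
t2 = make_transaction(2, 5.00)
print(t1["customer_name"] == t2["customer_name"])  # True
```

Generating attributes independently per transaction would break this pairing, which is exactly the class of inconsistency referential-integrity-aware generation prevents.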
  • 14
    Code Ocean

The Code Ocean Computational Workbench speeds usability, coding and data tool integration, and DevOps and lifecycle tasks by closing technology gaps with a highly intuitive, ready-to-use user experience. Ready-to-use RStudio, Jupyter, Shiny, Terminal, and Git. A choice of popular languages. Access to any size of data and storage type. Configure and generate Docker environments. One-click access to AWS compute resources. Using the Code Ocean Computational Workbench app panel, researchers share results by generating and publishing easy-to-use, point-and-click web analysis apps to teams of scientists without any IT, coding, or command-line use. Create and deploy interactive analyses that run in standard web browsers and are easy to share, collaborate on, reuse, and manage. With an easy-to-use application and repository, researchers can quickly organize, publish, and secure project-based Compute Capsules, data assets, and research results.
  • 15
    Syntho

Syntho typically deploys in the safe environment of our customers so that (sensitive) data never leaves the customer's safe and trusted environment. Connect to the source data and target environment with our out-of-the-box connectors; Syntho can connect with every leading database and filesystem and supports 20+ database connectors and 5+ filesystem connectors. Define the type of synthetization you would like to run: realistically mask data or synthesize new values, and automatically detect sensitive data types. Utilize and share the protected data securely, ensuring compliance and privacy are maintained throughout its usage.
  • 16
    DataKitchen

    Reclaim control of your data pipelines and deliver value instantly, without errors. The DataKitchen™ DataOps platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing, and monitoring to development and deployment. You’ve already got the tools you need. Our platform automatically orchestrates your end-to-end multi-tool, multi-environment pipelines – from data access to value delivery. Catch embarrassing and costly errors before they reach the end-user by adding any number of automated tests at every node in your development and production pipelines. Spin-up repeatable work environments in minutes to enable teams to make changes and experiment – without breaking production. Fearlessly deploy new features into production with the push of a button. Free your teams from tedious, manual work that impedes innovation.