Best Data Management Software for PostgreSQL - Page 14

Compare the Top Data Management Software that integrates with PostgreSQL as of December 2025 - Page 14

This is a list of Data Management software that integrates with PostgreSQL. Use the filters on the left to narrow the results to products that offer integrations with PostgreSQL. View the products that work with PostgreSQL in the table below.

  • 1
    Ndustrial Contxt
    We deliver an open platform that enables companies across multiple industries to digitally transform and gain a new level of insight into their business for a sustained competitive advantage. Our software solution comprises Contxt, a scalable, real-time industrial platform that serves as the core data engine, and Nsight, our data integration and intelligent insights application. Along the way, we provide extensive service and support. At the foundation of our software solution is Contxt, our scalable data management engine for industrial optimization. Contxt is built on our industry-leading ETLT technology, which enables sub-15-second data availability for any transaction across a variety of disparate data sources. Contxt allows developers to create a real-time digital twin that can deliver live data to applications, optimizations, and analyses across the organization, enabling meaningful business impact.
  • 2
    Velotix

    Velotix empowers organizations to maximize the value of their data while ensuring security and compliance in a rapidly evolving regulatory landscape. The Velotix Data Security Platform offers automated policy management, dynamic access controls, and comprehensive data discovery, all driven by advanced AI. With seamless integration across multi-cloud environments, Velotix enables secure, self-service data access, optimizing data utilization without compromising on governance. Trusted by leading enterprises across financial services, healthcare, telecommunications, and more, Velotix is reshaping data governance for the ‘need to share’ era.
  • 3
    Arroyo

    Scale from zero to millions of events per second. Arroyo ships as a single, compact binary. Run locally on macOS or Linux for development, and deploy to production with Docker or Kubernetes. Arroyo is a new kind of stream processing engine, built from the ground up to make real-time easier than batch. Arroyo was designed from the start so that anyone with SQL experience can build reliable, efficient, and correct streaming pipelines. Data scientists and engineers can build end-to-end real-time applications, models, and dashboards, without a separate team of streaming experts. Transform, filter, aggregate, and join data streams by writing SQL, with sub-second results. Your streaming pipelines shouldn't page someone just because Kubernetes decided to reschedule your pods. Arroyo is built to run in modern, elastic cloud environments, from simple container runtimes like Fargate to large, distributed deployments on Kubernetes.
  • 4
    Stratio

    A unified secure business data layer providing instant answers for business and data teams. The Stratio generative AI data fabric covers the whole lifecycle of data management, from data discovery and governance to use and disposal. Your organization has data all over the place; different divisions use different apps to do different things. Stratio uses AI to find all your data, whether it's on-prem or in the cloud. That means you can be sure that you're treating data appropriately. If you can't see your data as soon as it's generated, you'll never move as fast as your customers. With most data infrastructure, it can take hours to process customer data. Stratio accesses 100% of your data in real time without moving it, so you can act quickly without losing the all-important context. Only by unifying the operational and the informational in a collaborative platform will companies be able to move to instant extended AI.
  • 5
    Chat2DB

    Save time working with data. Connect to all your data sources and instantly generate optimal SQL for lightning-fast insights. If you don't know SQL well, you can get instant answers without writing SQL. Generate high-performance SQL for your complicated queries using natural language, as well as correct errors and get AI suggestions to optimize the performance of SQL queries. Developers can write complex SQL queries quickly and accurately with the help of the AI SQL editor, saving time and improving development efficiency. Just enter the names of the tables and columns, and we will automatically configure the type, password, and comment, saving you 90% of the time. Import and export data in multiple formats (CSV, XLSX, XLS, SQL) to facilitate exchange, backup, and migration. Transfer data between different databases or through cloud services, as a backup and recovery solution that guarantees minimal data loss and downtime during migrations.
    Starting Price: $7 per month
  • 6
    Gable

    Data contracts facilitate communication between data teams and developers. Don't just detect problematic changes, prevent them at the application level. Detect every change, from every data source, using AI-based asset registration. Drive the adoption of data initiatives with upstream visibility and impact analysis. Shift left both data ownership and management through data governance as code and data contracts. Build data trust through the timely communication of data quality expectations and changes. Eliminate data issues at the source by seamlessly integrating our AI-driven technology. Everything you need to make your data initiative a success. Gable is a B2B data infrastructure SaaS that provides a collaboration platform to author and enforce data contracts. "Data contracts" are API-based agreements between the software engineers who own upstream data sources and the data engineers and analysts who consume that data to build machine learning models and analytics.
  • 7
    Salesforce Data Cloud
    Salesforce Data Cloud is a real-time data platform designed to unify and manage customer data from multiple sources across an organization, enabling a single, comprehensive view of each customer. It allows businesses to collect, harmonize, and analyze data in real time, creating a 360-degree customer profile that can be leveraged across Salesforce’s various applications, such as Marketing Cloud, Sales Cloud, and Service Cloud. This platform enables faster, more personalized customer interactions by integrating data from online and offline channels, including CRM data, transactional data, and third-party data sources. Salesforce Data Cloud also offers advanced AI agents and analytics capabilities, helping organizations gain deeper insights into customer behavior and predict future needs. By centralizing and refining data for actionable use, Salesforce Data Cloud supports enhanced customer experiences, targeted marketing, and efficient, data-driven decision-making across departments.
  • 8
    Adaptive

    Adaptive is a data security platform designed to prevent sensitive data exposure across all human and non-human entities. It offers a secure control plane to protect and access data, featuring an agentless architecture that requires zero network reconfiguration and can be deployed in the cloud or on-premises. The platform enables organizations to share privileged access to data sources without sharing actual credentials, enhancing security posture. It supports just-in-time access to various data sources, including databases, cloud infrastructure resources, data warehouses, and web services. Adaptive also facilitates non-human data access by connecting third-party tools or ETL pipelines through a central interface without exposing data source credentials. To minimize data exposure, the platform provides data masking and tokenization for non-privileged users without altering access workflows. Comprehensive auditability is achieved through identity-based audit trails across all resources.
  • 9
    EDB Postgres AI
    A modern Postgres data platform for operators, developers, data engineers, and AI builders powering mission-critical workloads from edge to core. Flexible deployment across hybrid and multi-cloud. EDB Postgres AI is the first intelligent data platform for transactional, analytical, and new AI workloads powered by an enhanced Postgres engine. It can be deployed as a cloud-managed service, self-managed software, or as a physical appliance. It delivers built-in observability, AI-driven assistance, migration tooling, and a single pane of glass for managing hybrid data estates. EDB Postgres AI helps elevate data infrastructure to a strategic technology asset by bringing analytical and AI systems closer to customers’ core operational and transactional data, all managed through the world’s most popular open source database, Postgres. Modernize from legacy systems, with the most comprehensive Oracle compatibility for Postgres, and a suite of migration tooling to get customers onboard.
  • 10
    Unity Catalog

    Databricks

    Databricks Unity Catalog is the industry’s only unified and open governance solution for data and AI, built into the Databricks Data Intelligence Platform. With Unity Catalog, organizations can seamlessly govern both structured and unstructured data in any format, as well as machine learning models, notebooks, dashboards, and files across any cloud or platform. Data scientists, analysts, and engineers can securely discover, access, and collaborate on trusted data and AI assets across platforms, leveraging AI to boost productivity and unlock the full potential of the lakehouse environment. This unified and open approach to governance promotes interoperability and accelerates data and AI initiatives while simplifying regulatory compliance. Easily discover and classify both structured and unstructured data in any format, including machine learning models, notebooks, dashboards, and files across all cloud platforms.
  • 11
    Sequelize

    Sequelize is a modern TypeScript and Node.js ORM for Oracle, Postgres, MySQL, MariaDB, SQLite, SQL Server, and more. It features solid transaction support, relations, eager and lazy loading, read replication, and more. Define your models with ease and make optional use of automatic database synchronization. Define associations between models and let Sequelize handle the heavy lifting. Mark data as deleted instead of removing it once and for all from the database. Transactions, migrations, strong typing, JSON querying, lifecycle events (hooks), and more. Sequelize is a promise-based Node.js ORM tool for Postgres, MySQL, MariaDB, SQLite, Microsoft SQL Server, Oracle Database, Amazon Redshift, and Snowflake’s Data Cloud. To connect to the database, you must create a Sequelize instance.
  • 12
    TROCCO

    primeNumber Inc

    TROCCO is a fully managed modern data platform that enables users to integrate, transform, orchestrate, and manage their data from a single interface. It supports a wide range of connectors, including advertising platforms like Google Ads and Facebook Ads, cloud services such as AWS Cost Explorer and Google Analytics 4, various databases like MySQL and PostgreSQL, and data warehouses including Amazon Redshift and Google BigQuery. The platform offers features like Managed ETL, which allows for bulk importing of data sources and centralized ETL configuration management, eliminating the need to manually create ETL configurations individually. Additionally, TROCCO provides a data catalog that automatically retrieves metadata from data analysis infrastructure, generating a comprehensive catalog to promote data utilization. Users can also define workflows to create a series of tasks, setting the order and combination to streamline data processing.
  • 13
    Lumi AI

    Lumi AI is an enterprise analytics platform that enables users to explore data and extract custom insights through natural language queries, eliminating the need for SQL or Python expertise. It offers self-service analytics, conversational analytics, customizable visualizations, knowledge management, seamless integrations, and robust security features. Lumi AI supports diverse teams, including data analysis, supply chain management, procurement, sales, merchandising, and financial planning, by providing actionable insights tailored to unique business terms and metrics. Its agentic workflows address simple to complex queries, uncover root causes, and facilitate complex analyses, all while interpreting business-specific language. Lumi AI integrates effortlessly with various data sources, ensuring enterprise-grade security by processing data within the client's network and offering advanced user permissions and query controls.
  • 14
    Floqer

    Floqer is the leading CRM data enrichment tool that enriches with trusted data from over 75 industry‑leading sources and employs AI agents to automate the manual research that GTM teams shouldn’t be doing. As a single source of truth, it eliminates duplicates and gaps by enriching every lead the moment it enters your CRM and consolidating pipelines from every source into one clean record. Prospect research runs on autopilot, surfacing key buyer context and next‑step signals without manual work, while real‑time enrichment and an intuitive interface ensure revenue teams always have up‑to‑date, accurate insights to drive smarter outreach and decision‑making.
  • 15
    Sangfor Database Management Platform
    Sangfor DMP is a unified database management solution built on the Sangfor Cloud Platform that simplifies operations and maintenance across the entire lifecycle of relational databases by automating deployment, ensuring high availability through built‑in fault‑handling mechanisms, and safeguarding critical data with robust, transaction‑consistent backups and one‑click disaster recovery failover. It consolidates management into a single console to streamline monitoring, patching, maintenance workflows, and centralized oversight of existing instances, while enforcing standardized best practices to prevent misconfigurations. Real‑time health monitoring and alerting empower proactive issue resolution, and automated operations free IT resources for strategic initiatives. Use cases range from rapid, error‑free database provisioning and centralized management to comprehensive backup and recovery operations that minimize downtime.
  • 16
    PipelineDB

    PipelineDB is a PostgreSQL extension for high-performance time-series aggregation, designed to power realtime reporting and analytics applications. PipelineDB allows you to define continuous SQL queries that perpetually aggregate time-series data and store only the aggregate output in regular, queryable tables. You can think of this concept as extremely high-throughput, incrementally updated materialized views that never need to be manually refreshed. Raw time-series data is never written to disk, making PipelineDB extremely efficient for aggregation workloads. Continuous queries produce their own output streams, and thus can be chained together into arbitrary networks of continuous SQL.
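    The continuous-view idea above can be illustrated with a small sketch. This is a toy Python model of an incrementally updated aggregate, not PipelineDB's actual API (PipelineDB declares continuous views in SQL); the per-minute counter and event shape are made up for illustration.

```python
from collections import defaultdict
from datetime import datetime, timezone

class ContinuousCountView:
    """Toy model of an incrementally updated aggregate: each raw event
    updates a per-minute count and is then discarded, so only the
    aggregate output is ever retained."""

    def __init__(self):
        self.counts = defaultdict(int)  # aggregate state: minute bucket -> count

    def ingest(self, event_ts: datetime) -> None:
        bucket = event_ts.replace(second=0, microsecond=0)
        self.counts[bucket] += 1  # the raw event itself is not stored

view = ContinuousCountView()
t = datetime(2024, 1, 1, 12, 0, 30, tzinfo=timezone.utc)
view.ingest(t)
view.ingest(t.replace(second=45))
print(view.counts[t.replace(second=0, microsecond=0)])  # prints 2
```

    The aggregate state stays small no matter how many raw events flow through, which is the property that makes this approach efficient for high-throughput aggregation workloads.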
  • 17
    DataSift

    Extract insights from a universe of human-created data. With data from social networks, blogs, news, and more. Integrate social, blog and news data in a single place. Real-time and historic data from billions of data points. Normalized and enriched data in real-time for accurate analysis. With DataSift, you can deliver Human Data into business intelligence (BI) tools and business processes in real-time. You can also innovate with our powerful API to build your own apps. Human Data is the fastest growing type of data that covers the entire spectrum of human-generated information regardless of format or channel through which it is delivered. It includes text, image, audio or video shared with other people on social networks, blogs, news content and inside the business. The DataSift Human Data platform unifies all the data - real-time and historical - in one place, unlocks its meaning and delivers it for use anywhere in the business.
  • 18
    EDB Postgres Advanced Server
    An enhanced version of PostgreSQL that is continuously synchronized with PostgreSQL, with enhancements for security, DBA and developer features, and Oracle database compatibility. Manage deployment, high availability, and automated failover from Kubernetes. Deploy anywhere with lightweight, immutable Postgres containers. Automate failover, switchover, backup, recovery, and rolling updates. The operator and images are portable to any cloud so you can avoid lock-in. Overcome containerization and Kubernetes challenges with our experts. Oracle compatibility means you can leave your legacy database without starting over. Migrate database and client applications faster with fewer rewrite problems. Improve the end-user experience by tuning and boosting performance. Deploy on-premises, in the cloud, or both. In a world where downtime means revenue loss, high availability is key for business continuity.
    Starting Price: $1000.00/one-time
  • 19
    TrendMiner

    TrendMiner is a fast, powerful, and intuitive advanced industrial analytics platform designed for real-time monitoring and troubleshooting of industrial processes. It provides robust data collection, analysis, and visualization, enabling everyone in industrial operations to make smarter data-driven decisions efficiently and accelerate innovation, optimization, and sustainable growth. TrendMiner, a Proemion company, was founded in 2008, with global headquarters in Belgium and offices in the U.S., Germany, Spain, and the Netherlands. TrendMiner has strategic partnerships with major players such as Amazon, Microsoft, SAP, GE Digital, Siemens, and Aveva, and offers standard integrations with a wide range of historians such as OSIsoft PI, Yokogawa Exaquantum, AspenTech IP.21, Honeywell PHD, GE Proficy Historian, and Wonderware InSQL.
  • 20
    WhereScape

    WhereScape Software

    WhereScape helps IT organizations of all sizes leverage automation to design, develop, deploy, and operate data infrastructure faster. More than 700 customers worldwide rely on WhereScape automation to eliminate hand-coding and other repetitive, time-intensive aspects of data infrastructure projects to deliver data warehouses, vaults, lakes and marts in days or weeks rather than in months or years. From data warehouses and vaults to data lakes and marts, deliver data infrastructure and big data integration fast. Quickly and easily plan, model and design all types of data infrastructure projects. Use sophisticated data discovery and profiling capabilities to bulletproof design and rapid prototyping to collaborate earlier with business users. Fast-track the development, deployment and operation of your data infrastructure projects. Dramatically reduce the delivery time, effort, cost and risk of new projects, and better position projects for future business change.
  • 21
    bit.io

    A single, giant, secure Postgres database. Public and private data. Add data in seconds. Query across all the data. Share with one click. Click Upload, or simply drag your dataset onto bit.io to upload tables for your repos. Add friends, create teams, or share a link to your repo with specific access levels. Hopping around the site and see some data you like? Query it, whenever, wherever. If the repo is public, you can query the data in a snap. Here are a few examples of the various ways to use data in bit.io. Secure by default. Stop accidentally leaving your database open to the internet with a default password, or no password at all. With bit.io, your data is secure in motion (with TLS/SSL) and at rest (encrypted on disk). Revoke access with the click of a button rather than an arcane SQL statement. Check out our sample repo! Feel free to play around and create your own repo using our data.
  • 22
    DTM Data Generator

    A fast test data generation engine with about 70 built-in functions and an expression processor that enables users to define complex test data with dependencies, internal structure, and relationships. The product analyzes the existing database schema and resolves master-detail key structures (relationships) automatically. The Value Library provides predefined data sets: names, countries, cities, streets, currencies, companies, industries, departments, regions, etc. The Variables and Named Generators features provide a way to share data generation properties across similar columns. The intelligent schema analyzer makes your data realistic without extra project modifications, and the "data by example" feature adds further realism without extra effort.
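    As a sketch of how dependency-aware generation keeps master-detail keys consistent, the toy Python below draws detail foreign keys from generated master keys, so referential integrity holds by construction. The table shapes and the tiny "value library" are invented for illustration; this is not DTM's engine.

```python
import random

random.seed(42)  # reproducible toy data
FIRST_NAMES = ["Ana", "Bob", "Chen", "Dara"]  # stand-in "value library"

# Master table: customers with generated primary keys.
customers = [{"id": i, "name": random.choice(FIRST_NAMES)} for i in range(1, 4)]

# Detail table: orders whose foreign keys are sampled from the master keys,
# mirroring automatic master-detail (relationship) resolution.
orders = [
    {"id": 100 + n,
     "customer_id": random.choice(customers)["id"],
     "amount": round(random.uniform(5, 500), 2)}
    for n in range(5)
]

customer_ids = {c["id"] for c in customers}
assert all(o["customer_id"] in customer_ids for o in orders)  # FK integrity holds
```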
  • 23
    Singer

    Singer describes how data extraction scripts called “taps” and data loading scripts called “targets” should communicate, allowing them to be used in any combination to move data from any source to any destination. Send data between databases, web APIs, files, queues, and just about anything else you can think of. Singer taps and targets are simple applications composed with pipes—no daemons or complicated plugins needed. Singer applications communicate with JSON, making them easy to work with and implement in any programming language. Singer also supports JSON Schema to provide rich data types and rigid structure when needed. Singer makes it easy to maintain state between invocations to support incremental extraction.
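    The tap/target message flow can be sketched in a few lines. The stream name, fields, and state key below are made up for illustration; real taps follow the same SCHEMA/RECORD/STATE shape defined by the Singer spec.

```python
import json

def tap_messages(rows):
    """Toy Singer-style tap: emit a SCHEMA message, then RECORDs, then STATE."""
    yield {"type": "SCHEMA", "stream": "users",
           "schema": {"properties": {"id": {"type": "integer"},
                                     "name": {"type": "string"}}},
           "key_properties": ["id"]}
    for row in rows:
        yield {"type": "RECORD", "stream": "users", "record": row}
    # STATE lets the next invocation resume where this one left off.
    yield {"type": "STATE", "value": {"users_max_id": max(r["id"] for r in rows)}}

lines = [json.dumps(m) for m in tap_messages([{"id": 1, "name": "Ada"},
                                              {"id": 2, "name": "Grace"}])]
for line in lines:
    print(line)  # a target would read these JSON lines from stdin
```

    Because each message is one JSON object per line on stdout, any two Singer programs can be chained with an ordinary shell pipe (`tap | target`).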
  • 24
    Conduit

    Sync data between your production systems using an extensible, event-first experience with minimal dependencies that fits within your existing workflow. Eliminate the multi-step process you go through today. Just download the binary and start building. Conduit pipelines listen for changes to a database, data warehouse, etc., and allow your data applications to act upon those changes in real time. Conduit connectors give you the ability to pull and push data to any production datastore you need. If a datastore is missing, the simple SDK allows you to extend Conduit where you need it. Run it in a way that works for you; use it as a standalone service or orchestrate it within your infrastructure.
  • 25
    QDeFuZZiner

    A project is the basic entity in the QDeFuZZiner software. Each project contains definitions of the two source datasets to be imported and analyzed (the so-called "left dataset" and "right dataset"), as well as a variable number of corresponding solutions, which are stored definitions of how to perform the fuzzy match analysis. On creation, each project is assigned a unique project tag. When raw data is imported to the server, the corresponding input tables get that tag appended to their names. This way, imported tables are always tagged with the project name, which ensures their uniqueness. During importing, and also later during solution creation and execution, QDeFuZZiner creates various indexes on the underlying PostgreSQL database, which facilitate fuzzy data matching. Datasets are imported from source spreadsheets (.xlsx, .xls, .ods) or CSV (comma-separated values) flat files into the server database, where the corresponding left and right database tables are then created, indexed, and processed.
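    The left/right fuzzy-match concept can be sketched with Python's standard library. This toy uses difflib similarity scores rather than QDeFuZZiner's PostgreSQL-backed engine, and the sample company names are invented.

```python
from difflib import SequenceMatcher

left = ["Acme Corporation", "Globex Inc."]              # "left dataset"
right = ["ACME Corp", "Globex Incorporated", "Initech"]  # "right dataset"

def similarity(a: str, b: str) -> float:
    # Case-insensitive similarity ratio in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# For each left record, pick the best-scoring right record.
matches = [(l, max(right, key=lambda r: similarity(l, r))) for l in left]
print(matches)
```

    A real solution would also apply a similarity threshold, so that left records with no plausible counterpart stay unmatched.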
  • 26
    SSIS Integration Toolkit
    Jump right to our product page to see our full range of data integration software, including solutions for SharePoint and Active Directory. With over 300 individual data integration tools for connectivity and productivity, our data integration solutions allow developers to take advantage of the flexibility and power of the SSIS ETL engine to integrate virtually any application or data source. You don't have to write a single line of code to make data integration happen so your development can be done in a matter of minutes. We make the most flexible integration solution on the market. Our software offers intuitive user interfaces that are flexible and easy to use. With a streamlined development experience and an extremely simple licensing model, our solution offers the best value for your investment. Our software offers many specifically designed features that help you achieve the best possible performance without having to hijack your budget.
  • 27
    Channel

    Channel AI

    Ask any data question, in plain English. Connect your database, ask a question, get an answer. Get the answers you need without knowing SQL. Self-serve your data insights, finally. Query in plain English. No matter how complex your warehouse, Channel learns how to get the answers you need from just plain English. Beautiful visualization. Channel automatically generates beautiful visualizations for your data, and picks the right chart type based on your preferences. Self-service, for real. Channel is designed to be used by anyone, from analysts to product managers. No more waiting for the data you need. Answer the questions you should be asking. Channel surfaces the insights you didn't know you needed, by analyzing your warehouse ahead of time. Combine your knowledge. Channel learns from every question it's ever asked, and prompts you to ask the questions that really matter. Shared definitions. Keep track of how important terms are defined across your organization.
  • 28
    Fujitsu Enterprise Postgres
    Fujitsu Enterprise Postgres is an exceptionally reliable and robust relational database, created for organizations that require strong query performance and high availability. It is based on the world-renowned open-source system — PostgreSQL, with additional enterprise-grade features for enhanced security and better performance. Fujitsu Enterprise Postgres is installed and operated by Fujitsu’s database experts, who can also assist with the migration of data from your existing database. Based on PostgreSQL, FEP is highly compatible with other systems and applications. The simple, clean graphical user interface improves the experience for DBAs executing core database functions, such as running queries, scanning, and backup, making your data and reporting more accessible.
  • 29
    BigBI

    BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding! BigBI unleashes the power of Apache Spark, enabling: scalable processing of real big data (up to 100X faster); integration of traditional data (SQL, batch files) with modern data sources, including semi-structured (JSON, NoSQL DBs, Elastic, Hadoop) and unstructured (text, audio, video); and integration of streaming data, cloud data, AI/ML, and graphs.
  • 30
    Ariga

    Define the desired schema of your database in a familiar and declarative syntax. Our open-source engine automatically generates migration plans that are verified during your team’s existing CI pipeline and safely deployed to production. Our platform rigorously simulates and analyzes each proposed change during your team’s existing CI flow to ensure no one accidentally breaks the database. Catch destructive changes, backward-incompatibility issues, accidental table locks and constraint violations way before they reach production. Deploy database schema changes as part of your continuous delivery pipelines with our Terraform or Helm integrations. Using our platform, you can safely execute changes that cannot be completely verified during CI, as they depend on the data in the target database.
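    The declarative plan-from-diff idea can be sketched as follows. This toy compares a desired schema against the current one and emits a migration plan; the table and column names are invented, and the real Ariga tooling inspects live databases and performs far richer analysis.

```python
# Current and desired schemas as {table: {column: type}} mappings.
current = {"users": {"id": "bigint", "name": "text"}}
desired = {"users": {"id": "bigint", "name": "text", "email": "text"},
           "audit_log": {"id": "bigint", "entry": "jsonb"}}

plan = []
for table, cols in desired.items():
    if table not in current:
        body = ", ".join(f"{c} {t}" for c, t in cols.items())
        plan.append(f"CREATE TABLE {table} ({body});")
    else:
        for col, typ in cols.items():
            if col not in current[table]:
                plan.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ};")

# A CI linter step would flag destructive diffs (e.g. dropped columns) here.
for stmt in plan:
    print(stmt)
```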