81 Integrations with Activeeon ProActive

View a list of Activeeon ProActive integrations and software that integrates with Activeeon ProActive below. Compare the best Activeeon ProActive integrations as well as features, ratings, user reviews, and pricing of software that integrates with Activeeon ProActive. Here are the current Activeeon ProActive integrations in 2024:

  • 1
    Talend Data Integration
    Talend Data Integration lets you connect and manage all your data, no matter where it lives. Use more than 1,000 connectors and components to connect virtually any data source with virtually any data environment, in the cloud or on premises. Easily develop and deploy reusable data pipelines with a drag-and-drop interface that’s 10 times faster than hand-coding. Talend also supports scaling massive data sets for advanced data analytics on Spark platforms, and partners with leading cloud service providers, data warehouses, and analytics platforms, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, Snowflake, and Databricks. With Talend, data quality is embedded into every step of the data integration process. Discover, highlight, and fix issues as data moves through your systems, before inconsistencies can disrupt or impact crucial decisions. Connect to data where it lives, and use it where you need it.
  • 2
    Informatica Cloud Data Integration
    Ingest data with high-performance ETL, mass ingestion, or change data capture. Integrate data on any cloud, with ETL, ELT, Spark, or with a fully managed serverless option. Integrate any application, whether it’s on-premises or SaaS. Process petabytes of data up to 72x faster within your cloud ecosystem. See how you can use Informatica’s Cloud Data Integration to quickly start building high-performance data pipelines to meet any data integration need. Efficiently ingest databases, files, and streaming data for real-time data replication and streaming analytics. Integrate apps & data in real time with intelligent business processes that span cloud & on-premises sources. Easily integrate message- and event-based systems, queues, and topics with support for top tools. Connect to a wide range of applications (and any API) and integrate in real-time with APIs, messaging, and pub/sub support—no coding required.
  • 3
    Node.js

    As an asynchronous event-driven JavaScript runtime, Node.js is designed to build scalable network applications. Upon each connection, a callback is fired; if there is no work to be done, Node.js sleeps. This is in contrast to today's more common concurrency model, in which OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node.js are free from worries of deadlocking the process, since there are no locks. Almost no function in Node.js directly performs I/O, so the process never blocks except when I/O is performed using the synchronous methods of the Node.js standard library. Because nothing blocks, scalable systems are very reasonable to develop in Node.js. Node.js is similar in design to, and influenced by, systems like Ruby's Event Machine and Python's Twisted, but it takes the event model a bit further: it presents an event loop as a runtime construct instead of as a library.
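The callback model described above can be sketched in a few lines of Node.js (a minimal illustration, not from the vendor's materials): a callback is registered and the script keeps running; the callback fires later from the event loop, so nothing blocks while waiting.

```javascript
// Minimal sketch of Node.js's event-driven, non-blocking model.
const order = [];

order.push('script start');

// setImmediate queues the callback on the event loop; it runs only
// after the current synchronous code has finished, without blocking it.
setImmediate(() => {
  order.push('callback fired');
  console.log(order.join(' -> '));
});

order.push('script end');
// Prints: script start -> script end -> callback fired
```

The same ordering holds for I/O callbacks such as those passed to `fs.readFile`: the script reaches its end before the callback runs.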
  • 4
    LDAP Admin Tool
    The Professional Edition of LDAP Admin Tool contains more features, such as predefined customizable searches for both LDAP (one-click searches for common LDAP objects) and Active Directory (over 200 common one-click searches). This is the edition of LDAP Admin Tool you’ll want if you use your machine mainly in a professional setting; most business users and administrators need it to quickly search the directory tree with one-click searches and to schedule export tasks. While assigning members to groups it is often necessary to know nested assignments, and the software lets you view the updated nested members of groups as you assign them. SQLLDAP is an easy, SQL-like syntax for querying and updating LDAP, and you can build and edit queries visually, dragging and dropping keywords and attributes.
    Starting Price: $95 per year
  • 5
    JavaScript

    JavaScript is a scripting and programming language for the web that enables developers to build dynamic elements on the web. Over 97% of the websites in the world use client-side JavaScript, making it one of the most important scripting languages on the web. Strings in JavaScript are contained within a pair of either single quotation marks '' or double quotation marks "". Both represent strings, but be sure to choose one delimiter and stick with it: if you start with a single quote, you must end with a single quote. There are pros and cons to both; for example, single quotes tend to make it easier to write HTML within JavaScript, as you don’t have to escape the double quotes around attributes. If you need quotation marks inside a string, use the opposite kind of quotation mark inside the string from the one delimiting it.
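A minimal sketch of the quoting rules described above (illustrative only):

```javascript
// A string must open and close with the same quote character;
// the opposite quote can appear inside it freely.
const single = 'He said "hello" to me'; // double quotes inside single quotes
const double = "It's a fine day";       // apostrophe inside double quotes
const escaped = 'It\'s fine too';       // or escape the same quote character

console.log(single);
console.log(double);
console.log(escaped);
```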
  • 6
    OpenStack

    OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed and provisioned through APIs with common authentication mechanisms. A dashboard is also available, giving administrators control while empowering their users to provision resources through a web interface. Beyond standard infrastructure-as-a-service functionality, additional components provide orchestration, fault management, and service management, amongst other services, to ensure high availability of user applications. OpenStack is broken up into services to allow you to plug and play components depending on your needs. The OpenStack map gives you an “at a glance” view of the OpenStack landscape, showing where those services fit and how they can work together.
  • 7
    Tensor

    Tensor's mission is to become the trading venue for the pro NFT trader. We started Tensor because we ourselves were flipping NFTs daily and weren't satisfied with existing tooling. We wanted something faster, with better coverage, more data, and advanced order types, and so Tensor was born. When you go to Tensor you'll find a single coherent dApp, but under the hood we actually have a few moving parts. Bonding-curve-based orders, linear and exponential, let you DCA into and out of NFTs. Instant new collection listings (we appreciate traders want to always trade the latest stuff). Earn trading fees and LP rewards by providing liquidity and creating markets for your favorite NFT collections on TensorSwap. Market makers are important because they make markets more liquid, meaning they let other traders enter and exit the market at a more favorable price.
  • 8
    Horovod

    Horovod was originally developed by Uber to make distributed deep learning fast and easy to use, bringing model training time down from days and weeks to hours and minutes. With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premises or run out of the box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
    Starting Price: Free
  • 9
    H2O.ai

    H2O.ai is the open source leader in AI and machine learning, with a mission to democratize AI for everyone. Our industry-leading, enterprise-ready platforms are used by hundreds of thousands of data scientists in over 20,000 organizations globally. We empower every company to be an AI company, in financial services, insurance, healthcare, telco, retail, pharmaceutical, and marketing, delivering real value and transforming businesses today.
  • 10
    KNIME Analytics Platform
    One enterprise-grade software platform, two complementary tools: the open source KNIME Analytics Platform for creating data science, and the commercial KNIME Server for productionizing it. KNIME Analytics Platform is the open source software for creating data science. Intuitive, open, and continuously integrating new developments, KNIME makes understanding data and designing data science workflows and reusable components accessible to everyone. KNIME Server is the enterprise software for team-based collaboration, automation, management, and deployment of data science workflows as analytical applications and services. Non-experts are given access to data science via KNIME WebPortal or can use REST APIs. Do even more with your data using extensions for KNIME Analytics Platform; some are developed and maintained by us at KNIME, others by the community and our trusted partners. We also have integrations with many open source projects.
  • 11
    NETSOL

    NETSOL Technologies

    Welcome to the future of financial services with NETSOL. Based on next-generation technology, our platform offers end-to-end asset finance and leasing solutions for seamless retail and wholesale operations, digital retail, and out-of-the-box, API-first products for the global financial services industry. Our platform revolutionizes your operations, from originations to servicing, and adapts to your needs, empowering you to navigate today’s dynamic and changing landscape with ease. Effectively manage complex multi-site and multi-currency operations and enable your organization to thrive in hypercompetitive markets globally. Utilizing the power of AI and data analytics, we enable users to track performance, identify trends, and make data-driven decisions to optimize processes. Our platform is a global system that meets local requirements; it can be used in multi-national, multi-company, multi-asset, multi-lingual, multi-distributor, and multi-manufacturer environments.
  • 12
    Oracle Database
    Oracle database products offer customers cost-optimized and high-performance versions of Oracle Database, the world's leading converged, multi-model database management system, as well as in-memory, NoSQL, and MySQL databases. Oracle Autonomous Database, available on-premises via Oracle Cloud@Customer or in the Oracle Cloud Infrastructure, enables customers to simplify relational database environments and reduce management workloads. Oracle Autonomous Database eliminates the complexity of operating and securing Oracle Database while giving customers the highest levels of performance, scalability, and availability. Oracle Database can be deployed on-premises when customers have data residency and network latency concerns. Customers with applications that are dependent on specific Oracle database versions have complete control over the versions they run and when those versions change.
  • 13
    Oracle Cloud Infrastructure
    Oracle Cloud Infrastructure supports traditional workloads and delivers modern cloud development tools. It is architected to detect and defend against modern threats, so you can innovate more. Combine low cost with high performance to lower your TCO. Oracle Cloud is a Generation 2 enterprise cloud that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. Built from the ground up to meet the needs of mission-critical applications, Oracle Cloud supports all legacy workloads while delivering modern cloud development tools, enabling enterprises to bring their past forward as they build their future. Our Generation 2 Cloud is the only one built to run Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud offers a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, AI & blockchain.
  • 14
    PostgreSQL

    PostgreSQL Global Development Group

    PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. There is a wealth of information describing how to install and use PostgreSQL in the official documentation, and the open-source community provides many helpful places to become familiar with PostgreSQL, discover how it works, and find career opportunities. Learn more about how to engage with the community. The PostgreSQL Global Development Group has released an update to all supported versions of PostgreSQL, including 15.1, 14.6, 13.9, 12.13, 11.18, and 10.23. This release fixes 25 bugs reported over the last several months. This is the final release of PostgreSQL 10, which will no longer receive security and bug fixes. If you are running PostgreSQL 10 in a production environment, we suggest that you make plans to upgrade.
  • 15
    Hadoop

    Apache Software Foundation

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. A wide variety of companies and organizations use Hadoop for both research and production, and users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
  • 16
    Apache Spark

    Apache Software Foundation

    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
  • 17
    Kibana

    Elastic

    Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps. Kibana gives you the freedom to select the way you give shape to your data. With its interactive visualizations, start with one question and see where it leads you. Kibana core ships with the classics: histograms, line graphs, pie charts, sunbursts, and more. And, of course, you can search across all of your documents. Leverage Elastic Maps to explore location data, or get creative and visualize custom layers and vector shapes. Perform advanced time series analysis on your Elasticsearch data with our curated time series UIs. Describe queries, transformations, and visualizations with powerful, easy-to-learn expressions.
  • 18
    VMware Cloud
    Build, run, manage, connect and protect all of your apps on any cloud. The Multi-Cloud solutions from VMware deliver a cloud operating model for all applications. Support your digital business initiatives with the world’s most proven and widely deployed cloud infrastructure. Leverage the same skills you use in the data center, while tapping into the depth and breadth of six global hyperscale public cloud providers and 4,000+ VMware Cloud Provider Partners. With hybrid cloud built on VMware Cloud Foundation, you get consistent infrastructure and operations for new and existing cloud native applications, from data center to cloud to edge. This consistency improves agility and reduces complexity, cost and risk. Build, run and manage modern apps on any cloud, meeting diverse needs with on-premises and public cloud resources. Manage both container-based workloads and traditional VM-based workloads on a single platform.
  • 19
    Azure Data Lake
    Azure Data Lake includes all the capabilities required to make it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all of your data while making it faster to get up and running with batch, streaming, and interactive analytics. Azure Data Lake works with existing IT investments for identity, management, and security for simplified data management and governance. It also integrates seamlessly with operational stores and data warehouses so you can extend current data applications. We’ve drawn on the experience of working with enterprise customers and running some of the largest scale processing and analytics in the world for Microsoft businesses like Office 365, Xbox Live, Azure, Windows, Bing, and Skype. Azure Data Lake solves many of the productivity and scalability challenges that prevent you from maximizing the value of your data.
  • 20
    Swarm

    Docker

    Current versions of Docker include swarm mode for natively managing a cluster of Docker Engines called a swarm. Use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior. Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don’t need additional orchestration software to create or manage a swarm. Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image. Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack.
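As a sketch of the declarative service model described above, a hypothetical minimal stack file might look like the following (the service name, image, and port mapping are invented for the example): you declare the desired state, and swarm mode converges the cluster to it.

```yaml
# Hypothetical stack file (e.g. stack.yml); deploy with:
#   docker swarm init
#   docker stack deploy -c stack.yml demo
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3          # swarm keeps three tasks of this service running
    ports:
      - "8080:80"
```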
  • 21
    Apache Storm

    Apache Software Foundation

    Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Apache Storm is simple, can be used with any programming language, and is a lot of fun to use! Apache Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Apache Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Apache Storm integrates with the queueing and database technologies you already use. An Apache Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.
  • 22
    NVIDIA RAPIDS
    The RAPIDS suite of software libraries, built on CUDA-X AI, gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes. Accelerate your Python data science toolchain with minimal code changes and no new tools to learn. Increase machine learning model accuracy by iterating on models faster and deploying them more frequently.
  • 23
    Apache ZooKeeper

    Apache Software Foundation

    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications, and each time they are implemented, a lot of work goes into fixing the inevitable bugs and race conditions. Because these services are difficult to implement, applications initially tend to skimp on them, which makes them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.
  • 24
    Podman

    Containers

    What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Containers can be run either as root or in rootless mode. Simply put: alias docker=podman. Manage pods, containers, and container images. We believe that Kubernetes is the de facto standard for composing pods and for orchestrating containers, making Kubernetes YAML a de facto standard file format. Hence, Podman allows the creation and execution of pods from a Kubernetes YAML file (see podman-play-kube). Podman can also generate Kubernetes YAML based on a container or pod (see podman-generate-kube), which allows for an easy transition from a local development environment to a production Kubernetes cluster.
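To illustrate podman-play-kube, here is a hypothetical minimal pod definition (the pod name and image are invented for the example); with Podman installed, it could be run locally with `podman play kube pod.yml`:

```yaml
# Hypothetical pod.yml: a single-container pod runnable by Podman
# or by a Kubernetes cluster unchanged.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
```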
  • 25
    MXNet

    The Apache Software Foundation

    A hybrid front-end seamlessly transitions between Gluon eager imperative mode and symbolic mode to provide both flexibility and speed. Scalable distributed training and performance optimization in research and production are enabled by the dual parameter server and Horovod support. Deep integration into Python and support for Scala, Julia, Clojure, Java, C++, R, and Perl. A thriving ecosystem of tools and libraries extends MXNet and enables use cases in computer vision, NLP, time series, and more. Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. Join the MXNet scientific community to contribute, learn, and get answers to your questions.
  • 26
    IBM InfoSphere Data Architect
    A data design solution that enables you to discover, model, relate, standardize and integrate diverse and distributed data assets throughout the enterprise. IBM InfoSphere® Data Architect is a collaborative enterprise data modeling and design solution that can simplify and accelerate integration design for business intelligence, master data management and service-oriented architecture initiatives. InfoSphere Data Architect enables you to work with users at every step of the data design process, from project management to application design to data design. The tool helps to align processes, services, applications and data architectures. Simple warehouse design, dimensional modeling and change management tasks help reduce development time and give you the tools to design and manage warehouses from an enterprise logical model. Time stamped, column-organized tables offer a better understanding of data assets to help increase efficiency and reduce time to market.
  • 27
    Azure HDInsight
    Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance with more than 30 certifications helps protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date.
  • 28
    Azure Databricks
    Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks: set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO).
  • 29
    Spark Streaming

    Apache Software Foundation

    Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs. It supports Java, Scala and Python. Spark Streaming recovers both lost work and operator state (e.g. sliding windows) out of the box, without any extra code on your part. By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad-hoc queries on stream state. Build powerful interactive applications, not just analytics. Spark Streaming is developed as part of Apache Spark. It thus gets tested and updated with each Spark release. You can run Spark Streaming on Spark's standalone cluster mode or other supported cluster resource managers. It also includes a local run mode for development. In production, Spark Streaming uses ZooKeeper and HDFS for high availability.
  • 30
    SQL

    SQL is a domain-specific programming language used for accessing, managing, and manipulating relational databases and relational database management systems.
  • 31
    Microsoft System Center Operations Manager (SCOM)
    Operations Manager provides infrastructure monitoring that is flexible and cost-effective, helps ensure the predictable performance and availability of vital applications, and offers comprehensive monitoring for your datacenter and cloud, both private and public.