15 Integrations with Kedro

Below is a list of software that integrates with Kedro. Compare the best Kedro integrations, along with their features, ratings, user reviews, and pricing. Here are the current Kedro integrations in 2024:

  • 1
    Vertex AI
    Google
    Build, deploy, and scale machine learning (ML) models faster, with fully managed ML tools for any use case. Through Vertex AI Workbench, Vertex AI is natively integrated with BigQuery, Dataproc, and Spark. You can use BigQuery ML to create and execute machine learning models in BigQuery using standard SQL queries on existing business intelligence tools and spreadsheets, or you can export datasets from BigQuery directly into Vertex AI Workbench and run your models from there. Use Vertex Data Labeling to generate highly accurate labels for your data collection.
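    As a hedged sketch, submitting a training script as a Vertex AI custom job through the Vertex AI Python SDK (google-cloud-aiplatform) might look like the following; the project, bucket, script, and container names are placeholders:

        # Minimal sketch, assuming google-cloud-aiplatform is installed and the
        # placeholder project/bucket values are replaced with real ones.
        from google.cloud import aiplatform

        aiplatform.init(
            project="my-gcp-project",                  # hypothetical project ID
            location="us-central1",
            staging_bucket="gs://my-staging-bucket",   # hypothetical bucket
        )

        job = aiplatform.CustomTrainingJob(
            display_name="train-model",
            script_path="train.py",                    # your training script
            # example prebuilt training container; pick one matching your framework
            container_uri="us-docker.pkg.dev/vertex-ai/training/sklearn-cpu.1-0:latest",
        )
        job.run(replica_count=1, machine_type="n1-standard-4")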
  • 2
    Docker
    Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development on desktop and in the cloud. Docker’s comprehensive end-to-end platform includes UIs, CLIs, APIs, and security that are engineered to work together across the entire application delivery lifecycle. Get a head start on your coding by leveraging Docker images to efficiently develop your own unique applications on Windows and Mac. Create your multi-container application using Docker Compose. Integrate with your favorite tools throughout your development pipeline; Docker works with the development tools you already use, including VS Code, CircleCI, and GitHub (see the sketch below for driving Docker from Python). Package applications as portable container images to run consistently in any environment, from on-premises Kubernetes to AWS ECS, Azure ACI, Google GKE, and more. Leverage Docker Trusted Content, including Docker Official Images and images from Docker Verified Publishers.
    Starting Price: $7 per month
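    As a minimal sketch, driving Docker from Python with the Docker SDK (docker-py) looks roughly like this; it assumes a running Docker daemon, and the image name is just an example:

        # Requires `pip install docker` and a local Docker daemon.
        import docker

        client = docker.from_env()               # connect to the local daemon
        client.images.pull("python:3.11-slim")   # fetch a portable base image
        output = client.containers.run(
            "python:3.11-slim",
            command=["python", "-c", "print('hello from a container')"],
            remove=True,                         # clean the container up afterwards
        )
        print(output.decode())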
  • 3
    Jupyter Notebook
    Project Jupyter

    The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
  • 4
    Python

    The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists, as illustrated below. Python is easy to pick up whether you're a first-time programmer or experienced with other languages, and its documentation and mailing lists will help you along the way. The community hosts conferences and meetups to collaborate on code, and much more. The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities.
    Starting Price: Free
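    A short illustration of the argument styles mentioned above (mandatory and optional arguments, keyword arguments, and arbitrary argument lists):

        def describe(name, greeting="Hello", *args, **kwargs):
            # name is mandatory, greeting is optional, *args collects extra
            # positional values, **kwargs collects extra keyword values
            parts = [f"{greeting}, {name}!"]
            if args:
                parts.append(f"extras: {args}")
            if kwargs:
                parts.append(f"options: {kwargs}")
            return " ".join(parts)

        print(describe("Kedro"))                            # mandatory only
        print(describe("Kedro", "Hi", 1, 2, verbose=True))  # all four styles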
  • 5
    Plotly Dash
    Plotly

    Dash and Dash Enterprise let you build and deploy analytic web apps using Python, R, and Julia, with no JavaScript or DevOps required. Through Dash, the world's largest companies elevate AI, ML, and Python analytics to business users at 5% of the cost of a full-stack development approach. Deliver apps and dashboards that run advanced analytics: ML, NLP, forecasting, computer vision, and more. Work in the languages you love: Python, R, and Julia. Reduce costs by migrating legacy, per-seat licensed software to Dash Enterprise's open-core, unlimited end-user pricing model. Move faster by deploying and updating Dash apps without an IT or DevOps team. Create pixel-perfect dashboards and web apps without writing any CSS, as in the sketch below. Scale effortlessly with Kubernetes. Support mission-critical Python applications with high availability.
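    As a minimal sketch, an open source Dash app needs only a few lines of Python; the layout here is a hypothetical example (pip install dash):

        from dash import Dash, dcc, html

        app = Dash(__name__)
        app.layout = html.Div([
            html.H1("Pipeline metrics"),   # hypothetical dashboard title
            dcc.Graph(figure={
                "data": [{"x": [1, 2, 3], "y": [4, 1, 2], "type": "bar"}],
            }),
        ])

        if __name__ == "__main__":
            app.run(debug=True)            # serves on http://127.0.0.1:8050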
  • 6
    pandas

    pandas is a fast, powerful, flexible, and easy-to-use open source data analysis and manipulation tool, built on top of the Python programming language. It provides tools for reading and writing data between in-memory data structures and different formats: CSV and text files, Microsoft Excel, SQL databases, and the fast HDF5 format. Intelligent data alignment and integrated handling of missing data give you automatic label-based alignment in computations and make it easy to shape messy data into an orderly form. A powerful group by engine supports aggregating or transforming data with split-apply-combine operations on data sets. Time series functionality includes date range generation and frequency conversion, moving window statistics, date shifting, and lagging; you can even create domain-specific time offsets and join time series without losing data.
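    A small sketch of the split-apply-combine and time series features described above; the CSV path and column names are hypothetical:

        import pandas as pd

        df = pd.read_csv("sales.csv", parse_dates=["date"])

        # split-apply-combine: mean revenue per region
        per_region = df.groupby("region")["revenue"].mean()

        # time series: resample daily rows to monthly totals
        # ("M" is month-end frequency; spelled "ME" on pandas >= 2.2)
        monthly = df.set_index("date")["revenue"].resample("M").sum()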
  • 7
    Amazon SageMaker
    Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models. Traditional ML development is a complex, expensive, iterative process made even harder because there are no integrated tools for the entire machine learning workflow. You need to stitch together tools and workflows, which is time-consuming and error-prone. SageMaker solves this challenge by providing all of the components used for machine learning in a single toolset so models get to production faster with much less effort and at lower cost. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps. SageMaker Studio gives you complete access, control, and visibility into each step required.
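    As a hedged sketch, training and deploying a scikit-learn model with the SageMaker Python SDK might look like this; the script name, role ARN, and S3 path are placeholders:

        from sagemaker.sklearn.estimator import SKLearn

        estimator = SKLearn(
            entry_point="train.py",          # hypothetical training script
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
            instance_type="ml.m5.large",
            framework_version="1.2-1",
        )
        estimator.fit({"train": "s3://my-bucket/train/"})  # managed training job
        predictor = estimator.deploy(                      # hosted inference endpoint
            initial_instance_count=1,
            instance_type="ml.m5.large",
        )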
  • 8
    Matplotlib

    Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. Matplotlib makes easy things easy and hard things possible. A large number of third party packages extend and build on Matplotlib functionality, including several higher-level plotting interfaces (seaborn, HoloViews, ggplot, ...), and a projection and mapping toolkit (Cartopy).
    Starting Price: Free
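    A minimal static plot, the kind of figure a reporting node might save to disk:

        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        ax.plot([1, 2, 3, 4], [1.0, 0.6, 0.4, 0.35], marker="o")
        ax.set_xlabel("epoch")
        ax.set_ylabel("loss")
        ax.set_title("Training curve")
        fig.savefig("training_curve.png")   # or plt.show() for interactive use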
  • 9
    Azure Machine Learning
    Accelerate the end-to-end machine learning lifecycle. Empower developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster. Accelerate time to market and foster team collaboration with industry-leading MLOps: DevOps for machine learning. Innovate on a secure, trusted platform, designed for responsible ML. Productivity for all skill levels, with a code-first experience, a drag-and-drop designer, and automated machine learning. Robust MLOps capabilities that integrate with existing DevOps processes and help manage the complete ML lifecycle. Responsible ML capabilities: understand models with interpretability and fairness, protect data with differential privacy and confidential computing, and control the ML lifecycle with audit trails and datasheets. Best-in-class support for open-source frameworks and languages including MLflow, Kubeflow, ONNX, PyTorch, TensorFlow, Python, and R.
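    As a hedged sketch, submitting a training job with the Azure ML Python SDK v2 (azure-ai-ml) might look like this; the subscription, resource group, workspace, compute, and environment names are placeholders:

        from azure.ai.ml import MLClient, command
        from azure.identity import DefaultAzureCredential

        ml_client = MLClient(
            DefaultAzureCredential(),
            subscription_id="<subscription-id>",       # placeholder
            resource_group_name="<resource-group>",    # placeholder
            workspace_name="<workspace>",              # placeholder
        )

        job = command(
            code="./src",                              # folder containing train.py
            command="python train.py",
            # a curated environment name; the exact name varies by version
            environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
            compute="cpu-cluster",                     # hypothetical compute target
        )
        ml_client.jobs.create_or_update(job)           # submit and track the run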
  • 10
    Apache Spark
    Apache Software Foundation

    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
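    A minimal PySpark example of the DataFrame operators described above; the input path and column names are hypothetical:

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("example").getOrCreate()

        df = spark.read.csv("events.csv", header=True, inferSchema=True)
        (df.filter(F.col("status") == "ok")        # narrow to successful events
           .groupBy("country")
           .agg(F.count("*").alias("events"))      # count per group
           .orderBy(F.desc("events"))
           .show())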
  • 11
    MLflow

    MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: Tracking (record and query experiments: code, data, config, and results), Projects (package data science code in a format to reproduce runs on any platform), Models (deploy machine learning models in diverse serving environments), and the Model Registry (store, annotate, discover, and manage models in a central repository). The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files when running your machine learning code, and for later visualizing the results. MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs. An MLflow Project is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. In addition, the Projects component includes an API and command-line tools for running projects.
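    A minimal MLflow Tracking sketch: log a parameter, a metric, and an artifact, then compare runs later in the MLflow UI (the file written here is just an example):

        import mlflow

        with open("model_summary.txt", "w") as f:   # an example artifact
            f.write("baseline model\n")

        with mlflow.start_run(run_name="baseline"):
            mlflow.log_param("learning_rate", 0.01)
            mlflow.log_metric("accuracy", 0.93)
            mlflow.log_artifact("model_summary.txt")  # any local file or directory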
  • 12
    Dask

    Dask is open source and freely available. It is developed in coordination with other community projects like NumPy, pandas, and scikit-learn. Dask uses existing Python APIs and data structures to make it easy to switch from NumPy, pandas, and scikit-learn to their Dask-powered equivalents. Dask's schedulers scale to thousand-node clusters and its algorithms have been tested on some of the largest supercomputers in the world. But you don't need a massive cluster to get started. Dask ships with schedulers designed for use on personal machines. Many people use Dask today to scale computations on their laptop, using multiple cores for computation and their disk for excess storage. Dask also exposes lower-level APIs, letting you build custom systems for in-house applications. This helps open source leaders parallelize their own packages and helps business leaders scale custom business logic.
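    Switching from pandas to the Dask equivalent is often close to a one-line change; the CSV glob and column names here are hypothetical:

        import dask.dataframe as dd

        df = dd.read_csv("data/2024-*.csv")        # lazily scans many files
        result = df.groupby("region")["revenue"].mean().compute()  # triggers execution
        print(result)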
  • 13
    Azure Databricks
    Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks: set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO).
  • 14
    Kubeflow

    The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow. Kubeflow provides a custom TensorFlow training job operator that you can use to train your ML model. In particular, Kubeflow's job operator can handle distributed TensorFlow training jobs. Configure the training controller to use CPUs or GPUs and to suit various cluster sizes. Kubeflow includes services to create and manage interactive Jupyter notebooks. You can customize your notebook deployment and your compute resources to suit your data science needs. Experiment with your workflows locally, then deploy them to a cloud when you're ready.
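    As a hedged sketch, a two-step pipeline written with the Kubeflow Pipelines SDK (kfp v2) might look like this; the component logic is illustrative only:

        from kfp import dsl, compiler

        @dsl.component
        def preprocess() -> str:
            return "clean-data"                    # stand-in for real preprocessing

        @dsl.component
        def train(dataset: str):
            print(f"training on {dataset}")        # stand-in for real training

        @dsl.pipeline(name="demo-pipeline")
        def demo_pipeline():
            step1 = preprocess()
            train(dataset=step1.output)            # wire step outputs to inputs

        # compile to a YAML spec that Kubeflow Pipelines can run
        compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")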
  • 15
    Apache Airflow
    The Apache Software Foundation

    Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers, so it is ready to scale to infinity. Airflow pipelines are defined in Python, allowing you to write code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit; parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black magic! Use standard Python features to create your workflows, including datetime formats for scheduling and loops to dynamically generate tasks, as in the sketch below. This keeps your workflows fully flexible.
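    A minimal Airflow DAG defined in Python, with tasks generated in a loop; the IDs, table names, and schedule are illustrative (the `schedule` argument is Airflow 2.4+; older releases call it `schedule_interval`):

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        with DAG(
            dag_id="example_dynamic_tasks",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            for table in ["users", "orders", "events"]:   # dynamic task generation
                PythonOperator(
                    task_id=f"load_{table}",
                    python_callable=lambda t=table: print(f"loading {t}"),
                )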