Showing 138 open source projects for "all-in-one"

  • 1
    Airbyte

    Data integration platform for ELT pipelines from APIs, databases

    ... each month for ambitious businesses of all sizes. Enable your data engineering teams to focus on projects that are more valuable to your business. Building and maintaining custom connectors has become 5x easier with Airbyte. With an average response time of 10 minutes or less and a Customer Satisfaction score of 96/100, our team is ready to support your data integration journey all over the world.
    Downloads: 5 This Week
  • 2
    Astropy

    Repository for the Astropy core package

    The Astropy Project is a community effort to develop a common core package for Astronomy in Python and foster an ecosystem of interoperable astronomy packages. Astropy is a Python library for use in astronomy. Learn Astropy provides a portal to all of the Astropy educational material through a single dynamically searchable web page. It allows you to filter tutorials by keywords, search for filters, and make search queries in tutorials and documentation simultaneously. The Anaconda Python...
    Downloads: 3 This Week
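    A minimal sketch of the kind of single-step astronomy task Astropy handles, converting a sky position between coordinate frames with astropy.coordinates and astropy.units; the coordinate values below are purely illustrative.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

# An ICRS sky position (illustrative values, roughly M31).
position = SkyCoord(ra=10.68458 * u.degree, dec=41.26917 * u.degree, frame="icrs")

# Convert the same position to Galactic coordinates.
print(position.galactic)
```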
  • 3
    CKAN

    CKAN is an open-source DMS for powering data hubs

    CKAN is the world’s leading open-source data portal platform. CKAN makes it easy to publish, share and work with data. It's a data management system that provides a powerful platform for cataloging, storing and accessing datasets with a rich front-end, full API (for both data and catalog), visualization tools and more. CKAN is used by national and regional government organizations throughout the European Union, the Americas, Asia, and Oceania to power a variety of official and community data...
    Downloads: 3 This Week
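    CKAN portals expose the catalog through an HTTP Action API; below is a minimal sketch that lists registered datasets with the requests library. The instance URL is a placeholder, so point it at your own CKAN portal.

```python
import requests

# Placeholder base URL; substitute the CKAN portal you want to query.
CKAN_URL = "https://demo.ckan.org"

# The Action API's package_list call returns the names of all catalogued datasets.
response = requests.get(f"{CKAN_URL}/api/3/action/package_list", timeout=30)
response.raise_for_status()
datasets = response.json()["result"]
print(f"{len(datasets)} datasets; first five: {datasets[:5]}")
```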
  • 4
    data-diff

    Efficiently diff rows across two different databases

    We're excited to announce the launch of a new open-source product, data-diff, that makes comparing datasets across databases fast at any scale. data-diff automates data quality checks for data replication and migration. In modern data platforms, data is constantly moving between systems, and at modern data volumes and complexity, systems go out of sync all the time. Until now, there has not been any tooling to ensure that the data is copied correctly. Replicating data at scale, across...
    Downloads: 2 This Week
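    A minimal sketch of diffing one table across two databases with data-diff's Python API; the connection strings, table name, and key column are placeholders, and the helper names (connect_to_table, diff_tables) follow the project's documented interface at the time of writing.

```python
from data_diff import connect_to_table, diff_tables

# Placeholder connection strings, table name, and primary-key column.
table1 = connect_to_table("postgresql://user:pass@source-host/db", "ratings", "id")
table2 = connect_to_table("snowflake://user:pass@account/db/schema", "ratings", "id")

# Yields ('-', row) for rows only in table1 and ('+', row) for rows only in table2.
for sign, row in diff_tables(table1, table2):
    print(sign, row)
```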
  • 5
    Cookiecutter Data Science

    Project structure for doing and sharing data science work

    ... about bikeshedding the indentation aesthetics or pedantic formatting standards, ultimately, data science code quality is about correctness and reproducibility. It's no secret that good analyses are often the result of very scattershot and serendipitous explorations. Tentative experiments and rapidly testing approaches that might not work out are all part of the process for getting to the good stuff, and there is no magic bullet to turn data exploration into a simple, linear progression.
    Downloads: 2 This Week
  • 6
    AWS Data Wrangler

    Pandas on AWS, easy integration with Athena, Glue, Redshift, etc.

    ... ETL tasks like loading/unloading data from Data Lakes, Data Warehouses, and Databases. Convert column names to be compatible with Amazon Athena and the AWS Glue Catalog. Run a query against AWS CloudWatch Logs Insights and convert the results to a Pandas DataFrame. Get a QuickSight dashboard ID given a name, failing if there is more than one ID associated with that name. List IAM policy assignments in the current Amazon QuickSight account.
    Downloads: 2 This Week
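    A minimal sketch of the Pandas-on-AWS workflow described above: write a DataFrame to S3 as a Parquet dataset registered in the Glue Catalog, then query it back through Athena. Bucket, database, and table names are placeholders; AWS credentials and the Glue database are assumed to already exist.

```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "bar"]})

# Store the DataFrame on S3 as Parquet and register it in the Glue Catalog.
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/my-table/",  # placeholder bucket/prefix
    dataset=True,
    database="my_database",           # placeholder Glue database (must already exist)
    table="my_table",                 # placeholder table name
)

# Query it back through Athena into a new DataFrame.
result = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_database")
print(result)
```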
  • 7
    Dagster

    An orchestration platform for the development, production

    ... to be used at every stage of the data development lifecycle - local development, unit tests, integration tests, staging environments, all the way up to production. Identify the key assets you need to create using a declarative approach, or you can focus on running basic tasks. Embrace CI/CD best practices from the get-go: build reusable components, spot data quality issues, and flag bugs early.
    Downloads: 1 This Week
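    A minimal sketch of Dagster's declarative, asset-first style mentioned above; the asset names are invented, and materialize() is used only for quick local execution.

```python
from dagster import asset, materialize

@asset
def raw_numbers():
    # Placeholder upstream data.
    return [1, 2, 3, 4]

@asset
def doubled_numbers(raw_numbers):
    # Dagster infers the dependency on raw_numbers from the argument name.
    return [n * 2 for n in raw_numbers]

if __name__ == "__main__":
    # Materialize both assets in-process (useful for local development and tests).
    materialize([raw_numbers, doubled_numbers])
```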
  • 8
    Luigi

    Python module that helps you build complex pipelines of batch jobs

    Luigi is a Python (3.6, 3.7, 3.8, 3.9 tested) package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command line integration, and much more. The purpose of Luigi is to address all the plumbing typically associated with long-running batch processes. You want to chain many tasks, automate them, and failures will happen. These tasks can be anything, but are typically long running things like Hadoop...
    Downloads: 1 This Week
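    A minimal sketch of a two-task Luigi pipeline showing the dependency resolution and target-based outputs the entry describes; the file names are placeholders.

```python
import luigi

class MakeNumbers(luigi.Task):
    def output(self):
        return luigi.LocalTarget("numbers.txt")  # placeholder output file

    def run(self):
        with self.output().open("w") as f:
            f.write("\n".join(str(i) for i in range(10)))

class SumNumbers(luigi.Task):
    def requires(self):
        return MakeNumbers()  # Luigi resolves and runs this dependency first

    def output(self):
        return luigi.LocalTarget("sum.txt")

    def run(self):
        with self.input().open() as f:
            total = sum(int(line) for line in f)
        with self.output().open("w") as f:
            f.write(str(total))

if __name__ == "__main__":
    # Run the whole pipeline with the in-process scheduler.
    luigi.build([SumNumbers()], local_scheduler=True)
```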
  • 9
    PySR

    High-Performance Symbolic Regression in Python and Julia

    PySR is an open-source tool for Symbolic Regression: a machine learning task where the goal is to find an interpretable symbolic expression that optimizes some objective. Over a period of several years, PySR has been engineered from the ground up to be (1) as high-performance as possible, (2) as configurable as possible, and (3) easy to use. PySR is developed alongside the Julia library SymbolicRegression.jl, which forms the powerful search engine of PySR. The details of these algorithms...
    Downloads: 0 This Week
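    A minimal sketch of symbolic regression with PySR on synthetic data; the operator set and iteration count are illustrative choices, and the first run sets up the Julia backend.

```python
import numpy as np
from pysr import PySRRegressor

# Synthetic data whose true relationship is y = 2*cos(x0) + x1**2.
X = np.random.randn(200, 2)
y = 2 * np.cos(X[:, 0]) + X[:, 1] ** 2

model = PySRRegressor(
    niterations=40,                    # search budget; raise for harder problems
    binary_operators=["+", "-", "*"],
    unary_operators=["cos"],
)
model.fit(X, y)

# Best discovered expression, as a SymPy object.
print(model.sympy())
```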
  • 10
    Encord Active

    The toolkit to test, validate, and evaluate your models and surface

    Encord Active is an open-source toolkit to test, validate, and evaluate your models and surface, curate, and prioritize the most valuable data for labeling to supercharge model performance. Encord Active has been designed as an all-in-one open source toolkit for improving your data quality and model performance. Use the intuitive UI to explore your data or access all the functionalities programmatically. Discover errors, outliers, and edge-cases within your data - all in one open source toolkit...
    Downloads: 0 This Week
  • 11
    gusty

    Making DAG construction easier

    gusty allows you to control your Airflow DAGs, Task Groups, and Tasks with greater ease. gusty manages collections of tasks, represented as any number of YAML, Python, SQL, Jupyter Notebook, or R Markdown files. A directory of task files is instantly rendered into a DAG by passing a file path to gusty's create_dag function. gusty also manages dependencies (within one DAG) and external dependencies (dependencies on tasks in other DAGs) for each task file you define. All you have to do is provide...
    Downloads: 0 This Week
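    A minimal sketch of the create_dag call described above, placed in a file inside your Airflow DAGs folder; the directory path is a placeholder, and the extra keyword arguments are assumed to be forwarded to the underlying Airflow DAG.

```python
# my_dag.py, living in the Airflow DAGs folder.
from gusty import create_dag

# Renders every YAML/SQL/Python/notebook/R Markdown task file under this directory into one DAG.
dag = create_dag(
    "/usr/local/airflow/dags/my_dag",  # placeholder path to the directory of task files
    schedule_interval="@daily",        # assumed to pass through to the Airflow DAG
    catchup=False,
)
```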
  • 12
    Diffgram

    Training data (data labeling, annotation, workflow) for all data types

    From ingesting data to exploring it, annotating it, and managing workflows, Diffgram is a single application that will improve your data labeling and bring all aspects of training data under a single roof. Diffgram is the world’s first truly open source training data platform that focuses on giving its users an unlimited experience. It aims to reduce your data labeling bills and increase your training data quality. Training data is the art of supervising machines through data. This includes...
    Downloads: 0 This Week
  • 13
    Tributary

    Streaming reactive and dataflow graphs in Python

    Tributary is a library for constructing dataflow graphs in Python. Unlike many other DAG libraries in Python (airflow, luigi, prefect, dagster, dask, kedro, etc), tributary is not designed with data/etl pipelines or scheduling in mind. Instead, tributary is more similar to libraries like mdf, loman, pyungo, streamz, or pyfunctional, in that it is designed to be used as the implementation for a data model. One such example is the greeks library, which leverages tributary to build data models...
    Downloads: 0 This Week
  • 14
    dbt-re-data

    re_data - fix data issues before your users & CEO would discover them

    re_data is an open-source data reliability framework for the modern data stack. Currently, re_data focuses on observing the dbt project (together with the underlying data warehouse - Postgres, BigQuery, Snowflake, Redshift). Data transformations in re_data are implemented and exposed as models & macros in this dbt package. Gather all relevant outputs about your data in one place using our cloud. Invite your team and debug it easily from there. Go back in time, and see your past metadata. Set up...
    Downloads: 0 This Week
  • 15
    PySyft

    Data science on data without acquiring a copy

    ... first putting it all in one (central) place. The Syft ecosystem seeks to change this system, allowing you to write software which can compute over information you do not own on machines you do not have (total) control over. This not only includes servers in the cloud, but also personal desktops, laptops, mobile phones, websites, and edge devices. Wherever your data wants to live in your ownership, the Syft ecosystem exists to help keep it there while allowing it to be used privately.
    Downloads: 0 This Week
  • 16
    ipychart

    The power of Chart.js with Python

    Create charts with Python in a very similar way to creating charts using Chart.js. The charts created are fully configurable, interactive, and modular, and are displayed directly in the output of the cells of your Jupyter notebook environment. Charts are fully interactive: you can hover over them to display tooltips and select the information you want to see directly from the output cell of your notebook. All the types of charts present in Chart.js are exposed in ipychart. Even complex features...
    Downloads: 0 This Week
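    A minimal sketch of an ipychart bar chart built from a notebook cell; the data dictionary mirrors Chart.js's configuration format, and the Chart(data=..., kind=...) signature is assumed from the project's documentation.

```python
from ipychart import Chart

# Chart.js-style data: labels plus one dataset.
data = {
    "labels": ["Mon", "Tue", "Wed", "Thu", "Fri"],
    "datasets": [{"data": [12, 19, 3, 5, 2], "label": "Sales"}],
}

# Displays an interactive bar chart in the notebook output cell.
chart = Chart(data=data, kind="bar")
chart
```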
  • 17
    odd-collector-gcp

    Open-source GCP metadata collector based on ODD Specification

    ODD Collector GCP is a lightweight service which gathers metadata from all your Google Cloud Platform data sources.
    Downloads: 0 This Week
  • 18
    odd-collector

    Open-source metadata collector based on ODD Specification

    ODD Collector is a lightweight service that gathers metadata from all your data sources. Push-client is a provider which sends information directly to the central repository of the Platform. ODDRN (Open Data Discovery Resource Name) is a unique resource name that identifies entities such as data sources, data entities, dataset fields etc. It is used to build lineage and update metadata.
    Downloads: 0 This Week
  • 19
    AutoGluon

    AutoGluon: AutoML for Image, Text, and Tabular Data

    AutoGluon enables easy-to-use and easy-to-extend AutoML with a focus on automated stack ensembling, deep learning, and real-world applications spanning image, text, and tabular data. Intended for both ML beginners and experts, AutoGluon enables you to quickly prototype deep learning and classical ML solutions for your raw data with a few lines of code. Automatically utilize state-of-the-art techniques (where appropriate) without expert knowledge. Leverage automatic hyperparameter tuning,...
    Downloads: 0 This Week
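    A minimal sketch of the few-lines-of-code tabular workflow described above, using scikit-learn's iris data as a stand-in for a real table; the column and label names are illustrative.

```python
import pandas as pd
from sklearn.datasets import load_iris
from autogluon.tabular import TabularPredictor

# Build a small labeled table (150 rows), shuffle it, and split it.
iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={"target": "species"}).sample(frac=1, random_state=0)
train, test = df.iloc[:120], df.iloc[120:]

# One call fits and stack-ensembles a suite of models to predict 'species'.
predictor = TabularPredictor(label="species").fit(train)

# Evaluate on held-out rows and predict for new data.
print(predictor.evaluate(test))
print(predictor.predict(test.drop(columns=["species"])).head())
```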
  • 20
    DeepH-pack

    Deep neural networks for density functional theory Hamiltonian

    DeepH-pack is the official implementation of the DeepH (Deep Hamiltonian) method described in the paper Deep-learning density functional theory Hamiltonian for efficient ab initio electronic-structure calculation and in the Research Briefing. DeepH-pack supports DFT results made by ABACUS, OpenMX, FHI-aims or SIESTA and will support HONPAS.
    Downloads: 0 This Week
  • 21
    ydata-profiling

    Create HTML profiling reports from pandas DataFrame objects

    ydata-profiling's primary goal is to provide a one-line Exploratory Data Analysis (EDA) experience in a consistent and fast solution. Like the handy pandas df.describe() function, ydata-profiling delivers an extended analysis of a DataFrame while allowing the analysis to be exported in different formats such as HTML and JSON.
    Downloads: 0 This Week
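    A minimal sketch of the one-line EDA workflow described above; the DataFrame is a tiny illustrative example and the output file name is arbitrary.

```python
import pandas as pd
from ydata_profiling import ProfileReport

# Any DataFrame works; a tiny illustrative one is used here.
df = pd.DataFrame({
    "age": [23, 35, 41, 29, 35],
    "city": ["Lisbon", "Porto", "Lisbon", "Faro", "Porto"],
})

# One line builds the full profiling report.
profile = ProfileReport(df, title="Example profiling report")

# Export to the formats mentioned above.
profile.to_file("report.html")
report_json = profile.to_json()
```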
  • 22
    Recap

    Recap tracks and transforms schemas across your whole application

    Recap is a schema language and multi-language toolkit to track and transform schemas across your whole application. Your data passes through web services, databases, message brokers, and object stores. Recap describes these schemas in a single language, regardless of which system your data passes through. Recap schemas can be defined in YAML, TOML, JSON, XML, or any other compatible language.
    Downloads: 0 This Week
  • 23
    JILL.py

    A cross-platform installer for the Julia programming language

    The enhanced Python fork of JILL, Julia Installer for Linux (and every other platform), Light.
    Downloads: 0 This Week
  • 24
    Cleanlab

    The standard data-centric AI package for data quality and ML

    ... label issues and other data issues, so you can train reliable ML models. All features of cleanlab work with any dataset and any model. Yes, any model: PyTorch, Tensorflow, Keras, JAX, HuggingFace, OpenAI, XGBoost, scikit-learn, etc. If you use a sklearn-compatible classifier, all cleanlab methods work out-of-the-box.
    Downloads: 0 This Week
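    A minimal sketch of finding likely label errors with cleanlab and any scikit-learn-compatible classifier; the synthetic data and flipped labels are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# Synthetic dataset with the first 20 labels flipped to simulate annotation errors.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
y_noisy = y.copy()
y_noisy[:20] = 1 - y_noisy[:20]

# Out-of-sample predicted probabilities from any sklearn-compatible model.
pred_probs = cross_val_predict(
    LogisticRegression(), X, y_noisy, cv=5, method="predict_proba"
)

# Indices of examples cleanlab flags as likely mislabeled, worst first.
issues = find_label_issues(
    labels=y_noisy, pred_probs=pred_probs, return_indices_ranked_by="self_confidence"
)
print(issues[:10])
```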
  • 25
    Weights and Biases

    Tool for visualizing and tracking your machine learning experiments

    Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production models. Quickly identify model regressions. Use W&B to visualize results in real time, all in a central dashboard. Focus on the interesting ML. Spend less time manually tracking results in spreadsheets and text files. Capture dataset versions with W&B Artifacts to identify how changing data affects your resulting models. Reproduce any model, with saved code...
    Downloads: 0 This Week
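    A minimal sketch of experiment tracking with the wandb client; the project name and logged metrics are placeholders, and a W&B account/login is assumed.

```python
import random
import wandb

# Start a run in a placeholder project with a small config.
run = wandb.init(project="demo-project", config={"lr": 0.001, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Log whatever metrics your training loop produces; they appear live in the dashboard.
    wandb.log({"epoch": epoch, "loss": 1.0 / (epoch + 1) + random.random() * 0.01})

run.finish()
```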