Search Results for "jpk data processing" - Page 9

Showing 306 open source projects for "jpk data processing"

  • 1
    MatchZoo

    Facilitating the design, comparison and sharing of deep text models

    The goal of MatchZoo is to provide a high-quality codebase for deep text matching research, such as document retrieval, question answering, conversational response ranking, and paraphrase identification. With a unified data processing pipeline, simplified model configuration, and automatic hyper-parameter tuning, MatchZoo is flexible and easy to use. Preprocess your input data in three lines of code and keep track of the parameters passed into the model. Make use of MatchZoo's customized loss functions and evaluation metrics. Initialize the model, fine-tune the hyper-parameters. ...
    Downloads: 0 This Week
    Last Update:
    See Project
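
    A minimal sketch of the workflow described above for MatchZoo, assuming the MatchZoo 2.x Python API (the matchzoo package with its bundled WikiQA dataset, BasicPreprocessor, and DenseBaseline model); these names are taken from the project's documentation and may differ between releases.

        # Hedged sketch of a MatchZoo-style pipeline; dataset, preprocessor, and
        # model names (wiki_qa, BasicPreprocessor, DenseBaseline) are assumptions
        # based on the MatchZoo 2.x docs and may not match your installed version.
        import matchzoo as mz

        train_pack = mz.datasets.wiki_qa.load_data('train')        # assumed bundled dataset
        preprocessor = mz.preprocessors.BasicPreprocessor()
        train_processed = preprocessor.fit_transform(train_pack)   # the "three lines" of preprocessing

        model = mz.models.DenseBaseline()                          # assumed baseline model
        model.params['task'] = mz.tasks.Ranking()
        model.guess_and_fill_missing_params()
        model.build()
        model.compile()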
  • 2
    OpenMS

    An open source framework for LC-MS based proteomics and metabolomics. OpenMS offers data structures and algorithms for the processing of mass spectrometry data. The library is written in C++. Our source code and wiki live on GitHub (https://github.com/OpenMS/OpenMS).
    Downloads: 3 This Week
    Last Update:
    See Project
  • 3
    nonechucks

    Deal with bad samples in your dataset dynamically

    ...Or what if your dataset is a folder full of scanned PDFs that you have to OCRize, and then run a language detector on the resulting text, because you want only the ones that are in English? Or maybe you have an AlternateIndexSampler, and you want to be able to move to dataset[6] after dataset[4] fails while attempting to load! PyTorch's data processing module expects you to rid your dataset of any unwanted or invalid samples before you feed them into its pipeline, and provides no easy way to define a "fallback policy" in case such samples are encountered during dataset iteration.
    Downloads: 0 This Week
    Last Update:
    See Project
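
    A minimal sketch of the "fallback policy" idea described above for nonechucks, written as a generic PyTorch Dataset wrapper rather than nonechucks' own API: if dataset[4] raises while loading, the wrapper simply returns the next index that does load.

        # Generic skip-on-failure wrapper; nonechucks packages this pattern (plus
        # sampler and transform handling) behind its own classes.
        from torch.utils.data import Dataset

        class SkipBadSamples(Dataset):
            """Wrap a dataset and fall back to the next index when loading fails."""
            def __init__(self, dataset):
                self.dataset = dataset

            def __len__(self):
                return len(self.dataset)

            def __getitem__(self, index):
                # Try the requested sample; on failure, walk forward until one loads.
                for offset in range(len(self.dataset)):
                    try:
                        return self.dataset[(index + offset) % len(self.dataset)]
                    except Exception:
                        continue
                raise RuntimeError("no loadable samples in the dataset")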
  • 4
    InferSent

    InferSent sentence embeddings

    InferSent is a supervised sentence embedding method that learns universal representations from Natural Language Inference data and transfers well to many downstream tasks. It uses a BiLSTM encoder with max-pooling to produce fixed-length sentence vectors that capture semantics beyond bag-of-words statistics. Trained on large NLI datasets, the embeddings generalize across tasks like sentiment analysis, entailment, paraphrase detection, and semantic similarity with simple linear classifiers....
    Downloads: 0 This Week
    Last Update:
    See Project
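
    A minimal sketch of the encoder architecture described above for InferSent (a BiLSTM with max-pooling over time producing a fixed-length vector), written in plain PyTorch rather than the repository's own model code; all dimensions are arbitrary.

        # Illustrative BiLSTM + max-pooling sentence encoder; not the InferSent
        # implementation itself, and the sizes are placeholders.
        import torch
        import torch.nn as nn

        class BiLSTMMaxPoolEncoder(nn.Module):
            def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=2048):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.lstm = nn.LSTM(embed_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)

            def forward(self, token_ids):                     # (batch, seq_len)
                states, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
                sentence_vec, _ = states.max(dim=1)           # element-wise max over time
                return sentence_vec                           # fixed-length embedding

        encoder = BiLSTMMaxPoolEncoder()
        vectors = encoder(torch.randint(0, 10000, (4, 12)))   # 4 token-id sequences of length 12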
  • 5
    Django Celery

    Old Celery integration project for Django

    Celery is a simple, flexible, and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system. It's a task queue with a focus on real-time processing, while also supporting task scheduling. Celery has a large and diverse community of users and contributors; you should come join us on IRC or our mailing list. Celery is Open Source and licensed under the BSD License. A task queue's input is a unit of work called a...
    Downloads: 0 This Week
    Last Update:
    See Project
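
    A minimal sketch of the task-queue pattern Celery implements, using the plain celery API rather than this old django-celery integration (modern Django projects use Celery directly); the broker URL and task body are placeholders.

        # tasks.py -- minimal Celery app; the Redis broker URL is a placeholder.
        from celery import Celery

        app = Celery('tasks', broker='redis://localhost:6379/0')

        @app.task
        def add(x, y):
            # A unit of work ("task") that a worker process will execute.
            return x + y

        # Elsewhere: add.delay(2, 3) enqueues the task for a worker started with
        #   celery -A tasks worker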
  • 6
    RoboSat

    Semantic segmentation on aerial and satellite imagery

    RoboSat is an end-to-end pipeline written in Python 3 for feature extraction from aerial and satellite imagery. Features can be anything visually distinguishable in the imagery, for example: buildings, parking lots, roads, or cars.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 7
    LaueTools

    open source python packages for X-ray MicroLaue Diffraction analysis

    LaueTools is an open-source project for white-beam Laue X-ray microdiffraction data analysis, including tools for image processing, peak searching & indexing, crystal structure solving (orientation & strain), and data & grain-mapping visualisation. Python 3 code and new features are now at: https://gitlab.esrf.fr/micha/lauetools
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    lazynlp

    Library to scrape and clean web pages to create massive datasets

    LazyNLP is a lightweight tool for collecting and curating large-scale text datasets for machine learning and NLP applications with minimal manual effort.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Pipelines

    An experimental programming language for data flow

    Pipelines is a language and runtime for crafting massively parallel pipelines. Unlike other languages for defining data flow, the Pipeline language requires the implementation of components to be defined separately in the Python scripting language. This allows the details of implementations to be separated from the structure of the pipeline while providing access to thousands of active libraries for machine learning, data analysis, and processing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Wally

    Distributed Stream Processing

    ...Provide high-performance & low-latency data processing. Be portable and deploy easily (i.e., run on-prem or any cloud). Manage in-memory state for the application. Allow applications to scale as needed, even when they are live and up-and-running. The primary API for Wally is written in Pony. Wally applications are written using this Pony API.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    Tensorpack

    A Neural Net Training Interface on TensorFlow, with focus on speed

    ...Uses TensorFlow in an efficient way with no extra overhead. On common CNNs, it runs training 1.2~5x faster than the equivalent Keras code. Your training can probably get faster if written with Tensorpack. A scalable data-parallel multi-GPU / distributed training strategy is available off-the-shelf. Squeeze the best data-loading performance out of Python with tensorpack.dataflow. Symbolic programming (e.g. tf.data) does not offer the data processing flexibility needed in research. Tensorpack squeezes the most performance out of pure Python with various auto-parallelization strategies. ...
    Downloads: 0 This Week
    Last Update:
    See Project
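
    A minimal sketch of a tensorpack.dataflow pipeline as mentioned above; the DataFromList, MapData, and BatchData names follow the dataflow documentation, but treat them as assumptions for your installed version.

        # Pure-Python data-loading pipeline with tensorpack.dataflow (class names
        # assumed from the dataflow docs): source -> per-sample map -> batching.
        from tensorpack.dataflow import DataFromList, MapData, BatchData

        raw = [[i] for i in range(1000)]             # each datapoint is a list of components
        df = DataFromList(raw)
        df = MapData(df, lambda dp: [dp[0] * 2])     # arbitrary per-sample processing
        df = BatchData(df, 32)

        df.reset_state()                             # required before iterating
        for batch in df:
            pass                                     # feed `batch` to the trainer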
  • 12
    LabelImg

    Graphical image annotation tool and label object bounding boxes

    LabelImg is a graphical image annotation tool. It is written in Python and uses Qt for its graphical interface. Annotations are saved as XML files in PASCAL VOC format, the format used by ImageNet. It also supports the YOLO and CreateML formats. Linux/Ubuntu/Mac requires at least Python 2.6 and has been tested with PyQt 4.8; however, Python 3 or above and PyQt5 are strongly recommended. Virtualenv can avoid a lot of the Qt / Python version issues. Build and launch using the...
    Downloads: 95 This Week
    Last Update:
    See Project
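
    A minimal sketch of reading one of the PASCAL VOC XML files LabelImg produces; the filename is a placeholder and only the standard VOC object/bndbox fields are assumed.

        # Parse a PASCAL VOC annotation written by LabelImg ('example.xml' is a placeholder).
        import xml.etree.ElementTree as ET

        root = ET.parse('example.xml').getroot()
        for obj in root.findall('object'):
            label = obj.find('name').text
            box = obj.find('bndbox')
            xmin, ymin, xmax, ymax = (int(float(box.find(tag).text))
                                      for tag in ('xmin', 'ymin', 'xmax', 'ymax'))
            print(label, xmin, ymin, xmax, ymax)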
  • 13
    PDF-Shuffler
    PDF-Shuffler is a small python-gtk application which helps the user merge or split PDF documents and rotate, crop, and rearrange their pages using an interactive and intuitive graphical interface. It is a frontend for python-pyPdf.
    Downloads: 58 This Week
    Last Update:
    See Project
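
    A minimal sketch of the underlying merge operation the GUI performs, shown with pypdf (the modern successor of python-pyPdf); the file names are placeholders.

        # Merge two PDFs page by page with pypdf; 'a.pdf', 'b.pdf', and 'merged.pdf'
        # are placeholders.
        from pypdf import PdfReader, PdfWriter

        writer = PdfWriter()
        for path in ('a.pdf', 'b.pdf'):
            for page in PdfReader(path).pages:
                writer.add_page(page)

        with open('merged.pdf', 'wb') as f:
            writer.write(f)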
  • 14
    Exposure

    Learning infinite-resolution image processing with GAN and RL

    ...Moreover, JPEGs and most PNGs assume an sRGB color space, which applies a roughly 1/2.2 gamma correction, making the data distribution different from that of the training images (which are linear). Exposure is just a prototype (proof of concept) of our latest research, and a lot of engineering effort is definitely required to make it suitable for a real product.
    Downloads: 0 This Week
    Last Update:
    See Project
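
    A minimal sketch of the gamma point made above: undoing the sRGB transfer curve (roughly a 1/2.2 gamma) to recover linear intensities before comparing against linear training data. The piecewise formula is the standard sRGB definition.

        # Convert sRGB-encoded pixel values in [0, 1] to linear light using the
        # standard sRGB transfer function (approximately a 2.2 gamma).
        import numpy as np

        def srgb_to_linear(srgb):
            srgb = np.asarray(srgb, dtype=np.float64)
            return np.where(srgb <= 0.04045,
                            srgb / 12.92,
                            ((srgb + 0.055) / 1.055) ** 2.4)

        print(srgb_to_linear([0.0, 0.5, 1.0]))   # 0.5 in sRGB is ~0.214 linear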
  • 15
    cnn-text-classification-tf

    Convolutional Neural Network for Text Classification in Tensorflow

    The cnn-text-classification-tf repository by Denny Britz is a well-known educational implementation of convolutional neural networks for text classification using TensorFlow, aimed at helping developers and researchers understand how CNNs can be applied to natural language processing tasks. Based loosely on Kim’s influential paper on CNNs for sentence classification, this codebase demonstrates how to preprocess text data, convert words into learned embeddings, and apply multiple convolution filters to extract n-gram features that are then pooled and fed into a classifier. The project includes scripts for training, evaluation, and data handling, making it easy to run experiments on datasets such as movie reviews or other labeled text collections. ...
    Downloads: 3 This Week
    Last Update:
    See Project
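
    A minimal sketch of the architecture described above (embedding lookup, parallel convolution filters over n-grams, max-pooling, then a classifier), written with tf.keras rather than the repository's original TF1 code; vocabulary size, sequence length, and filter widths are arbitrary.

        # Illustrative text CNN in tf.keras (not the repository's own implementation).
        import tensorflow as tf
        from tensorflow.keras import layers

        vocab_size, seq_len, embed_dim = 20000, 100, 128

        inputs = layers.Input(shape=(seq_len,), dtype='int32')
        x = layers.Embedding(vocab_size, embed_dim)(inputs)
        pooled = []
        for width in (3, 4, 5):                               # n-gram filter widths
            conv = layers.Conv1D(128, width, activation='relu')(x)
            pooled.append(layers.GlobalMaxPooling1D()(conv))
        x = layers.Concatenate()(pooled)
        x = layers.Dropout(0.5)(x)
        outputs = layers.Dense(2, activation='softmax')(x)    # e.g. positive/negative

        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')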
  • 16
    Twitter Intelligence

    Twitter Intelligence OSINT project performs tracking and analysis

    ...This project is a Python 3.x application. The package dependencies are listed in requirements.txt; install them with pip before running. SQLite is used as the database, and tweet data is stored in the Tweet, User, Location, Hashtag, and HashtagTweet tables; the database is created automatically. analysis.py performs the analysis step, covering user, hashtag, and location analyses. You must set a Google Maps API key in setting.py to display Google Maps.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Zhao

    A compilation of "The Princely Party Relationship Network"

    zhao is a repository that consolidates research, data, and insights related to Zhao, which is likely an individual’s research collection, notes, or curated resources on deep learning, AI, or computational topics (name and content context suggest specialized study). The project may include code examples, experiment results, references to academic papers, mathematical notes, and supporting scripts to explore specific ML methods, benchmarks, or theoretical findings.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    jsondata

    Modular JSON by trees and branches, pointers and patches

    The 'jsondata' package provides for the modular in-memory processing of JSON data by trees, branches, pointers, and patches. The main interface classes are:
    JSONData - core for RFC 7159 based data structures; provides modular data components.
    JSONDataSerializer - core for RFC 7159 based data persistence; provides modular data serialization.
    JSONPointer - RFC 6901 addressing by pointer paths.
    Downloads: 0 This Week
    Last Update:
    See Project
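
    A minimal sketch of what RFC 6901 pointer addressing means, as a generic resolver over an in-memory JSON document; this illustrates the concept and is not the jsondata JSONPointer API.

        # Resolve an RFC 6901 JSON Pointer against a nested dict/list structure.
        def resolve_pointer(doc, pointer):
            if pointer == '':
                return doc                 # the empty pointer addresses the whole document
            for token in pointer.lstrip('/').split('/'):
                token = token.replace('~1', '/').replace('~0', '~')   # unescape per RFC 6901
                doc = doc[int(token)] if isinstance(doc, list) else doc[token]
            return doc

        doc = {"branch": {"items": [{"id": 1}, {"id": 2}]}}
        print(resolve_pointer(doc, '/branch/items/1/id'))   # -> 2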
  • 19
    pyhanlp

    Chinese participle

    pyhanlp is a Python interface for HanLP (Han Language Processing) that lets you use a mature Java-based NLP toolkit from Python workflows without rebuilding the underlying algorithms. It is commonly used for Chinese-language NLP tasks where you want production-grade tokenization and linguistic analysis, but still want the convenience of Python scripting. The project focuses on making HanLP’s capabilities accessible through a Python-friendly API surface, so you can integrate NLP steps into data pipelines, notebooks, and downstream ML or information-extraction code. ...
    Downloads: 0 This Week
    Last Update:
    See Project
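
    A minimal sketch of calling HanLP through pyhanlp for Chinese segmentation; the HanLP.segment call and the term.word / term.nature attributes follow the project's README and should be treated as assumptions for other versions.

        # Segment a Chinese sentence via the bundled HanLP Java toolkit (usage
        # assumed from the pyhanlp README).
        from pyhanlp import HanLP

        for term in HanLP.segment('自然语言处理很有趣'):
            print(term.word, term.nature)   # token and its part-of-speech tag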
  • 20

    survol

    RDF-based framework monitoring business systems activity

    A Python agent and a web interface that aim to help the analysis and investigation of a legacy application: a set of machines, processes, databases, programs, etc., all communicating with each other, manipulating your data, and whose software architecture has become, over time, complicated, difficult to understand, and undocumented. Data are aggregated with an RDF inference engine, creating a global view of the business information processing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    pivottablejs

    Drag’n’drop Pivot Tables and Charts for Jupyter/IPython Notebook

    PivotTable.js is a JavaScript pivot table and pivot chart library with drag-and-drop interactivity, and it can now be used with Jupyter/IPython Notebook via the pivottablejs module. I first built PivotTable.js with a plan to build an in-browser data analysis tool, and got as far as one where you could load a CSV file in the browser for display. Since then, however, the Jupyter project has gathered steam and now provides a browser-based interface to some of the most powerful data processing libraries in the world, so it makes sense to interface with it.
    Downloads: 0 This Week
    Last Update:
    See Project
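
    A minimal sketch of the Jupyter integration described above; pivot_ui is the module's documented entry point, and the DataFrame is a placeholder.

        # In a Jupyter notebook cell: render an interactive pivot table for a DataFrame.
        import pandas as pd
        from pivottablejs import pivot_ui

        df = pd.DataFrame({'city': ['Paris', 'Paris', 'Berlin'],
                           'year': [2023, 2024, 2024],
                           'sales': [10, 12, 7]})
        pivot_ui(df)   # opens the drag-and-drop PivotTable.js UI inline in the notebook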
  • 22
    AI learning

    AiLearning, data analysis plus machine learning practice

    We actively respond to the Research Open Source Initiative (DOCX). Open source today is not just code, but also datasets, models, tutorials, and experimental records. We are also exploring other categories of open source solutions and protocols. I hope you will understand this initiative, combine it with your own interests, and do what you can. Everyone's tiny contributions, taken together, are the entire open source ecosystem. We are iBooker, a large open-source community,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Seq2seq Chatbot for Keras

    This repository contains a new generative model of chatbot

    This repository contains a new generative chatbot model based on sequence-to-sequence (seq2seq) modeling. The trained model available here used a small dataset composed of ~8K pairs of context (the last two utterances of the dialogue up to the current point) and the respective response. The data were collected from dialogues of English courses online. This trained model can be fine-tuned on a closed-domain dataset for real-world applications. The canonical seq2seq model became popular in neural machine...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    composight
    Composight is a cross-platform toolkit for 3D-image processing in the domain of composite materials science. It is written in C++ and provides small, problem-specific applications for viewing, filtering and segmentation of volumetric data such as micro-CT scans. The main objective is not to provide yet another complex application for volume data visualization and medical image processing.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25

    dpanalyzer

    postprocessing tool for Project Gutenberg Distributed Proofreaders

    Specialized tool for PostProcessors of books produced by Project Gutenberg Distributed Proofreaders. Parses the markup structure of a project file out of the formatting rounds; reports about the text structure found, and identifies markup errors. Planned future features: generation of normalized dp output by rejoining split paragraphs and moving around footnotes, renumbering of pages; conversion to basic LaTeX and basic HTML markup for further processing.
    Downloads: 1 This Week
    Last Update:
    See Project