Showing 42 open source projects for "large file"

  • 1
    Flowise

    Drag & drop UI to build your customized LLM flow

    Open source visual UI tool to build your customized LLM flow using LangchainJS, written in Node.js TypeScript/JavaScript. Conversational agent for a chat model which utilizes chat-specific prompts and buffer memory. Open source is the core of Flowise, and it will always be free for commercial and personal usage. Flowise supports different environment variables to configure your instance; you can specify them in the .env file inside the packages/server folder.
    Downloads: 14 This Week
    See Project
  • 2
    llamafile

    Distribute and run LLMs with a single file

    llamafile lets you distribute and run LLMs with a single file (see the announcement blog post). Our goal is to make open LLMs much more accessible to both developers and end users. We're doing that by combining llama.cpp with Cosmopolitan Libc into one framework that collapses all the complexity of LLMs down to a single-file executable (called a "llamafile") that runs locally on most computers, with no installation. The easiest way to try it for yourself is to download our example llamafile...
    Downloads: 10 This Week
    See Project
  • 3
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). You can also use H2O LLM Studio from the command line interface (CLI) by specifying a configuration file that contains all the experiment parameters. To fine-tune using H2O LLM Studio with the CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start...
    Downloads: 4 This Week
    See Project
  • 4
    LangGraph Studio

    Desktop app for prototyping and debugging LangGraph applications

    .... LangGraph Studio requires docker-compose version 2.22.0 or higher. Please make sure you have Docker installed and running before continuing. When you open the LangGraph Studio desktop app for the first time, you need to log in via LangSmith. Once you have successfully authenticated, you can choose the LangGraph application folder to use; either drag and drop it or select it manually in the file picker.
    Downloads: 5 This Week
    See Project
  • 5
    Alpaca.cpp

    Locally run an Instruction-Tuned Chat-Style LLM

    Run a fast ChatGPT-like model locally on your device. This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface. Download the zip file corresponding to your operating system from the latest release. The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint...
    Downloads: 5 This Week
    See Project
  • 6

    LightGBM

    Gradient boosting framework based on decision tree algorithms

    LightGBM, or Light Gradient Boosting Machine, is a high-performance, open source gradient boosting framework based on decision tree algorithms. Compared to other boosting frameworks, LightGBM offers several advantages in terms of speed, efficiency, and accuracy. Parallel experiments have shown that LightGBM can attain linear speed-up across multiple machines for training in specific settings, all while consuming less memory. LightGBM supports parallel and GPU learning, and can handle large... A minimal training sketch follows this entry.
    Downloads: 2 This Week
    See Project
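
    A minimal sketch of LightGBM's standard Python training API, referenced in the entry above; the toy data and the parameter values are illustrative assumptions, not taken from the listing.

```python
import numpy as np
import lightgbm as lgb

# Toy binary-classification data; replace with your own feature matrix and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(scale=0.1, size=1000) > 0).astype(int)

train_data = lgb.Dataset(X, label=y)   # wrap the data in LightGBM's Dataset
params = {
    "objective": "binary",             # binary classification objective
    "learning_rate": 0.1,
    "num_leaves": 31,                  # main complexity knob for leaf-wise trees
}
booster = lgb.train(params, train_data, num_boost_round=100)
preds = booster.predict(X)             # predicted probabilities for each row
```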
  • 7
    Ludwig AI

    Low-code framework for building custom LLMs, neural networks

    Declarative deep learning framework built for scale and efficiency. Ludwig is a low-code framework for building custom AI models like LLMs and other deep neural networks. A declarative YAML configuration file is all you need to train a state-of-the-art LLM on your data. Support for multi-task and multi-modality learning. Comprehensive config validation detects invalid parameter combinations and prevents runtime failures. Automatic batch size selection, distributed training (DDP, DeepSpeed... A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    See Project
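
    As a rough illustration of the declarative workflow described above, here is a sketch using Ludwig's Python API; the feature names and the data file path ("data.csv") are assumptions for the example, not from the listing.

```python
from ludwig.api import LudwigModel

# Declarative config: the input/output feature names here are hypothetical.
config = {
    "input_features": [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(config=config)
train_results = model.train(dataset="data.csv")   # "data.csv" is a placeholder path
predictions = model.predict(dataset="data.csv")   # returns predictions (plus output info)
```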
  • 8
    pycm

    Multi-class confusion matrix library in Python

    PyCM is a multi-class confusion matrix library written in Python that supports both input data vectors and direct matrix input, and it is a proper tool for post-classification model evaluation that supports most class and overall statistics parameters. PyCM is the Swiss Army knife of confusion matrices, targeted mainly at data scientists who need a broad array of metrics for predictive models and an accurate evaluation of a large variety of classifiers. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    See Project
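
    A minimal sketch of the pycm usage pattern described above; the actual and predicted label vectors below are made-up toy data.

```python
from pycm import ConfusionMatrix

# Toy ground-truth and predicted labels (illustrative only).
actual = [0, 1, 2, 2, 1, 0, 1, 2]
predicted = [0, 1, 1, 2, 1, 0, 2, 2]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)
print(cm)              # prints the matrix plus class-level and overall statistics
print(cm.Overall_ACC)  # one of the overall statistics (overall accuracy)
```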
  • 9
    TensorFlow Addons

    Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

    TensorFlow Addons is a repository of contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or they are mostly used by a smaller subset... A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    See Project
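
    A small sketch of the pattern the entry above describes, plugging an Addons optimizer and loss into a standard Keras model; the tiny architecture is an assumption for illustration.

```python
import tensorflow as tf
import tensorflow_addons as tfa

# A tiny Keras model producing embeddings; the architecture is purely illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(16),  # embedding output used by the triplet loss
])

model.compile(
    optimizer=tfa.optimizers.LAMB(learning_rate=1e-3),  # optimizer from Addons
    loss=tfa.losses.TripletSemiHardLoss(),              # loss from Addons
)
```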
  • 10
    MegaParse

    File Parser optimised for LLM Ingestion with no loss

    MegaParse is a file parser optimized for Large Language Model (LLM) ingestion, ensuring no loss of information. It efficiently parses various document formats, such as PDFs, DOCX, and PPTX, converting them into formats ideal for processing by LLMs. This tool is essential for applications that require accurate and comprehensive data extraction from diverse document types.
    Downloads: 0 This Week
    See Project
  • 11
    rwkv.cpp

    INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model

    Besides the usual FP32, it supports FP16, quantized INT4, INT5 and INT8 inference. This project is focused on CPU, but cuBLAS is also supported. RWKV is a novel large language model architecture, with the largest model in the family having 14B parameters. In contrast to Transformer with O(n^2) attention, RWKV requires only state from the previous step to calculate logits. This makes RWKV very CPU-friendly on large context lengths.
    Downloads: 0 This Week
    See Project
  • 12
    OpenBB

    Investment Research for Everyone, Everywhere

    Customize and speed up your analysis, bring your own data, and create instant reports to gain a competitive edge. Whether it's a CSV file, a private endpoint, an RSS feed, or even an SEC filing embedded directly. Chat with financial data using large language models. Don't waste time reading; create summaries in seconds and ask how that impacts investments. Create your dashboard with your favorite widgets. Create charts directly from raw data in seconds. Create charts directly from raw data...
    Downloads: 0 This Week
    See Project
  • 13
    mindflow

    AI-powered CLI git wrapper, boilerplate code generator, chat history

    ... to have special access to the API. If you have access, you can run mf config and select GPT-4. If you don't have access, you'll get an error message. Interact with ChatGPT directly, just like on the ChatGPT website. We also have chat persistence, so it will remember previous chat messages. You can provide single- or multi-file context to ChatGPT by passing in any number of files as separate arguments in the mf chat call.
    Downloads: 0 This Week
    See Project
  • 14
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Together with better performance come larger model sizes. This imposes challenges on the memory wall of current accelerator hardware such as GPUs. It is never ideal to train large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine. There is an urgent demand to train models in a distributed environment. However...
    Downloads: 0 This Week
    See Project
  • 15
    FlubuCore

    A cross platform build and deployment automation system

    ..., and IntelliSense make even the most complex script creation a breeze. A large number of frequently used built-in tasks, e.g. versioning, running tests, creating deployment packages, publishing NuGet packages, docker tasks, git tasks, sql tasks, npm tasks, executing PowerShell, managing IIS scripts, and many more.
    Downloads: 0 This Week
    See Project
  • 16
    MMClassification

    OpenMMLab Image Classification Toolbox and Benchmark

    ... or add new features, as well as users who give valuable feedback. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to re-implement existing methods and develop their own new classifiers. MMClassification mainly uses Python files as configs. The design of our configuration file system integrates modularity and inheritance, making it easy for users to conduct various experiments. A minimal config sketch follows this entry.
    Downloads: 0 This Week
    See Project
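
    To illustrate the modularity-and-inheritance config design mentioned above, here is a hedged sketch of a Python config file in the usual OpenMMLab style; the base file names and the override values are assumptions.

```python
# Hypothetical config file, e.g. configs/resnet/my_experiment.py.
# Inherit model, dataset, schedule, and runtime settings from base configs,
# then override only what this experiment changes.
_base_ = [
    "../_base_/models/resnet18.py",           # assumed base model config
    "../_base_/datasets/imagenet_bs32.py",    # assumed base dataset config
    "../_base_/schedules/imagenet_bs256.py",  # assumed base schedule config
    "../_base_/default_runtime.py",
]

# Overrides are merged into the inherited dicts.
model = dict(head=dict(num_classes=10))  # e.g. fine-tune the head for 10 classes
optimizer = dict(lr=0.01)                # lower the base learning rate
```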
  • 17
    ChatGPT-bot

    Run your own ChatGPT Telegram bot with a single command

    Go CLI that fuels a Telegram bot, letting you interact with ChatGPT, a large language model trained by OpenAI.
    Downloads: 1 This Week
    See Project
  • 18
    cerche

    Experimental search engine for conversational AI such as parl.ai

    This is an experimental search engine for conversational AI such as parl.ai, large language models such as OpenAI GPT3, and humans (maybe).
    Downloads: 0 This Week
    See Project
  • 19
    Programming Without Coding Technology

    Create software without writing a single line of code

    PWCT is not a wizard for creating your application in 1-2-3 steps. PWCT is a general-purpose visual programming language designed for novice and expert programmers. A novice programmer can use PWCT to learn programming concepts like Data Structures, Control Structures, and Programming Paradigms. An expert programmer can use PWCT to create any large and complex software. Using PWCT we developed a textual programming language Compiler and Virtual Machine without writing a single line...
    Downloads: 2,154 This Week
    See Project
  • 20
    Super PDF Editor

    Create, Edit, Delete, Organize, Convert, Export, Secure & Sign PDF.

    ...: Combine multiple PDF pages into a single document. Reduce PDF Size: Optimize PDF file size by compressing images and removing unnecessary elements. Duplicate Page: Create duplicates of existing pages within a PDF. Move Page: Rearrange the order of pages within a PDF. Printing: Print PDF files directly from the software. Clone Page: Make identical copies of a page within the same document. Compress Page: Further reduce the size of individual pages for efficient storage and sharing.
    Downloads: 22 This Week
    See Project
  • 21

    WhisperBatchRun

    This batch file will run OpenAI's Whisper to transcribe (or translate)

    Currently, Version #1 of the batch file does the following: (1) checks if whisper is installed, and if so, starts to run; (2) asks if you want to process sub-folders, and if an answer is not provided in 10 seconds, defaults to "N"; (3) applies the following command to each mp3 or wav file in the folder/sub-folders: "whisper "FILENAME" --model large-v2 --output_format vtt"; (4) creates a log file in the active directory, but only if there were any errors; (5) ends. To use, simply place... A minimal Python equivalent of the transcription step follows this entry.
    Downloads: 1 This Week
    See Project
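
    For reference, the transcription step that the batch file wraps can be sketched with the openai-whisper Python package; the audio file name below is a placeholder.

```python
import whisper

# Load the same model size the batch file uses ("large-v2").
model = whisper.load_model("large-v2")

# Transcribe a single audio file; "episode.mp3" is a placeholder name.
result = model.transcribe("episode.mp3")
print(result["text"])  # the full transcript as one string
```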
  • 22

    entity-metadata

    Lists of people, churches, and other entities

    Here are lists of entities, such as people, businesses, and churches. These are large files related to the repository at https://github.com/az0/entity-metadata.
    Downloads: 0 This Week
    See Project
  • 23
    StudioGAN

    StudioGAN is a Pytorch library providing implementations of networks

    ...-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
    Downloads: 0 This Week
    See Project
  • 24
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the...
    Downloads: 1 This Week
    See Project
  • 25
    TensorFlow Backend for ONNX

    Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. TensorFlow Backend for ONNX makes it possible to use ONNX models as input for TensorFlow. The ONNX model is first converted to a TensorFlow model and then delegated for execution on TensorFlow to produce the output. This is one of the two TensorFlow converter projects which serve different... A minimal conversion sketch follows this entry.
    Downloads: 0 This Week
    See Project
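
    A minimal sketch of the conversion flow described above, using the onnx and onnx-tf Python packages; the model file names are placeholders.

```python
import onnx
from onnx_tf.backend import prepare

# Load an ONNX model from disk; "model.onnx" is a placeholder path.
onnx_model = onnx.load("model.onnx")

# Convert it to a TensorFlow representation and export it as a SavedModel.
tf_rep = prepare(onnx_model)             # execution is delegated to TensorFlow
tf_rep.export_graph("model_savedmodel")  # writes a TensorFlow SavedModel directory
```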
  • Page 1 of 2