Showing 17 open source projects for "libodbc.so.2"

  • 1
    UI.Vision RPA

    Open-Source RPA Software (formerly Kantu)

    ... and only Chrome and Firefox extension (and Selenium IDE) that has "👁👁 eyes". A huge benefit of visual tests is that you are not just checking one or two elements at a time; you are checking a whole section or page in one visual assertion. The visual UI testing and browser automation commands of UI.Vision RPA help web designers and developers verify and validate the layout of websites and canvas elements (the idea is sketched after this entry).
    Downloads: 2 This Week
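
    UI.Vision RPA's own commands live in its macro recorder and none are shown in this listing; purely to illustrate what a single whole-section visual assertion buys you, here is a minimal Python sketch using the Pillow library (an assumption, as are the screenshot file names): any pixel change anywhere in the captured region fails one check.

        from PIL import Image, ImageChops  # Pillow is assumed installed for this sketch

        # Baseline and current screenshots of the same section, same dimensions
        # (hypothetical file names).
        baseline = Image.open("baseline.png").convert("RGB")
        current = Image.open("current.png").convert("RGB")

        # One visual assertion covers the whole section: any differing pixel
        # produces a non-empty bounding box.
        diff = ImageChops.difference(baseline, current)
        box = diff.getbbox()
        print("visual assertion passed" if box is None else f"layout changed in {box}")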
  • 2
    Gerapy

    Distributed Crawler Management Framework Based on Scrapy

    Distributed crawler management framework based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js. Anyone who has written a crawler in Python has probably used Scrapy. Scrapy is indeed a very powerful crawler framework: it has high crawling efficiency and good scalability, and it is practically a necessary tool for developing crawlers in Python. With Scrapy you can of course crawl from your own host, but when the crawl is very large, a single host can't... (a minimal spider of the kind Gerapy manages is sketched after this entry).
    Downloads: 1 This Week
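
    Gerapy is a management layer, so the code below is not Gerapy's own API; it is a minimal sketch of the kind of Scrapy spider a Gerapy/Scrapyd deployment would package and schedule. The spider name and target site are invented for illustration.

        import scrapy

        class QuotesSpider(scrapy.Spider):
            # Hypothetical spider; Gerapy/Scrapyd would deploy projects containing
            # spiders like this one.
            name = "quotes"
            start_urls = ["https://quotes.toscrape.com/"]

            def parse(self, response):
                # One record per quote block on the page.
                for quote in response.css("div.quote"):
                    yield {
                        "text": quote.css("span.text::text").get(),
                        "author": quote.css("small.author::text").get(),
                    }
                # Follow pagination so the crawl spans the whole site.
                next_page = response.css("li.next a::attr(href)").get()
                if next_page:
                    yield response.follow(next_page, callback=self.parse)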
  • 3
    Selectolax

    Python binding to Modest and Lexbor engines

    A fast HTML5 parser with CSS selectors, using the Modest and Lexbor engines. Selectolax supports two backends: Modest and Lexbor. By default, all examples use the Modest backend. Most features are almost identical between backends, but there are still some differences. Currently, the Lexbor backend is in beta and missing some features. To use Lexbor, just import its parser and use it in a similar way to the HTMLParser, as in the sketch after this entry.
    Downloads: 0 This Week
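
    A minimal sketch of both backends, assuming selectolax is installed; the HTML snippet is invented for illustration.

        from selectolax.parser import HTMLParser          # Modest backend (the default)
        from selectolax.lexbor import LexborHTMLParser    # Lexbor backend (beta)

        html = "<div><p id='title'>Hello, <b>world</b></p></div>"  # sample input

        # Modest backend: parse once, then query with CSS selectors.
        tree = HTMLParser(html)
        for node in tree.css("p#title"):
            print(node.text())  # -> Hello, world

        # Lexbor backend: same pattern, different parser class.
        print(LexborHTMLParser(html).css_first("p#title").text())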
  • 4
    crwlr

    Library for Rapid (Web) Crawler and Scraper Development

    This library provides a kind of framework and a lot of ready-to-use, so-called steps that you can use as building blocks to build your own crawlers and scrapers. Before diving into the library, let's look at the terms crawling and scraping. For most real-world use cases, those two things go hand in hand, which is why this library helps with and combines both. A (web) crawler is a program that (down)loads documents and follows the links in them to load those as well (see the generic sketch after this entry). A crawler could...
    Downloads: 0 This Week
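
    crwlr itself is a PHP library and its steps API is not shown in this listing; the following standard-library Python sketch only illustrates the loop the description defines: load a document, collect its links, and load those as well. The start URL and the URL cap are assumptions.

        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            # Collect href values from <a> tags while parsing.
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(start_url, max_urls=10):
            # Breadth-first: load a page, queue every newly discovered link.
            seen, queue = {start_url}, deque([start_url])
            while queue and len(seen) < max_urls:
                url = queue.popleft()
                try:
                    page = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except OSError:
                    continue  # skip documents that fail to load
                extractor = LinkExtractor()
                extractor.feed(page)
                for link in extractor.links:
                    absolute = urljoin(url, link)  # resolve relative links
                    if absolute.startswith("http") and absolute not in seen:
                        seen.add(absolute)
                        queue.append(absolute)
            return seen

        print(crawl("https://example.com/"))  # hypothetical start URL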
  • 5

    Rapid Reference

    An extension that allows for hassle-free website citation/referencing.

    Please do not distribute with the goal of selling my program. How to attach to your Chrome/Edge/Brave etc. browser:
    1. Download the extension (rapidreference.zip)
    2. Extract it
    3. Go to chrome://extensions if on Chrome, or navigate to your browser's extension management settings
    4. Enable developer mode (usually top right)
    5. Add unpacked extension
    6. Choose the extracted extension's folder
    7. There you go!
    How to use: 1. Start a session in the panel of the extension...
    Downloads: 0 This Week
  • 6
    GOPA

    GOPA, a spider written in Golang, for Elasticsearch

    GOPA, a spider written in Golang, for Elasticsearch. Lightweight and low-footprint: the memory requirement should be under 100MB. Easy to deploy, with no runtime or dependency required. Easy to use, with no programming or scripting ability needed and out-of-the-box features. First of all, get it; there are two options: download the pre-built package or compile it yourself. Besides Elasticsearch, Gopa doesn't require any other dependencies; just run ./gopa to start the program. It's safe to press ctrl+c to stop the current...
    Downloads: 0 This Week
  • 7
    pyspider

    A powerful Spider (Web Crawler) system in Python

    pyspider is a powerful Spider (Web Crawler) system in Python. Components are connected by a message queue. Every component, including the message queue, runs in its own process/thread and is replaceable. That means that when a process is slow, you can run many instances of the processor and make full use of multiple CPUs, or deploy to multiple machines. This architecture makes pyspider really fast. Since pyspider has various components, you can just run pyspider to start a standalone and... (a minimal handler script is sketched after this entry).
    Downloads: 1 This Week
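
    A minimal handler script in the shape pyspider's quickstart uses; the seed URL is an assumption.

        from pyspider.libs.base_handler import *

        class Handler(BaseHandler):
            crawl_config = {}

            @every(minutes=24 * 60)
            def on_start(self):
                # Re-seed the crawl once a day (hypothetical start URL).
                self.crawl("https://example.com/", callback=self.index_page)

            @config(age=10 * 24 * 60 * 60)
            def index_page(self, response):
                # Queue every outgoing link for detail parsing.
                for each in response.doc('a[href^="http"]').items():
                    self.crawl(each.attr.href, callback=self.detail_page)

            def detail_page(self, response):
                # One structured record per fetched page.
                return {"url": response.url, "title": response.doc("title").text()}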
  • 8
    Perl Web Scraping Project

    ... local database or spreadsheet, for later retrieval or analysis. Web scraping a web page involves fetching it and extracting data from it.[1][2] Fetching is the downloading of a page (which a browser does when you view the page); web crawling is therefore a main component of web scraping, fetching pages for later processing. Once fetched, extraction can take place: the content of a page may be parsed, searched, reformatted, its data copied into a spreadsheet, and so on (a minimal fetch-and-extract sketch follows this entry).
    Downloads: 0 This Week
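
    The project itself is written in Perl and its code is not shown here; this standard-library Python sketch only mirrors the fetch/extract/store split the description lays out. The URL and output file name are assumptions.

        import csv
        import re
        from urllib.request import urlopen

        # Fetch: download the page, as a browser would when you view it.
        url = "https://example.com/"  # hypothetical target
        page = urlopen(url, timeout=10).read().decode("utf-8", "replace")

        # Extract: parse/search the fetched content for the data of interest.
        match = re.search(r"<title>(.*?)</title>", page, re.IGNORECASE | re.DOTALL)
        title = match.group(1).strip() if match else ""

        # Store: copy the extracted data into a spreadsheet-friendly CSV file.
        with open("scraped.csv", "w", newline="") as f:
            csv.writer(f).writerow([url, title])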
  • 9
    WebHarvest - web data extraction tool
    Web data extraction (web data mining, web scraping) tool. It leverages well-proven XML and text processing technologies in order to easily extract useful data from arbitrary web pages.
    Downloads: 10 This Week
  • 10
    phoneutria
    A Java web crawler: multi-threaded, scalable, high-performance, extensible and polite. It can be used to crawl and index any web or enterprise domain and is configurable through an XML configuration file.
    Downloads: 0 This Week
  • 11

    VIT Marks Display

    A small program that accesses VIT marks of a specific student

    A small project, written while learning Python and web interfacing, that retrieves the marks of a specific valid VIT student using basic web scraping techniques.
    Downloads: 0 This Week
  • 12
    This is an advanced web scraper with a user-friendly GUI that lets the user define rules and web addresses to extract data from, either once or periodically, and a target database field where the data should be saved.
    Downloads: 0 This Week
  • 13
    This project will provide a tool for users to get a better understanding of the content and structure of an existing website. It will do this by providing a customised web spider as well as extensions to the GUESS graph visualisation application.
    Downloads: 0 This Week
  • 14
    J-Obey is a Java library/package that allows people writing their own crawlers to have a stable Robots.txt parser. If you are writing a web crawler of some sort, you can use J-Obey to take the hassle out of writing a Robots.txt parser/interpreter (the sketch after this entry shows what such a parser does).
    Downloads: 0 This Week
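
    J-Obey's Java API is not shown in this listing; to illustrate what a Robots.txt parser/interpreter does for a crawler, here is the equivalent check using Python's standard-library urllib.robotparser. The site and user-agent string are assumptions.

        from urllib.robotparser import RobotFileParser

        # Fetch and parse the site's robots.txt (hypothetical site).
        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()

        # A polite crawler asks before fetching each URL.
        if rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page"):
            print("allowed to fetch")
        else:
            print("disallowed by robots.txt")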
  • 15
    A web spider, based on the availability of URL APIs to most web-based databases, that maps web pages to two-dimensional FreeMind mind-maps. Mapp.it runs locally as a web application and uses a small-footprint CherryPy webserver.
    Downloads: 0 This Week
  • 16
    The purpose of SAWS is to facilitate the process of web scraping by 1) providing a pattern-specification mechanism on top of normal regular expressions, and 2) implementing a common matching algorithm that runs a specified pattern over a given source and reports any matches (illustrated after this entry).
    Downloads: 0 This Week
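
    SAWS's actual specification syntax isn't given in this listing, so the following Python sketch is only a guess at the general idea: named patterns specified on top of plain regular expressions, plus one common matching routine that runs any specified pattern over a given source. All names and the sample source are invented.

        import re

        # Hypothetical pattern specifications layered over plain regular expressions.
        PATTERNS = {
            "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
            "price": r"\$\d+(?:\.\d{2})?",
        }

        def match_all(pattern_name, source):
            # One common matching routine shared by every specified pattern.
            return re.findall(PATTERNS[pattern_name], source)

        source = "Contact sales@example.com; the unit price is $19.99."
        print(match_all("email", source))  # ['sales@example.com']
        print(match_all("price", source))  # ['$19.99']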
  • 17
    Webhunter is a distributed, multi-threaded web crawler designed for both general indexing and crawling the web for focused content.
    Downloads: 0 This Week