Showing 40 open source projects for "dxvk-2.0"

  • 1

    twitch-batch-downloader

    Automate the download of entire Twitch.tv channels

    Automate the download of entire Twitch.tv channels along with their metadata. Save each Twitch video into its own folder, with date and time values, video ID, stream metadata, a frame screenshot, the .ts parts list, and a sha256 hash. Keep the original .ts files and generate mp4 files from them. It requires a shell and some command line utilities; see the README.md in the Code/git section for details. A hedged sketch of the hashing and remuxing steps is shown below.
    Downloads: 2 This Week
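    The sketch below is only an illustration of two steps the description mentions, hashing the saved recording and remuxing the .ts parts into an MP4; it is not the project's own shell code. It assumes ffmpeg is installed, and "video.ts"/"video.mp4" are placeholder file names.

    ```python
    # Illustrative sketch only, not the project's shell scripts.
    # Assumes ffmpeg is on PATH; "video.ts" and "video.mp4" are placeholder names.
    import hashlib
    import subprocess

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 hash of a file, reading it in 1 MiB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha256_of("video.ts"))

    # Remux the original MPEG-TS recording into an MP4 container without re-encoding.
    subprocess.run(["ffmpeg", "-i", "video.ts", "-c", "copy", "video.mp4"], check=True)
    ```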
  • 2
    Easyspider - Distributed Web Crawler

    Easy Spider is a distributed Perl Web Crawler Project from 2006

    Easy Spider is a distributed Perl Web Crawler Project from 2006. It features code for crawling web pages, distributing the work to a server, and generating XML files from the results. The client can be any computer (Windows or Linux), and the server stores all the data. Websites that use EasySpider Crawling for Article Writing...
    Downloads: 0 This Week
  • 3
    WebHarvest - web data extraction tool
    Web data extraction (web data mining, web scraping) tool. It leverages well-proven XML and text processing technologies in order to easily extract useful data from arbitrary web pages.
    Downloads: 13 This Week
  • 4
    Simple-Scrape is a simple web-scraping library that allows for programmatic access to HTML code. No further techniques are needed and the library is very compact and thus easy to use.
    Downloads: 3 This Week
  • 5

    WebCollector

    WebCollector is an open source web crawler framework based on Java.

    WebCollector is an open source web crawler framework based on Java. It provides simple interfaces for crawling the Web, so you can set up a multi-threaded web crawler in less than 5 minutes. GitHub: https://github.com/CrawlScript/WebCollector Demo: https://github.com/CrawlScript/WebCollector/blob/master/YahooCrawler.java
    Downloads: 0 This Week
  • 6
    Yet another web crawler? Yes, but this one uses the full power of regular expressions to accept or reject, examine or ignore, and save or refuse pages. You can also use MIME types for all of this. Powerful and flexible. A generic sketch of this filtering idea is shown below.
    Downloads: 0 This Week
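    The following is only a generic Python sketch of the filtering approach the entry describes, not this project's configuration; the accept/reject patterns and the accepted MIME types are assumptions for illustration.

    ```python
    # Generic illustration: accept or reject candidate URLs by regular expression,
    # then confirm the MIME type with a HEAD request before fetching the page.
    import re
    import urllib.request

    ACCEPT_URL = re.compile(r"https?://example\.org/docs/.*")   # assumed accept pattern
    REJECT_URL = re.compile(r".*\.(zip|exe|iso)$", re.I)         # assumed reject pattern
    ACCEPT_MIME = ("text/html", "application/xhtml+xml")

    def should_fetch(url: str) -> bool:
        """Filter by regex first, then by the Content-Type reported by the server."""
        if REJECT_URL.match(url) or not ACCEPT_URL.match(url):
            return False
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.headers.get_content_type() in ACCEPT_MIME

    if __name__ == "__main__":
        print(should_fetch("https://example.org/docs/index.html"))
    ```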
  • 7

    Web Crawler Security Tool

    A web crawler oriented to information security.

    Last update on Tue Mar 26 16:25 UTC 2012. The Web Crawler Security Tool is a Python-based tool that automatically crawls a website, oriented to help in penetration testing tasks. Its main task is to search for and list all the links (pages and files) on a website. The crawler has been completely rewritten in v1.0, bringing a lot of improvements: improved data visualization, an interactive option to download files, increased crawling speed, exports list of...
    Downloads: 1 This Week
  • 8
    The archive-crawler project is building Heritrix: a flexible, extensible, robust, and scalable web crawler capable of fetching, archiving, and analyzing the full diversity and breadth of internet-accessible content.
    Downloads: 20 This Week
  • 9
    Safe3WVS is one of the most powerful web vulnerability scanners, with AI-driven on-the-fly web spider crawling technology aimed especially at web portals. It is a very fast tool for digging out vulnerabilities such as SQL injection, upload vulnerabilities, and more. http://www.safe3.com.cn/en
    Downloads: 14 This Week
  • 10
    datalus
    A PHP web API designed to simplify object handling (loading, saving, querying, displaying, and editing), abstract the data from its display structure and layout, and allow the target data to be delivered in any supported format without special logic.
    Downloads: 1 This Week
  • 11
    A function-testing, performance-measuring, site-mirroring web spider that is widely portable and capable of using scenarios to process a wide range of web transactions, including SSL and forms. A generic example of such a scripted transaction is shown below.
    Downloads: 4 This Week
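    As a generic illustration (not this project's scenario format), the sketch below scripts a simple two-step transaction over SSL: fetch a page, then submit a form. The URL and the form field names are placeholders.

    ```python
    # Generic scripted web transaction: GET a page over HTTPS, then POST a form.
    import urllib.parse
    import urllib.request

    # Step 1: fetch the page that contains the form (placeholder URL).
    page = urllib.request.urlopen("https://example.org/login", timeout=10).read()

    # Step 2: submit the form with URL-encoded fields (placeholder field names).
    data = urllib.parse.urlencode({"user": "alice", "password": "secret"}).encode()
    resp = urllib.request.urlopen("https://example.org/login", data=data, timeout=10)
    print(resp.status)
    ```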
  • 12
    ItSucks
    This project is a Java web spider (web crawler) with the ability to download (and resume) files. It is also highly customizable with regular expressions and download templates. All backend functionalities are also available in a separate library.
    Downloads: 4 This Week
  • 13
    crowdspider is a multi-threaded web crawler. crowdspider is (just) a web crawler, NOT an indexer. You have to write some code yourself in order to save pages or index them in a database.
    Downloads: 0 This Week
  • 14
    Other spiders have a limited link depth, follow links in a fixed (non-randomized) order, or are combined with heavy indexing machines. This spider has no link-depth limit and randomizes the next URL to be checked for new URLs. A generic sketch of this randomized frontier is shown below.
    Downloads: 0 This Week
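    The sketch below is only a generic Python illustration of the randomized frontier selection described above, not this project's code; the seed URL is a placeholder.

    ```python
    # Generic randomized frontier: pick the next URL at random instead of FIFO/DFS order.
    import random

    frontier = {"https://example.org/"}     # placeholder seed URL
    visited = set()

    def next_url():
        """Return a random unvisited URL from the frontier, or None when exhausted."""
        candidates = list(frontier - visited)
        if not candidates:
            return None
        url = random.choice(candidates)
        visited.add(url)
        return url

    print(next_url())
    ```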
  • 15
    bee-rain is a web crawler that harvests and indexes files over the network. You can see the results on the bee-rain website: http://bee-rain.internetcollaboratif.info/
    Downloads: 0 This Week
  • 16
    elk is a powerful open-source Python-based command-line web crawler that can recursively search for files and text on websites.
    Downloads: 2 This Week
  • 17
    APC Anti Crawler is a PHP5 class based on APC which can be used to limit the number of HTTP requests per IP. It stops web crawlers from downloading your entire website. A generic sketch of this per-IP throttling idea is shown below.
    Downloads: 0 This Week
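    The project itself is a PHP5 class built on APC; the following is only a generic Python sketch of the same per-IP rate-limiting idea, with an arbitrarily chosen window and threshold.

    ```python
    # Generic per-IP throttle: allow at most MAX_REQUESTS per sliding WINDOW_SECONDS.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 120                 # assumed threshold
    _hits = defaultdict(deque)         # ip -> timestamps of recent requests

    def allow_request(ip: str) -> bool:
        """Return False once an IP exceeds MAX_REQUESTS within the sliding window."""
        now = time.monotonic()
        hits = _hits[ip]
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        if len(hits) >= MAX_REQUESTS:
            return False
        hits.append(now)
        return True

    print(allow_request("203.0.113.7"))
    ```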
  • 18
    The DeDuplicator is an add-on module (plug-in) for the web crawler Heritrix. It offers a means to reduce the amount of duplicate data collected in a series of snapshot crawls. A generic sketch of this idea is shown below.
    Downloads: 7 This Week
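    This is not the DeDuplicator/Heritrix API; the sketch below only illustrates, in Python, the general idea of skipping pages whose content digest is unchanged since a previous snapshot crawl.

    ```python
    # Generic snapshot de-duplication: keep only pages whose digest changed.
    import hashlib

    def dedup_snapshot(pages, previous_index):
        """pages: {url: body bytes}; previous_index: {url: hex digest} from the last crawl."""
        changed = {}
        for url, body in pages.items():
            digest = hashlib.sha1(body).hexdigest()
            if previous_index.get(url) != digest:
                changed[url] = body
                previous_index[url] = digest
        return changed

    if __name__ == "__main__":
        index = {}
        first = dedup_snapshot({"http://example.org/": b"<html>v1</html>"}, index)
        second = dedup_snapshot({"http://example.org/": b"<html>v1</html>"}, index)
        print(len(first), len(second))  # 1 then 0: the unchanged page is skipped
    ```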
  • 19
    This project will provide a tool for users to get a better understanding of the content and structure of an existing website. It will do this by providing a customised web spider as well as extensions to the GUESS graph visualisation application.
    Downloads: 0 This Week
  • 20
    This is a simple link checker. It can crawl any site and help find broken links. It also has an option to download a CSV report; the CSV file includes the URL, the parent page URL, and the status of the page (broken or OK). It is very useful for search engine optimization. A minimal sketch of this workflow is shown below.
    Downloads: 1 This Week
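    The sketch below is only a minimal Python illustration of the workflow described above (it is not this project's code): extract the links from one page, check each of them, and write URL, parent page URL and status to a CSV file.

    ```python
    # Minimal link checker: list links on one page and report their status to CSV.
    import csv
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(parent_url, report_path="report.csv"):
        html = urllib.request.urlopen(parent_url, timeout=10).read().decode("utf-8", "replace")
        parser = LinkExtractor()
        parser.feed(html)
        with open(report_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["url", "parent page url", "status"])
            for link in parser.links:
                url = urljoin(parent_url, link)
                try:
                    with urllib.request.urlopen(url, timeout=10):
                        status = "ok"
                except Exception:
                    status = "broken"
                writer.writerow([url, parent_url, status])

    check_links("https://example.org/")   # placeholder start page
    ```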
  • 21
    Crawler.NET is a component-based distributed framework for web traversal intended for the .NET platform. It comprises loosely coupled units, each realizing a specific web crawler task. The main design goals are efficiency and flexibility.
    Downloads: 0 This Week
  • 22
    WebNews Crawler is a specific web crawler (spider, fetcher) designed to acquire and clean news articles from RSS and HTML pages. It can do site-specific extraction to pull out only the actual news content, filtering out advertising and other cruft.
    Downloads: 0 This Week
  • 23
    Aracnis is a Java based framework for building distributed web spiders. These spiders can be used to accomplish a variety of tasks, for example, screen-scraping and link integrity checking.
    Downloads: 0 This Week
  • 24
    NightCrawler is a multithreaded web spider which uses MIME types to download files.
    Downloads: 0 This Week
  • 25
    J-Obey is a Java library/package which gives people writing their own crawlers a stable robots.txt parser. If you are writing a web crawler of some sort, you can use J-Obey to take the hassle out of writing a robots.txt parser/interpreter. For illustration, an equivalent check with Python's standard robots.txt parser is shown below.
    Downloads: 0 This Week
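    J-Obey itself is a Java library and its API is not reproduced here; purely as an illustration of what a robots.txt parser does for a crawler, the sketch below performs the equivalent check with Python's standard urllib.robotparser module. The URLs and the user-agent string are placeholders.

    ```python
    # Check whether a crawler may fetch a URL according to the site's robots.txt.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.org/robots.txt")   # placeholder site
    rp.read()

    if rp.can_fetch("MyCrawler/1.0", "https://example.org/some/page.html"):
        print("allowed to fetch")
    else:
        print("disallowed by robots.txt")
    ```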