Tool to automatically run regular security scans with Nessus. It compares the results of the current scan with those of the previous scan and reports the delta in a web interface. The main objective of the tool is to make repeated scans more efficient. Not affiliated
Run Windows applications on any computer.
WineBOX is an open-source implementation of the Windows API and a program loader, allowing many unmodified Windows binaries to run on x86-based computers without needing an operating system.
A zero-install WAMP web server.
Farming and beekeeping, fruit growing, fertilizer, plant protection, agricultural advice, articles, forum, blog, agro advertisements, market gardening.
The best httpd server.
Materials related to my lectures: Electrical Network Analysis, DSP, Programming, and Optical Fiber Systems.
Our mission: to provide you with portable applications, or "portable apps" -- applications that you can carry around on your USB drive, iPod, or other devices. You carry all your information, settings, and programs with you!
REX: Remote EXecution distributed computing services for Linux and Solaris, providing C and C++ APIs, the librex library, and the "rexd" daemon to implement load balancing, process migration (dump + restore), and remote file and resource management.
Create Perl/Tk GUIs for your programs.
To run: $ perl ZooZ.pl, then build your GUI.
Source code for a simple Perl text editor.
To run: $ perl 01text0.pl
Perl Web Scraping Project
Web scraping (web harvesting or web data extraction) is data scraping used for extracting data from websites. Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis. Scraping a web page involves fetching it and extracting data from it. Fetching is the downloading of a page (which a browser does when you view the page). Web crawling is therefore a main component of web scraping: it fetches pages for later processing. Once fetched, extraction can take place. The content of a page may be parsed, searched, reformatted, its data copied into a spreadsheet, and so on.
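The fetch-then-extract process described above can be sketched briefly. This is not the project's own Perl code; it is a minimal illustration in Python using only the standard library, with a hypothetical `LinkExtractor` class that pulls hyperlinks out of a page. In a real scraper the page would be fetched over HTTP (e.g. with `urllib.request.urlopen`); here an inline HTML string stands in for the fetched page so the sketch runs offline.

```python
from html.parser import HTMLParser

# Extraction step: collect the (href, text) of every <a> link on a page.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []      # (href, link text) pairs found so far
        self._href = None    # href of the <a> tag currently open, if any
        self._text = []      # text fragments seen inside that tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Fetch step stand-in: an inline page instead of a real HTTP download.
page = '<html><body><a href="/a">First</a> <a href="/b">Second</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # [('/a', 'First'), ('/b', 'Second')]
```

Once extracted, the `(href, text)` pairs could be written to a database or spreadsheet, matching the "copied into a central local database or spreadsheet" step described above.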
Collection of several utilities to work with Android backups.
Project name: android-backup-toolkit
URL: https://sourceforge.net/p/android-backup-toolkit/
License: various (see individual projects)
Authors: various (see individual projects)
This project is simply a compilation of the following six projects, both latest source code and binaries:
* android-backup-extractor: https://sourceforge.net/projects/adbextractor/
* android-backup-splitter: https://sourceforge.net/p/adb-split/
* android-timestamp-keeper: https://sourceforge.net/projects/androidtimestampkeeper/
* helium-backup-extractor: https://sourceforge.net/p/heliumbackupextractor/
* no-adb-backup-app-lister: https://sourceforge.net/projects/no-adb-backup-app-lister/
* tar-binary-splitter: https://sourceforge.net/projects/tar-binary-splitter/
All the Android utilities are bundled in a single package.