VuFind is a library resource discovery portal designed and developed for libraries by libraries. The goal of VuFind is to let your users search and browse all of your library's resources, replacing the traditional OPAC.
Interleave is a business process management application. It lets you model your business processes and make them available online. It is meant to replace processes that currently rely on paper or spreadsheets, and it includes a workflow engine.
Desktop Search - Speed up searching your Windows PC and Outlook emails
Copernic Desktop Search helps you search your computer for documents, files, and emails. Download the best Desktop Search today for free. Copernic Desktop Search allows you to centralize your document, file, and email searches in one single interface. You can search the files and documents on your computer, external drives, and network drives. Increase your search speed and your organization's productivity while reducing the time lost trying to find documents. Join more than 4,000,000 Desktop Search users in more than 125 countries.
A collection of Dokuwiki plugins that spatially enable the wiki. It currently includes: openlayersmap (a map) and geotag (ways of geotagging a page).
Standalone version of TemaTres: a controlled vocabulary server
This package lets you deploy a local version of TemaTres and experiment with its features and the capabilities of other associated tools. It includes: a version of the Learning Resource Exchange (LRE) thesaurus in Spanish, English, and French; TemaTres Visual Vocabulary configured to feed from the Spanish version of the Learning Resource Exchange (LRE) thesaurus; and a blank TemaTres installation.
Search the web for videos, audio, eBooks, torrents, and much more
What is WebCrunch? WebCrunch provides a powerful web server indexing and search service that lets you find a file among millions of files located on public servers around the internet. The search engine is powered by a database that holds information about all the files those web servers host. The file information is gathered by a web crawler that runs every 2 to 4 days, keeping the database clean and up to date with the existing contents and with new entries for each web server address submitted by members.
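The indexing step described above, in which a crawler extracts file entries from public web servers, can be sketched roughly as follows. This is a minimal illustration, not WebCrunch's actual code: the function names are hypothetical, and it assumes an Apache-style HTML directory listing as input.

```python
import re

# Matches href targets in a directory listing, skipping sort links like "?C=N;O=D"
# and absolute "/" paths. The listing format is an assumption (Apache mod_autoindex
# style), not a documented WebCrunch input.
LINK_RE = re.compile(r'<a href="(?P<name>[^"?/][^"]*)">')

def extract_files(listing_html: str, base_url: str) -> list[str]:
    """Return absolute URLs for the file links found in a directory listing page.

    Subdirectory links (names ending in "/") are skipped here; a real crawler
    would queue them for recursive indexing instead.
    """
    files = []
    for match in LINK_RE.finditer(listing_html):
        name = match.group("name")
        if not name.endswith("/"):
            files.append(base_url.rstrip("/") + "/" + name)
    return files

# A toy directory listing page for a hypothetical server.
sample = (
    '<a href="?C=N;O=D">Name</a> '
    '<a href="docs/">docs/</a> '
    '<a href="report.pdf">report.pdf</a> '
    '<a href="data.csv">data.csv</a>'
)
print(extract_files(sample, "http://example.org/pub"))
# → ['http://example.org/pub/report.pdf', 'http://example.org/pub/data.csv']
```

A periodic run (every 2 to 4 days, per the description) would re-fetch each submitted server address, diff the extracted entries against the database, and insert or expire records accordingly.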