With up to 25k MAUs and unlimited Okta connections, our Free Plan lets you focus on what you do best—building great apps.
You asked, we delivered! Auth0 is excited to expand our Free and Paid plans to include more options so you can focus on building, deploying, and scaling applications without having to worry about your security. Auth0 now, thank yourself later.
Context for your AI agents
Crawl websites, sync to vector databases, and power RAG applications. Pre-built integrations for LLM pipelines and AI assistants.
Build data pipelines that feed your AI models and agents without managing infrastructure. Crawl any website, transform content, and push directly to your preferred vector store. Use 10,000+ tools for RAG applications, AI assistants, and real-time knowledge bases. Monitor site changes, trigger workflows on new data, and keep your AIs fed with fresh, structured information. Cloud-native, API-first, and free to start until you need to scale.
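As a rough illustration of the crawl-transform-upsert flow described above (not this product's actual API), the sketch below fetches a page, computes a placeholder embedding, and upserts it into a toy in-memory store. The names `fetch_page`, `embed`, and `VectorStore` are hypothetical stand-ins, and the hash-based "embedding" should be replaced by a real model or embedding service.

```python
# Illustrative sketch only: fetch_page, embed, and VectorStore are
# hypothetical stand-ins, not this product's API.
import hashlib

import requests                # assumes `requests` is installed
from bs4 import BeautifulSoup  # assumes `beautifulsoup4` is installed


def fetch_page(url: str) -> str:
    """Download a page and return its visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)


def embed(text: str) -> list[float]:
    """Placeholder embedding; swap in a real model or embedding API."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255 for b in digest[:8]]


class VectorStore:
    """Tiny in-memory stand-in for a real vector database."""

    def __init__(self) -> None:
        self.records: dict[str, tuple[list[float], dict]] = {}

    def upsert(self, doc_id: str, vector: list[float], metadata: dict) -> None:
        self.records[doc_id] = (vector, metadata)


store = VectorStore()
for url in ["https://example.com/"]:
    text = fetch_page(url)
    store.upsert(url, embed(text), {"url": url, "text": text[:500]})
print(f"indexed {len(store.records)} page(s)")
```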
A Python library for easy creation and manipulation of Google Earth KML and KMZ placemark files.
Please get your copy from http://pykml.cvs.sourceforge.net/viewvc/pykml/pykml/?view=tar
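As a hedged illustration of the KML placemark format such a library produces (not pykml's actual API), here is a standard-library sketch that writes a one-placemark KML file:

```python
# Illustration of the KML placemark format itself, not of pykml's API.
import xml.etree.ElementTree as ET


def make_placemark_kml(name: str, lon: float, lat: float) -> bytes:
    """Build a minimal KML document containing a single placemark."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    placemark = ET.SubElement(kml, "Placemark")
    ET.SubElement(placemark, "name").text = name
    point = ET.SubElement(placemark, "Point")
    # KML coordinates are "longitude,latitude[,altitude]"
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="utf-8", xml_declaration=True)


with open("placemark.kml", "wb") as f:
    f.write(make_placemark_kml("Googleplex", -122.0841, 37.4220))
```

A KMZ file is simply a zipped KML document (plus any bundled resources), which is why a single library can target both formats.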
This plug-in for Google Desktop is a simple web spider (Könguló is Icelandic for spider) that crawls websites you specify, e.g. intranet websites, and dumps them into Google Desktop. You must install Google Desktop prior to installing the plug-in.
Specto is a desktop application that watches for updates (websites, email, files...) and notifies the user. Note that we have moved to Google Code hosting: http://code.google.com/p/specto
Run applications fast and securely in a fully managed environment
Cloud Run is a fully managed compute platform that lets you run your code in a container directly on top of scalable infrastructure.
Run frontend and backend services, batch jobs, and queue-processing workloads, and deploy websites and applications, all without having to manage infrastructure.
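As a minimal sketch of what a Cloud Run-ready service can look like (assuming only that the platform injects the listening port through the PORT environment variable, defaulting to 8080), the container's entrypoint can be as small as this:

```python
# Minimal HTTP service for a container platform such as Cloud Run,
# which passes the listening port in the PORT environment variable.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # Cloud Run sets PORT
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Packaged into a container image (or built from source), such a service can then be deployed with `gcloud run deploy`.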
The sitemap_gen.py script analyzes your web server and generates one or more Sitemap files. These files are XML listings of content you make available on your web server. The files can then be directly submitted to Google.
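The generated files follow the sitemaps.org XML protocol; the sketch below is not the script itself, just an illustration of that output format for a hand-given list of URLs:

```python
# Sketch of the sitemaps.org XML format that sitemap files use;
# an illustration of the output format, not sitemap_gen.py itself.
import xml.etree.ElementTree as ET
from datetime import date


def write_sitemap(urls, path="sitemap.xml"):
    """Write a minimal sitemap listing each URL with today's lastmod date."""
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = date.today().isoformat()
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)


write_sitemap(["https://example.com/", "https://example.com/about"])
```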
This project has been renamed "pycopia" and extended; it is no longer maintained under this name. Pycopia is hosted on Google Code. Please go to: http://code.google.com/p/pycopia/
PyQe has moved to Google Sites and github. Main page on Google Sites: http://sites.google.com/site/pythonquickexecute/. Code at github: http://github.com/brakjoller/pyqe/tree/master. See you there!
Started as part of the Google Summer of Code 2005, this tool adjusts security settings on Linux systems, including firewall and SELinux policies. This tool aims to replace the system-config-securitylevel tool from Red Hat and Fedora Core systems.
Collect public data at scale with industry-leading web scraping solutions
See your detailed proxy usage statistics, easily create sub-users, whitelist your IPs, and conveniently manage your account. Do it all in the Oxylabs® dashboard. Save time and resources with a data collection tool that has a 100% success rate and does all of the heavy-duty data extraction from e-commerce websites and search engines for you. With our solutions and the best proxies, focus on data analysis rather than data delivery. We make sure that our IP proxy resources are stable and reliable, so no issues occur during scraping jobs, and we continuously work on expanding the current proxy pool to fit every customer's needs. Our clients can reach out to us at any time, and we respond to their urgent needs around the clock. Choose the best proxy service and we’ll provide all the support you need. We want you to excel at scraping jobs, so we share all the know-how we have gathered over the years.
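As a hedged sketch of how scraping traffic is typically routed through such a proxy service, the snippet below uses the `requests` library; the proxy host, port, and credentials are placeholders, not actual Oxylabs endpoints:

```python
# Placeholder proxy endpoint and credentials; consult the provider's docs
# for real hostnames, ports, and authentication details.
import requests

PROXY = "http://USERNAME:PASSWORD@proxy.example.com:8000"

response = requests.get(
    "https://example.com/",
    proxies={"http": PROXY, "https": PROXY},  # route both schemes via the proxy
    timeout=30,
)
print(response.status_code, len(response.text))
```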
mMAIM's purpose is to make it easy to monitor and analyze MySQL servers and to integrate easily into any environment. It can show master/slave sync stats and some efficiency stats, can return statistics from most of the SHOW commands, and more!
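This is not mMAIM's own code, but a sketch of gathering the same kind of numbers it reports, using PyMySQL (assumed installed) and standard SHOW statements:

```python
# Sketch only: pulls replication and efficiency stats directly with SHOW
# statements; connection parameters are placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="monitor", password="secret",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Questions'")
    questions = cur.fetchone()
    cur.execute("SHOW SLAVE STATUS")
    slave = cur.fetchone()  # None if this server is not a replica

print("Questions:", questions["Value"] if questions else "n/a")
if slave:
    print("Seconds_Behind_Master:", slave["Seconds_Behind_Master"])
conn.close()
```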
PYIGS is a Python package that provides an interface to Google Suggest results. It may be run standalone or incorporated as a module.
Currently it supports a basic query, with or without result match listings.
Later it will include a caching server.
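This is not PYIGS's actual interface; as an assumption about the kind of request such a package makes, the sketch below queries the public suggest endpoint used by browser toolbars and returns the suggestion list:

```python
# Assumption: the endpoint and response shape below are those of the public
# autocomplete service used by browser toolbars, not PYIGS's own API.
import json
import urllib.parse
import urllib.request


def suggest(query: str) -> list[str]:
    """Return suggestion strings for a query."""
    url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
           + urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8", errors="replace"))
    return data[1]  # response is [query, [suggestion, ...], ...]


print(suggest("python kml"))
```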
This software is a distributed replicated blob server (inspired by the Google File System paper: http://www.cs.rochester.edu/sosp2003/papers/p125-ghemawat.pdf). It stores your blobs (i.e., files) on a given number of your servers.
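To illustrate the replication idea only (the real server speaks its own network protocol, and these classes are hypothetical stand-ins), a blob can be content-addressed by its hash and written to several nodes before the write is considered successful:

```python
# Hypothetical in-process illustration of replicated blob storage.
import hashlib


class BlobServer:
    """One storage node: keeps blobs keyed by their content hash."""

    def __init__(self, name):
        self.name = name
        self.blobs = {}

    def put(self, key, data):
        self.blobs[key] = data
        return True


def replicate(data: bytes, servers, copies=3):
    """Store `data` on `copies` servers and return its content-addressed key."""
    key = hashlib.sha1(data).hexdigest()
    acks = sum(1 for s in servers[:copies] if s.put(key, data))
    if acks < copies:
        raise RuntimeError("not enough replicas acknowledged the write")
    return key


nodes = [BlobServer(f"node{i}") for i in range(4)]
key = replicate(b"hello blob", nodes, copies=3)
print(key, [n.name for n in nodes if key in n.blobs])
```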
A Python wrapper for the Google web API. Allows you to do Google searches, retrieve pages from the Google cache, and ask Google for spelling suggestions.
Multiple implementations for abstractive text summarization
This repo collects multiple implementations of abstractive approaches to text summarization.
It is built to run on Google Colab in a single notebook, so you only need an internet connection, not a powerful machine, to run these examples. All code examples are in Jupyter notebook format, and you don't have to download data to your device because the notebooks connect to Google Drive.
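Connecting a Colab notebook to Google Drive uses the `google.colab` helper, which is available only inside the Colab runtime; a minimal sketch:

```python
# Runs inside Google Colab only: mounts your Drive so notebooks can read
# datasets and save checkpoints without downloading anything locally.
from google.colab import drive

drive.mount('/content/drive')

# After mounting, Drive contents typically appear under 'MyDrive'.
import os
print(os.listdir('/content/drive/MyDrive')[:5])
```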
Learn vocabulary in any language. Create and edit your own set of flashcards. Filter them and shuffle them. Add hints and mnemonics. Fully configurable. This project has been moved to Google Code, and it is now called Flashblack.
Game engine + remake of Atari XL/XE game Robbo.
The level loader is compatible with the level files of the GNU Robbo project.
This is NOT the same project as pyrobbo on Google Code.