I use aperture as part of another application, of which it's a dependency (on Linux). I'm having trouble with some PHP memory limits. The script doesn't produce any kind of console output, so I don't know which URL aperture is crawling/scraping. Is there a way to find out which URL aperture is examining while it is running? I've tried "ps aux | grep webcrawler.sh" but it returns nothing.
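For context, here is a sketch of the check I ran, shown against simulated `ps` output (the PID and Java class name are illustrative, not real). One guess is that the wrapper script has already handed off to the JVM, so grepping for `java` rather than the script name may be needed:

```shell
# Simulated `ps aux` output: the webcrawler.sh wrapper has exited and only
# the JVM it launched is still running, so grepping for the script name
# finds nothing (the class name here is a made-up placeholder).
ps_output='me  4242  java -cp aperture.jar org.example.Crawler'

# Matches nothing -- this mirrors what I saw:
printf '%s\n' "$ps_output" | grep 'webcrawler.sh' || echo 'no match for script name'

# Matching on the JVM process instead does find the crawler:
printf '%s\n' "$ps_output" | grep 'java'
```

In real use the first command would be `ps aux | grep java` (or `pgrep -af java`), which lists the running JVM and its full command line.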