I have set up an OSS instance that has successfully indexed my site. After doing that, I added some paths to the Crawler Exclusion list. They seem to work fine, since the manual crawler complains when I try to crawl such a URL. However, those URLs still appear in my search results, which I'm guessing is because the old index is still present. So I need to throw out the old index and crawl the site again for fresh results, but I can't find a way to do that. Any ideas?
It seems I managed to clear out all the old pages by using the Delete tab with the query "h*".
Using a wildcard (*) as the first character of a query is not allowed, but since all pages were indexed with their URL as the ID, and hence start with "http://", the query above matched all of the current documents/pages.
You can use the query *:* (with no spaces between the characters) to delete all documents.
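If you prefer to do this outside the web UI, a delete-by-query call can also be scripted. The sketch below only builds the request URL for OpenSearchServer's REST-style "delete documents by query" call; the exact endpoint path, the host/port (`localhost:9090`), and the index name `my_index` are assumptions based on a typical OSS setup, so check them against your own installation before sending an actual HTTP DELETE.

```python
import urllib.parse

def delete_by_query_url(host, index, query):
    # Build a URL for a hypothetical OSS delete-by-query REST call.
    # The path /services/rest/index/{index}/documents is an assumption;
    # verify it against your OSS version's API documentation.
    return (
        f"{host}/services/rest/index/{urllib.parse.quote(index)}"
        f"/documents?query={urllib.parse.quote(query)}"
    )

# "*:*" matches every document, so a DELETE against this URL
# would wipe the whole index (use with care).
print(delete_by_query_url("http://localhost:9090", "my_index", "*:*"))
```

You would then issue an HTTP DELETE against the printed URL (for example with curl or the `requests` library), authenticating however your OSS instance requires.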