From: Patrick E. <Pat...@we...> - 2008-08-01 10:40:17
Hi,

I'd like to use the existing Aperture WebCrawler implementation with a website that requires HTTPS authentication and authorization with a username and password. I included the server certificate with System.setProperty("javax.net.ssl.trustStore", "pathToCertificate") and specified the username and password in the root URL. However, I always get the following exception:

java.io.IOException: Http connection error, response code = 401, url = https://projects.dfki.uni-kl.de/webdav/medico
    at org.semanticdesktop.aperture.accessor.http.HttpAccessor.get(HttpAccessor.java:192)
    at org.semanticdesktop.aperture.accessor.http.HttpAccessor.getDataObjectIfModified(HttpAccessor.java:104)
    at org.semanticdesktop.aperture.crawler.web.WebCrawler.processQueue(WebCrawler.java:308)
    at org.semanticdesktop.aperture.crawler.web.WebCrawler.crawlObjects(WebCrawler.java:148)
    at org.semanticdesktop.aperture.crawler.base.CrawlerBase.crawl(CrawlerBase.java:216)
    at medico.aperture.WebCrawlerExample.doCrawling(WebCrawlerExample.java:79)
    at medico.aperture.WebCrawlerExample.main(WebCrawlerExample.java:46)

Now, how could such connections be realized using the existing WebCrawler?

Best regards,
Patrick
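A note on the 401: the server is issuing an HTTP authentication challenge, and user:password embedded in a URL is generally ignored by java.net.HttpURLConnection, which the stack trace suggests Aperture's HttpAccessor uses underneath (an assumption on my part). One common workaround that needs no changes to the WebCrawler is to register a JVM-wide java.net.Authenticator before crawling, so the JVM answers Basic/Digest challenges transparently. A minimal sketch, with placeholder credentials "user"/"secret" and the class name CrawlerAuthExample invented for illustration:

```java
import java.net.Authenticator;
import java.net.PasswordAuthentication;

public class CrawlerAuthExample {
    public static void main(String[] args) {
        // Register a JVM-wide authenticator. HttpURLConnection consults it
        // whenever a server responds with 401 and an auth challenge.
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                // Placeholder credentials -- replace with the real ones,
                // ideally read from configuration rather than hard-coded.
                return new PasswordAuthentication("user", "secret".toCharArray());
            }
        });

        // ...then configure the trust store and start the crawler as before:
        // System.setProperty("javax.net.ssl.trustStore", "pathToCertificate");
        // new WebCrawler() ... crawler.crawl();
    }
}
```

Since the authenticator is global to the JVM, this also covers any redirects the crawler follows; if you need per-host credentials, getRequestingHost() can be checked inside getPasswordAuthentication().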