
Proxy Searcher 2.1 does not run

Burd
2013-04-23
2013-05-03
  • Burd

    Burd - 2013-04-23

    Looks like I accidentally deleted the support request again instead of approving it. I hate this interface where the approve and apply buttons are next to each other.

    So, below is the request:

    After installing and trying to run the program, 3 errors appear: "Exception has been thrown by the target of an invocation", "Object reference not set to an instance of an object" and "Object of type 'ProxySearchConsole.code.settings.allsettings' is not set yet".
    Thanks

     
  • Burd

    Burd - 2013-04-23

    Thanks for your feedback!

    Did you have a chance to look into this thread https://sourceforge.net/p/proxysearcher/discussion/yourfeedback/thread/0f9b2800/? There is a draft version of Proxy Searcher 2.2 there where (I hope) this defect was fixed.

     
  • Anonymous

    Anonymous - 2013-05-03

    Please tell me, how does it work?

     
  • Burd

    Burd - 2013-05-03

    The program has a few players.

    The first player (and probably the most important) is the search engine. Currently the program supports 3 engines: using Google to find new proxies in ip:port format; providing a list of URLs containing proxies in the same format; or searching files in a specific folder.
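    Whatever the source (Google results, a URL list, or local files), the search step boils down to scanning text for ip:port candidates. A minimal sketch, assuming a simple regex-based extractor (the actual pattern in the program may differ):

```python
import re

# Hypothetical extractor: scan arbitrary page or file text for ip:port
# candidates. The regex is an illustrative assumption, not the program's own.
PROXY_RE = re.compile(r"\b((?:\d{1,3}\.){3}\d{1,3}):(\d{2,5})\b")

def extract_proxies(text):
    """Return unique (ip, port) pairs in order of first appearance."""
    seen = []
    for ip, port in PROXY_RE.findall(text):
        if (ip, port) not in seen:
            seen.append((ip, port))
    return seen

# extract_proxies("found 10.0.0.1:8080 twice: 10.0.0.1:8080")
# -> [("10.0.0.1", "8080")]
```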

    Not all proxies that are found are workable in your Internet segment, so we need the second player: the proxy checker. There is no universal way to check whether a proxy is workable, because a proxy can give access to some resources and restrict others. Therefore I implemented a few proxy checkers, which you can find in the settings tab. You can always configure the settings as you want to get access to specific resources.
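    The idea of one such checker can be sketched as follows: a proxy counts as workable only if a request to a chosen test URL succeeds through it. The test URL, timeout, and the injectable opener_factory (added here for testability) are assumptions, not the program's actual API:

```python
import urllib.request

def check_proxy(ip, port, test_url="http://example.com/", timeout=10,
                opener_factory=None):
    """Return True if an HTTP request through ip:port to test_url succeeds."""
    handler = urllib.request.ProxyHandler({"http": f"http://{ip}:{port}"})
    opener = (opener_factory or urllib.request.build_opener)(handler)
    try:
        with opener.open(test_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False
```

    Because "workable" depends on which resources you care about, the real program lets you pick the checker and its target in the settings tab; this sketch fixes one target URL for simplicity.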

    If a proxy is workable, the program gets its geo IP location via the third player: a built-in geo IP table or an external service (depending on settings).
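    A built-in geo IP table is typically a sorted list of address-range starts looked up by binary search. A minimal sketch with made-up sample rows (the program's real table and format are not shown here):

```python
import bisect
from ipaddress import IPv4Address

# Hypothetical table: (range_start_as_int, country) rows sorted by start.
# These three rows are sample data for illustration only.
TABLE = [
    (int(IPv4Address("1.0.0.0")), "AU"),
    (int(IPv4Address("2.0.0.0")), "FR"),
    (int(IPv4Address("5.0.0.0")), "RU"),
]
STARTS = [row[0] for row in TABLE]

def lookup_country(ip):
    """Binary-search the table for the range containing the address."""
    i = bisect.bisect_right(STARTS, int(IPv4Address(ip))) - 1
    return TABLE[i][1] if i >= 0 else None

# lookup_country("1.2.3.4") -> "AU"
```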

    If the proxy checker cannot determine the proxy type (this is true for all built-in checkers), then we need the fourth player: the proxy type detector. It is a simple script that analyzes the headers the proxy sends to the server.
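    Such a detector can be sketched by classifying a proxy from the headers the server receives; the header names and the transparent/anonymous/elite categories below follow common conventions and are an assumption about the actual script:

```python
def detect_proxy_type(headers, client_ip):
    """Classify a proxy from headers as seen by the target server.

    headers: dict of received HTTP headers; client_ip: the client's real IP.
    """
    forwarded = headers.get("X-Forwarded-For", "")
    via = headers.get("Via", "")
    if client_ip in forwarded:
        return "transparent"   # reveals the client's real address
    if forwarded or via:
        return "anonymous"     # admits to being a proxy but hides the client
    return "elite"             # sends no proxy headers at all

# detect_proxy_type({"Via": "1.1 squid"}, "1.2.3.4") -> "anonymous"
```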

    Google search is implemented in a single thread to avoid being blocked by Google, but all checkers work simultaneously, so the search is really fast.
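    The threading model above (one sequential producer, parallel checkers) can be sketched with a thread pool; the worker count here is an arbitrary assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def check_all(candidates, check, workers=8):
    """Run `check` over candidates in parallel, keep those that pass.

    `candidates` comes from the single-threaded search; checks run
    concurrently, and map() preserves the original order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(check, candidates))
    return [c for c, ok in zip(candidates, results) if ok]
```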

    So, the answer to your question is the following: the main program flow works the same way as a human searching for and checking proxies, but much faster. You cannot get viruses during the search because the program downloads a site's content without executing scripts or downloading resources. So this program is a little bit safer compared with manual search.

    If you are interested in the technical implementation, you can get the latest sources of the program and investigate them at https://sourceforge.net/p/proxysearcher/code/ci/master/tree/ (it's open source :)).

     

    Last edit: Burd 2013-05-03
