From: Patrick C. <pco...@ii...> - 2003-09-19 20:11:39
Hey, welcome to the cecid-devel mailing list. This list is primarily for discussion on the development of the CECID client (possibly called FLACCID), or the CECID script itself. Mostly, I'll use this list to keep people informed of matters that are too technical to go onto the newsboard, lengthy items of news, etc. Please also use this list to post comments/suggestions/criticisms to our development team.

Firstly, I would like to welcome two new members to our development team - Chris and George. George is now in charge of developing the PHP script, so address any issues with that to him. Chris is going to be working on the client with me. If you've got any questions, post them to this group.

Secondly, I'm just announcing that CVS is up and running - as you can see from our SourceForge page. Visit http://sourceforge.net/cvs/?group_id=84849 to access the web-based interface (which doesn't seem to be working at this moment, but it claims it'll work soon...we'll have to wait and see). You can access the current development version of FLACCID (the CECID client) from CVS as well - the CECID-CLIENT module.

Finally, I'd like to publish the prototype protocol we're going to be using for the client - please post comments/obvious problems. My explanation is quite long, so bear with me...here goes:

You have 'Nodes' A, B, C, and D. Each is running a copy of the client. Each client listens on port 1337, and also listens for HTTP requests from the local machine on port 8000 (loopback only, for security). Each client, as it starts, contacts a 'Node Discovery Service' (the GWebCache is the kind of thing I'm thinking of - just search for that on Google) and downloads a list of 10 other nodes on the network. This function has already been implemented (buildlist and getnodes), if you look at the source. Nodes are stored in an array of 'Node' structs. These nodes allow the client to 'bootstrap' itself onto the network.
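A minimal sketch of that bootstrap step, for discussion. I'm assuming the NodeCaching server answers with a plain-text "host:port" list, one node per line - the wire format and the names Node/parse_node_list here are my illustration, not the actual buildlist/getnodes code in CVS:

```python
from dataclasses import dataclass

@dataclass
class Node:
    host: str
    port: int

def parse_node_list(text, limit=10):
    """Parse up to `limit` nodes from a NodeCaching server's response.
    Lines look like "host:port"; a missing port defaults to the
    client's listening port, 1337."""
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        host, _, port = line.partition(":")
        nodes.append(Node(host, int(port) if port else 1337))
        if len(nodes) == limit:
            break
    return nodes
```

The client would fetch that text over HTTP from each cache server in its shipped list and merge the results into its node array.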
Every client comes with a list of NodeCaching servers (aka CECID scripts) valid at the time of release, and it updates its list every time it is run.

When a client, say client A, receives a request for data on its local interface, it contacts the first node in its list (node B) and checks whether that node is up and running. If it is, A encrypts the HTTP request (it first asks B for its public key and encrypts using RSA) and adds a header containing a unique ID for the originating node, as well as a 'bounce count' - a random integer between, say, 3 and 7.

B receives the encrypted request and decrypts it. It adds A's key and IP address to a 'routing table' database, and decreases the bounce count by one. It then contacts C, encrypts the request for C, and transmits it. C decrypts the request and adds B's IP address and A's signature to its routing database. This way, if C receives a packet with A's signature, it forwards the packet to B, who forwards it to A. No one, except A, can know for sure that A's signature actually points to A's IP - A could just be sending the request on to someone else, for all B knows. So even if B or C are hostile nodes, they cannot link data sent through the network (or nodes' signatures) back to IP addresses. C decreases the bounce count by one.

In this way the data eventually reaches D, who, seeing that the bounce count is now at zero, formulates the request into a proper HTTP request and sends it out onto the web. When it receives a response, it creates a packet of information with A's signature on it, looks the signature up in the routing table (finding C's IP), and transmits the data to C. C looks the ID up in its table and transmits to B, who transmits to A. The response finally arrives back at the originating node, with none of the intermediate nodes (B, C, or D) having any idea of who originated the request.
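To make the bookkeeping concrete, here's a toy simulation of the bounce-count and routing-table logic described above (no crypto or sockets - Relay, forward_request, and route_reply are my names for illustration, not anything in the source):

```python
class Relay:
    """One node's routing state: maps a request signature to the IP
    of the hop it heard that signature from."""
    def __init__(self, ip):
        self.ip = ip
        self.routing_table = {}

def forward_request(origin_ip, relays, signature, bounce):
    """Pass the request along `relays`. Each hop records the previous
    hop under the signature and decrements the bounce count; the hop
    that reaches zero becomes the exit node (it would do the real
    HTTP fetch)."""
    prev_ip = origin_ip
    for relay in relays:
        relay.routing_table[signature] = prev_ip
        prev_ip = relay.ip
        bounce -= 1
        if bounce == 0:
            return relay
    return relays[-1]

def route_reply(relays_by_ip, exit_relay, signature):
    """Deliver the response by walking the routing tables backwards:
    each hop only knows the *previous* hop, never the originator.
    Stops when it hands off to an IP it has no relay state for."""
    path = []
    ip = exit_relay.ip
    while ip in relays_by_ip:
        ip = relays_by_ip[ip].routing_table[signature]
        path.append(ip)
    return path
```

Note the anonymity property falls out of the data structure: B's table only ever says "signature S came from IP X", and X may itself be forwarding for someone else.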
The only problem I can see with this is the lag - bounce your browser off some proxies using a standard proxy-chaining tool and you'll see what I mean. Two proxies gets quite slow, and by the time you chain 5 proxies together packets start dropping out and the connection gets lost into the ether. Perhaps you know how this aspect could be improved (this is my first attempt at designing something like this, so there's probably quite a few improvements :).

Nodes dropping out shouldn't be a problem - we can always make the clients send a 'request for route' packet to all the nodes in their list, searching for a signature-IP address match.

Apart from this, our other major concern should be security - we have to be very sure that information cannot be traced back to an originating IP address, as this is going to be provided as a total anonymization service.

So that concludes this update - I hope we can get some discussion happening about what I've said here. For more info, or clarification on any points, post to this mailing list, and I'll try and answer you as best as I can.

ptrck
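For what it's worth, the 'request for route' recovery could be as simple as asking every known node whether it holds the signature, something like this sketch (the function name and the dict-of-dicts table shape are purely illustrative):

```python
def request_for_route(routing_tables, signature):
    """routing_tables maps node_ip -> {signature: next_hop_ip},
    standing in for asking each node in the client's list directly.
    Returns (node_ip, next_hop_ip) for the first node claiming a
    match, or None if the route is truly gone."""
    for node_ip, table in routing_tables.items():
        if signature in table:
            return node_ip, table[signature]
    return None
```

Whether broadcasting signatures like this leaks anything useful to a hostile node is exactly the kind of security question worth hashing out on this list.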