Re: [mtoseland@cableinet.co.uk: Re: [Wfw-discuss] Hi people. Alive?]
From: Matthew T. <mto...@ca...> - 2000-11-01 17:29:46
On Wed, Nov 01, 2000 at 05:17:19PM +0000, Matthew Toseland wrote:
> On Wed, Nov 01, 2000 at 11:13:26AM -0500, Eric Ries wrote:
> > Hey, I'm thrilled to see some discussion finally getting started around
> > here.
> >
> > > > Not at all. It will be very nice.
> >
> > > Really. What is to stop it being spoofed massively? Every script
> > > kiddie from here to China will insert his home page as
> > > www.slashdot.org.
> >
> > I think this is an important problem, and it probably needs to be
> > addressed socially as well as technically. I originally proposed having
> > some kind of
> Well, the only technical solution that would really work would be people
> using signatures on all their web content. And then you've got the PKI
> issues. A quick fix would be to have each domain's registrar sign the
> domain's public key, but many domain registrars are not reliable enough
> for that to mean anything.

Of course, if people *want* to be cached, we can just ask them to publish a
/pubkey.asc and a .sign file for every file (other than /pubkey.asc and the
.sign files themselves). We might want the signatures to include some
headers, e.g. an expiry date, to avoid serving expired content - which
means sites have to re-sign frequently. This gives us automatic
authentication, at the cost of having to download that one file directly
over HTTP. If the whole point is to avoid the Slashdot effect, you may not
be able to do that. But you don't have to download the .sign files over
HTTP; they can be in Freenet. Definitely not an option for 90% of the web,
but maybe something to look at.

> > feedback system operate on top of normal Freenet operation. So, for
> > instance, if I get a bogus page in my browser, I need to be able to tell
> > the WFW system that it's no good. That information needs to propagate
> > back along the chain of servers that served this document. They need to
> > just purge it from their cache. I figure that we could just watch for
> > people to hit "reload" (or shift-reload or whatever) and take that as a
> > negative feedback event.
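A minimal sketch of the sidecar-signature scheme above, in Python. The
".sign" URL layout and the "Expires" header name are assumptions here, not
a settled format, and the actual check of the detached signature against
/pubkey.asc is omitted:

```python
# Sketch only: URL layout (/pubkey.asc, <path>.sign) and the Expires header
# are hypothetical conventions, not part of any agreed WFW format.
from datetime import datetime, timezone
from urllib.parse import urlsplit, urlunsplit


def sign_url(url):
    """Map a content URL to its hypothetical detached-signature URL."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=parts.path + ".sign"))


def pubkey_url(url):
    """Every participating site is assumed to publish /pubkey.asc."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, "/pubkey.asc", "", ""))


def is_expired(sign_headers, now=None):
    """Reject cached content whose signature carries a past Expires header."""
    now = now or datetime.now(timezone.utc)
    expires = sign_headers.get("Expires")
    if expires is None:
        return False  # no expiry header: treat as non-expiring
    return datetime.fromisoformat(expires) < now
```

Keeping the .sign files in Freenet, as suggested above, would mean only
/pubkey.asc ever has to come directly over HTTP.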
It's much better to purge too often than not often enough.

> If the document changes when reloaded, it's negative feedback. If not,
> it's OK. Are SVKs verified client side or server side? If client side,
> there may already be a suitable un-hit mechanism.
>
> > help too. Computers would have to "earn" their way into the center of
> > the network by not sending out bogus responses, and then would quickly
> > be pushed out again if they started spamming.
> Implies a web of trust. It also implies instant detectability of spam. A
> way to force servers to send a Content-MD5 field, combined with header
> prefetch, would be *so* helpful here.
>
> > Does that make any sense?
> >
> > > > > Does this project require features only in Freenet after 0.n,
> > > > > n > 4?
> > > >
> > > > Not at all. It could be implemented today.
> > >
> > > How? The original freshmeat editorial said basically:
> > > In parallel,
> > > 1. Do the HTTP request.
> > > 2. Request the URL as a key from Freenet.
> > > If it comes in through #1, insert it into Freenet.
> > > Is this still the proposal? You know it won't work. So some details,
> > > please? If you just insert http://www.blah.com/ into Freenet as an
> > > SVK, it will not be updatable. And it is infinitely spoofable. You can
> > > get around the first problem with pseudo-updating (at quite a large
> > > cost). You can't get around the second problem in general, although
> > > you could define a metadata format to encapsulate a URL, insert the
> > > file separately as a CHK (to avoid duplication), and return an SVK for
> > > the inserted file (metadata), which would be a pointer to an archived
> > > web site. This won't be transparent, though.
> >
> > Why would that not be transparent?
> Because the above mechanism gives you an SVK, which you cannot get
> transparently from the URL after the event.
>
> > > > > Does this project want help?
> > > >
> > > > Yes! It could be implemented pretty easily as a proxy.
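For reference, the Content-MD5 header mentioned above is just the
base64-encoded MD5 of the entity body (RFC 1864), so a cache could detect a
changed document from headers alone, without refetching the body. A minimal
sketch:

```python
import base64
import hashlib


def content_md5(body: bytes) -> str:
    """Content-MD5 per RFC 1864: base64 of the 128-bit MD5 of the body."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")


def matches(body: bytes, header_value: str) -> bool:
    """True if the cached body still matches the server's Content-MD5."""
    return content_md5(body) == header_value


print(content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```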
> > > > I've been thinking lately that you could implement a generalized
> > > > proxy that would work with any peer-to-peer network, or a
> > > > combination of them. That would be way cool.
> > >
> > > It should definitely be a proxy. However, squid redirectors are not
> > > the way to go IMHO; we want to send the HTTP request *in parallel*
> > > with the Freenet request. But it's not that hard to write a simple
> > > threaded proxy.
> >
> > I think that is correct. It would be nice if it could integrate into
> > one or more browsers without users needing to mess with their proxy
> > settings, though.
> >
> > Eric
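The parallel-request idea above can be sketched as a race between fetchers.
The fetcher callables here are placeholders (a real proxy would speak HTTP
to the origin server and FCP to a Freenet node); only the racing logic of
the threaded proxy is shown:

```python
# Sketch only: `fetchers` maps a name to a stand-in callable, e.g.
# {"http": http_fetch, "freenet": freenet_fetch}; neither transport is
# implemented here.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait


def fetch_first(url, fetchers, timeout=10.0):
    """Start every fetcher for `url` at once; return (source_name, result)
    for whichever answers first."""
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        futures = {pool.submit(fn, url): name for name, fn in fetchers.items()}
        done, _ = wait(futures, timeout=timeout, return_when=FIRST_COMPLETED)
        if not done:
            raise TimeoutError("no source answered in time")
        winner = next(iter(done))
        # Note: leaving the `with` block waits for the losing fetcher too;
        # a real proxy would cancel it, or use a late HTTP result to insert
        # the document into Freenet.
        return futures[winner], winner.result()
```

If the HTTP fetch wins, the proxy can insert the document into Freenet in
the background, which is the behaviour the freshmeat editorial described.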