Re: [Wfw-discuss] Hi people. Alive?
From: Brandon <bl...@ut...> - 2000-11-01 19:34:41
> Really. What is to stop it being spoofed massively? Every script kiddie from
> here to China will insert his home page as www.slashdot.org

Ah. I didn't think that's what you meant by abuse. That is indeed a tough problem. I don't think it has a solution for our usage model, that being that proxies run by random web users transparently upload pages whenever they are browsing them on the web. In a truly decentralized system, there is no central authority to broker this authentication.

Of course, you could have the web publishers publish their documents in Freenet. Also, you could have people publish their pages in their own cryptographically signed subspace. So when Bob browses, he publishes his cache, and people can select Bob's cache among the many to fetch pages from. In the latter case, you might as well just set up a squid proxy. In the former case, well, that's an interesting project, though very different from the original usage model.

I think the original usage model is still useful. I don't think spoofing will be all that big of a problem. If the number of legitimate users is significantly greater than the number of spoofers, then, statistically, you're going to get the right page most of the time.

This means, of course, that you can't trust information you get from Freenet. This is true even if you have a cryptographic subspace, because with digital signatures the only knowledge you gain is that you're talking to the same person you talked to last time. If it's your first time viewing a site, no amount of cryptography will help you. Really, trusting information because it comes from the same DNS name that was linked from somewhere else is rather silly. DNS and IP can be spoofed, and on top of that, sites can be hacked.

Fetching web pages out of Freenet makes them both easier and harder to spoof. If you get there first, you can just run a Perl script. If you don't get there first, there's pretty much nothing you can do.
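The "statistically, you're going to get the right page most of the time" argument can be made concrete with a sketch: if a reader fetches the same page from several independent caches and keeps the copy the majority agree on, a small minority of spoofers is outvoted. This is a hypothetical illustration, not anything in Freenet itself; `pick_majority` and the byte-string page bodies are my assumptions.

```python
# Hypothetical sketch: fetch several cached copies of a page and keep the
# body that the most caches agree on. If legitimate cachers outnumber
# spoofers, the majority copy is the real page.
from collections import Counter
from hashlib import sha1


def pick_majority(copies):
    """Return the page body the most caches agree on.

    copies: list of page bodies (bytes) fetched from different caches.
    """
    if not copies:
        return None
    # Group identical bodies by content hash; take the largest group.
    tally = Counter(sha1(body).hexdigest() for body in copies)
    winner_hash, _count = tally.most_common(1)[0]
    for body in copies:
        if sha1(body).hexdigest() == winner_hash:
            return body


# Example: three honest caches, one spoofer.
copies = [b"real page", b"real page", b"spoofed!", b"real page"]
print(pick_majority(copies))  # b'real page'
```

Note this only helps against uncoordinated spoofers; it offers no guarantee if a majority of responding caches are malicious, which is exactly the "can't trust information you get from Freenet" caveat above.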
> > > Does this project require features only in freenet after 0.n, n > 4 ?
> >
> > Not at all. It could be implemented today.
>
> Is this still the proposal? You know it won't work. So some details, please?
> If you just insert http://www.blah.com/ into Freenet as an SVK, it will not
> be updatable. And it is infinitely spoofable. You can get around the first
> problem with pseudo-updating (cost quite large). You can't get around the

You would use a KSK. SVKs are only useful when you know who the publisher is from previous works. An SVK identifies a group of files published by the same person.

You would use pseudo-updatability. The cost is not large if you use date-based enumeration and execute requests in parallel.

> second problem, in general, although you could define a metadata format to
> encapsulate a URL, insert the file separately as a CHK (to avoid
> duplication), and return an SVK for the inserted file (metadata), which
> would be a pointer to an archived web site. This won't be transparent
> though.

You could do that transparently, but I don't see why you'd want to use SVKs at all. KSK-to-CHK redirects already happen transparently.
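Date-based enumeration with parallel requests can be sketched as follows: each day's version of a page is inserted under a KSK that embeds the date, and a reader requests the most recent few dates in parallel, keeping the newest key that resolves. This is my reading of the scheme, not a real client implementation; the `KSK@url//YYYY-MM-DD` naming and the `freenet_fetch` callable are assumptions.

```python
# Sketch of date-based pseudo-updating: try the KSKs for the last few
# days in parallel and keep the newest version that exists. The key
# naming convention and fetch callable are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from datetime import date, timedelta


def date_keys(url, today, depth=7):
    """KSK names for today and the previous depth-1 days, newest first."""
    return [f"KSK@{url}//{today - timedelta(days=n):%Y-%m-%d}"
            for n in range(depth)]


def fetch_latest(url, freenet_fetch, today, depth=7):
    """Request all candidate keys in parallel; return the newest hit."""
    keys = date_keys(url, today, depth)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(freenet_fetch, keys))
    # keys are ordered newest-first, so the first hit is the latest version
    for key, body in zip(keys, results):
        if body is not None:
            return key, body
    return None, None


# Example with a fake fetcher: only the version from two days ago exists.
store = {"KSK@www.blah.com//2000-10-30": b"page v3"}
key, body = fetch_latest("www.blah.com", store.get, today=date(2000, 11, 1))
print(key, body)  # KSK@www.blah.com//2000-10-30 b'page v3'
```

The cost per lookup is bounded by the enumeration depth, and because the requests run concurrently, the latency is roughly that of a single Freenet request rather than `depth` sequential ones, which is the "not large" cost claimed above.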