Sorry to drop this in so late, and I must apologise in advance that I
actually didn't finish reading this thread - it's past my bedtime and I
have been catching up on old emails.
Well, if you can accept that, then perhaps I can start my response as I
would have had it been timely:
Excuse me, but this all sounds mad. I don't want to discourage you from
having a useful conversation and perhaps coming up with something even
more wonderful, but as a sometimes administrator and end-user, and
ambitious developer, hoping to (someday!) use 0install .... I think you
are trying to solve the unsolvable and achieve the unachievable.
And in the process, you are losing your way and getting overwrought
about what is actually very unimportant compared to what you have nearly
achieved.
install-on-demand is never going to be a feature of linux that is used
by dial-up users to run their whole distro. Full stop.
It is EXTREMELY sexy to broadband users.
Linux is already brought almost to its knees by interdependency
resolution problems as it is. Try debian unstable sometime. Please do
not make 0install work like that.
As a longtime dialup user, and someone who has supported remote dialup
and broadband users professionally for some years, I think it is
important not to promise something that you can't deliver. If you
figure out a way, that is great, but if you expect to achieve dependency
resolution between disparate packages from developers who do not
communicate or even recognise (let alone know) each other, then you set
yourselves up to be very disappointed and 'stuck' if it doesn't work.
And by spending all your effort on that, we all lose the prize that is
nearly achieved already.
Space is cheap and getting astronomically cheaper. Broadband allows
users to download a lot, quickly. These users can establish what
packages are really worth downloading, the dialup users follow their
lead (with some omissions, and based on a CD-installed system ...
unless they are real geeks ;).
The thing is, whenever I have tried to imagine 0install in use, I have
imagined two scenarios (and no, I still haven't even tried it so you may
already surpass my expectations ;)
The two scenarios I picture are this:
1) developers of packages like RoX distribute each of their apps with
libraries it depends on.
2) distros such as debian (or new distros enabled by 0install)
distribute a whole distro as a series of 0install apps, and they worry
about making sure that libraries are suitable for the apps.
Any 0install package that tries to link against a library that the
developer is not 100% certain is being made available in exactly the
correct version is deemed broken.
0install provides the hooks for these people to do their thing, but
doesn't try to fix what would be broken if it was manually installed by
a user, i.e. linking an app against a glibc it was not compiled with, etc.
The user community develops its own reputation system(s), be it
discussion forums or some automated thing. Developers who want to tell
the user community that their products should be treated as a single
entity do something like signing each other's PGP keys. They should know
each other, know that they can deliver, and work together. If one screws
up, the whole interrelated group loses reputation with the user
community. Full stop.
What I see in this conversation is two extremes. One is, I think,
unachievable: 100% reliable remote software installation from
untrustable sources. The other is already supported by a competing
platform (i.e. completely unreliable but easy installation). We must
tread the middle path. If all 0install manages to do is duplicate
Windows Update, in terms of reliability and user trust, then it is a
waste of time to develop.
But if, like the Hurd, it never happens, then no matter how great the
potential, it will fail to fulfil a great promise.
Ian Clarke has been kind enough to lend us his ear, or at least his
mouth; if you want a secure 0install, then set up a Freenet URI
interface using content-hash keys (I forget what the technical term is,
but the Freenet scheme where the address of a file is given by the hash
of the file).
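The content-hash idea can be sketched in a few lines of Python; the
`chk:` prefix and the choice of SHA-1 here are illustrative, not
Freenet's actual key format:

```python
import hashlib

def content_hash_address(data: bytes) -> str:
    # The file's address is the hash of its own contents, so a client can
    # verify that whatever it fetched really matches the address it asked for.
    return "chk:" + hashlib.sha1(data).hexdigest()

addr = content_hash_address(b"hello world")
# Any tampering changes the hash, and therefore changes the address too:
assert content_hash_address(b"hello world!") != addr
```

With such addresses there is nothing left to trust about the server: the
client re-hashes the bytes it received and compares.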
Once 0install gets going, it really is buyer beware: the users have to
figure out whom they can trust to tell them which sources to sign up
for. All 0install is meant to do, IMO, is facilitate the connection
between the user's forefinger and the remote server. Assume that the
network can be trusted to connect to the right destination, but assume
no more. Users who want to make sure it really does reach the right
destination can use whatever encapsulation transport they like, or set
up their own private network with CDROM-mounted sources, or whatever.
Don't bite off more than you can chew :)
On Mon, Nov 03, 2003 at 08:35:16PM +0100, Joachim Kupke wrote:
> Thomas Leonard:
> > > Erm... am I mistaken in my assumption that it is WAY more likely for
> > > someone *in between* A and B to actively eavesdrop on the data that A
> > > tries to send to B than it is for A to send rogue software in the first
> > > place?
> > We don't care about eavesdropping, only interception. And it seems to me
> It is customary in cryptography to call those interceptors "(active)
> eavesdroppers". Don't know why exactly that is; but we are actually
> referring to the same thing.
> > that server break-ins are far more common than man-in-the-middle attacks (I
> If that were the case, there would be no need to use GPG signatures for
> email, then. Provided, of course, that mail servers on both ends haven't
> got broken into.
> > don't recall ever hearing of someone getting a trojaned version of a piece
> > of software due to this, although doubtless someone has managed it at some
> > time).
> This may be because you are only considering "software" now (which is
> legitimate since we are talking about zero install). But can we really
> trust each and every possible "man in the middle"? Can we really trust
> every proxy server (that may actually serve the index file)? Can we
> really believe that your local ISP is no evil-doer? Can we really
> believe that it would not even be possible for someone to redirect that
> symlink u0i/debian.org/trustworthy.software.org to u0i/evil.com? (The
> latter shows why we should not only consider "software" itself, but also
> every sort of meta-data.)
> > > If Alice wants to make sure that she downloads the same version of
> > > evil.com's software as Bob does, then she must compare (a hash of) her
> > > binary to his, no matter whether you use zero knowledge or offline
> > > signatures.
> > Yes, but she can convince Bob that evil.com sometimes sends out trojans,
> > even if Bob's copy is OK.
> Which means that if Alice is a complete stranger to Bob, he can still
> believe her. If they know each other, he may believe her anyway. If he
> doesn't know her, well, then Alice will have to find someone else who
> got trojaned in order to support her claim. Besides, evil.com may claim
> at any time (say, hourly) that after getting compromised, they have had
> to set up a new key pair, thus ultimately taking GPG signatures to
> absurdness: It's just more complicated to subvert prescribed GPG
> signatures, but your evidence gathering only hurts the innocent.
> What's the big deal anyway? If I set up my web page such that whenever
> it's accessed by your IP, it will show some insulting text, what are you
> going to do about it? Demand that every web page had better be signed so
> you had a permanent proof? No, you will show your findings to a witness
> and report me to authorities (or, of course, just ignore my web site).
> I am so insisting on this because with the web, we seem to have learned
> that putting some fancy flash animation on your web page and at the same
> time requiring your users to install the appropriate plugin will just
> lead to the exclusion of some of your potential readers (or, for that
> matter, customers). Why repeat this mistake for zero install? If you
> insist that my site be GPG signed, you won't (saying this in modesty)
> enjoy its potential benefits. (Actually, I'm considering to finally GPG
> sign it because I may be less unrelenting in this concrete case than it
> Allowing site maintainers to decide for themselves whether they want to
> use permanent or zero knowledge signatures seems like a wiser decision;
> forcing someone to follow some highhanded scheme (where compatibility is
> not the issue) will only lead to trouble. You seem to prefer a
> "cathedral" style zero install to a "bazaar" style one.
> > Not that I care about this particularly; it's just that zero knowledge
> > is much more complicated, and doesn't have any apparent benefits.
> Whew, adding about a dozen lines of C code to the client (one may argue,
> though, that the functionality, like GPG, belonged into a separate
> binary) and installing about two CGI scripts whose functionality amounts
> to about half a dozen lines in python on the server side doesn't seem
> to be too complicated. Granted, the benefits are more subtle than
> apparent. That's why it would be a wise thing to instruct a packager
> about which kind of signature they would rather want to use.
> > > With offline signatures, of course, I could download from your warez
> > > site, threaten to report you to authorities, you could hastily remove
> > > everything and I would still hold proof in my hand. Thus, ironically,
> > > offline signatures facilitate lynch justice because I can blackmail you
> > > with proof that the police cannot get by themselves.
> > Yet again, this is just "I want to break the law and not get caught".
> Yes, your scheme facilitates this: "I want to blackmail you and not get
> caught," precisely.
> > Clearly, someone is going to get burnt in this; either the person whose
> > work was illegally distributed, or the person who made the copies.
> Protecting your work from illegal copies is difficult no matter what.
> Most probably, zero install would never be a platform of choice for
> warez traders (independent of what kind of signature gets used), it's
> just more likely that pirates go through the extra "fuss" of downloading
> from an FTP site (or similar) and unpacking manually.
> > Given this choice, I tend to favour the protocol that favours the person
> > who didn't break the law.
> Well, if two persons are evil-doers, you must define what (or, actually,
> who) is the lesser evil. We cannot do this; even for a herd of lawyers,
> this would not be possible, I think. It misses the point, anyway. I am
> pretty sure that vi may have been used in the past to write threatening
> letters. While we seem to agree that this should not mean that vi was to
> be forbidden, your suggestion is to add a feature that prevents (or, at
> least, makes difficult) writing texts containing offensive phrases. This
> is far less far-fetched than it may sound. As for copyrighted material,
> vi should make copy&paste impossible, of course; who would want to
> support a felon who typewrites texts from protected books, etc.?
> > You're quite free to get a third party to check your site and confirm that
> > the illegal software was quickly removed, but you can't deny that it ever
> > existed.
> This just seems to open up a real mess of legal problems (like, in a big
> project with several site maintainers, who is to blame if something does
> get wrong?).
> Do you have a bulletin board? Is every note on that board signed? Or is
> it more, like, a selected group of people have keys to its window, and
> that is supposed to convince the majority of passers-by of the fact that
> what is behind the window is authentic?
> Isn't it customary at English universities to publish examination
> results using temporary keys for students? (We do it with permanent
> keys, here; these are supposed never to be associated with their owners
> in public.) What if one of the students tries to deceive (thus, they
> "break the law", don't they)? Naturally, they get listed (if at all) by
> their temporary key. After the examination (session) is over, only
> authorities can tell whom to keep an eye on.
> This is (as far as I can tell, as always) a legal principle: If you
> break the law, you get prosecuted and sentenced (by a selected person
> whom they call a judge, not by the mob), but after you have served that
> sentence, you are supposed to be equal to the innocent. (There may be
> severe crimes where in practice, this principle does not hold [like
> Megan's law in the U.S.], but "my" local constitution explicitly demands
> this procedure.)
> Your offline signature scheme preserves proof to an actually
> long-forgotten crime. With non-digital proofs, there are perpetuation
> chambers for pieces of evidence belonging to a trial that has long been
> closed. With your scheme, you are creating a self-perpetuating proof,
> that is literally: an eternally retrievable proof. Retrievable by
> anyone, not just authorities.
> Naturally, this doesn't mean that zero install would be unconstitutional
> under the German constitution. It means that keeping proofs in the way
> that zero install facilitates would be unconstitutional. (This is like
> with the rifle industry where it's not the manufacturer but the one who
> pulls the trigger that gets prosecuted.) But the bottom line is that
> similar to having (not: keeping) porn in your browser's cache because
> you inadvertently stumbled over some web site can bring you to jail (or,
> at least, into trouble), keeping those
> /var/cache/zero-inst/*/.0inst-key* files may not be able to be squared
> with the German constitution.
> What is the rationale? That lawmakers should be more reasonable, better
> educated? Anytime. That we should not care just because GPG is in more
> wide-spread use than are zero knowledge signatures? No way.
> > You have two arguments against the GPG system:
> > 1) It allows injecting old data. This is easily fixed by adding a
> > 'valid-until' date, if it ever became a problem.
> Right, although it means that your clock has to be set properly. In my
> opinion, this adds "more complexity" to the system because it relies on
> NTP/SSL or something; otherwise, your ordinary Linux box (it does
> synchronize with a time server, does it?) would still be prone to
> the described attack. Still, I agree that such an attack is considerably
> less likely with a time stamp. (Anybody who would like to quantify this
> Besides, this scheme is less robust in that you have to know in advance
> when your software gets exploited so you can set expiry dates properly.
> Since naturally you cannot tell, you will have to set default expiry
> times like 24 hours. That means that once a day you must re-sign your
> zero install site.
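A sketch of what such a 'valid-until' check might look like on the
client side (the field names here are hypothetical, not 0install's
actual index format):

```python
import time

def is_index_fresh(signed_at: float, valid_until: float, now: float = None) -> bool:
    # A replayed copy of an old index fails this check once its validity
    # window has passed -- but only if the client's clock is roughly correct.
    if now is None:
        now = time.time()
    return signed_at <= now <= valid_until

# An index signed an hour ago with a 24-hour expiry window:
signed = time.time() - 3600
expiry = signed + 24 * 3600
assert is_index_fresh(signed, expiry)
# The same signature replayed two days later is rejected:
assert not is_index_fresh(signed, expiry, now=signed + 48 * 3600)
```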
> > 2) You want to avoid being held accountable for your actions. But
> No, I do want to be held accountable for my actions by a competent
> person (or organization or authority or whatever), but not by the mob.
> > > So you want zero install to become an evidence gathering tool rather
> > > than a zero installation tool.
> > This isn't freenet. I don't intend to add features to gather evidence, but
> > neither do I intend to add features to prevent it.
> In summary, your point is: GPG signatures are well-understood,
> heavily used, and the current system just "works just fine, thank you."
> My point is that zero knowledge signatures may even be a more natural
> way of thinking about a signature, that they are of a comparable
> "complexity", and that there are scenarios where with offline
> signatures, you(r users) may end up fighting lost causes, either legally
> or morally.
> > It's not something I'm at all interested in.
> Naturally, you are free not to be interested in a solution to a problem
> whose existence you find hard to accept.
> > I think the anonymous nature of the internet already allows what you want.
> > At the moment, by distributing dodgy software, you risk people's trust in
> > your key (but not in you yourself, unless you take extra steps to link
> > yourself to the key). Zero knowledge allows you to distribute dodgy
> > software without even risking your key... but there is no reason to want
> > to do this.
> Right, there is no reason to risk one's reputation to save one's key.
> Unless, of course, someone finds their reputation easier to "generate"
> than a new key pair.
> > > You may be knocking on wood, I prefer to be---as you call it---paranoid.
> > I'm not interested in writing code to fix problems that don't exist yet.
> > Maybe in a few years we'll have this level of security at the HTTP layer
> > (so when I say "Get http://foo.com/index" I know that I'm really talking
> > to that server right now and getting the results it sends).
> Of course, one could delegate the whole responsibility to the external
> program, claiming that it was wget's job to make sure it's fetching
> proper files. (You would have to implement some sort of inter-key trust,
> again, of course.)
> > Until then, we only need to implement our own hacks if the problem becomes
> > severe enough that we just can't wait. Currently, the problem is
> As is always the case with cryptography: If you think you are dealing
> with a problem of little severity, you can just forget about
> cryptography altogether. Curiously, with mail it seems to be a more
> important problem, because people seem to make darn sure that PGP/GPG
> works the way it is supposed to.
> Actually, I, too, used to think that the signature problem did not have
> top priority. Until you shut down the possibility *not* to use
> signatures. Hence, you seem to think that enforcing (permanent)
> signatures is important enough to create a problem:
> > non-existent, but we still already implemented the GPG system, which
> > should foil most attacks if anyone actually tried them.
> While the problem you refer to may be non-existent, your users face the
> problem that they can no longer use
> u0i/www-i1.informatik.rwth-aachen.de/nedit.org/NEdit (among others). As
> I said before I am probably going to call the new 0build script sometime
> soon, and I am a bit ill-at-ease about exaggerating the importance of my
> site (which essentially only provides some things that can be downloaded
> elsewhere), but honest: Forcing people to upgrade (as with every sort of
> force) is just as bad as is forcing them to use some sort of signature
> they may be uneasy with. (In fact, I don't feel *forced*; I can just go
> on using my old server with my old clients, of course.)
> > > > On the other hand, zero knowledge is inherently insecure, because you have
> > > > to store the secret key on the server. And not just on the server, but
> > > > readable by a CGI script!
> > >
> > > This is just not true. On the server, you only keep, say, an RSA style
> > > signature (and your public key, but this one---duh!---is public).
> > Maybe 'key' is the wrong word then, but you have to have some kind of
> > secret on the server, whatever system you use. If someone breaks into a
> > zero knowledge server then they can make a copy of its disk and distribute
> > the old index forever.
> Yes. You keep a signature on the server, as with your system. Let's
> compare: In your system, distributing the old index forever can be
> accomplished both by breaking into the server and by intercepting
> traffic. With zero knowledge signatures, this can only be done by
> breaking into the server. Think that your server is likely to be broken
> into? Distribute your signature to two servers, both of which need to be
> contacted by a client in order to verify the authenticity of your index
> file. This is "sharing a secret", another useful protocol in
> cryptography. (This one adds complexity, though.)
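Secret sharing in its simplest 2-of-2 form can be sketched like this (an
illustration of the general idea, not necessarily the exact scheme the
poster has in mind):

```python
import os

def split_secret(secret: bytes):
    # XOR splitting: each share on its own is indistinguishable from
    # random bytes, so a break-in on one server reveals nothing.
    share_a = os.urandom(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    # Only a client that contacts both servers can reconstruct the value.
    return bytes(x ^ y for x, y in zip(share_a, share_b))

secret = b"signature material"
a, b = split_secret(secret)
assert combine_shares(a, b) == secret
```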
> > > Server generates random value r and sends E(r) to the client.
> > How does the server get E(r) if it doesn't have the secret key?
> Public-key cryptography means you can encode data using only a public
> key. In RSA, E(r):=pow(r,e) mod n. Both e and n belong to your site's
> public key.
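A toy example with textbook-sized RSA numbers (far too small for real
use) shows why no secret is needed to compute E(r):

```python
# Classic toy RSA parameters: p = 61, q = 53.
n = 61 * 53        # 3233, part of the public key
e = 17             # public exponent, also public
d = 2753           # private exponent -- never stored on the server

def E(r: int) -> int:
    # Encryption uses only the public values e and n.
    return pow(r, e, n)

r = 65
challenge = E(r)
# Only the holder of the private exponent can recover r:
assert pow(challenge, d, n) == r
```

So the server can issue the challenge E(r) while holding nothing that
would let an intruder impersonate the key owner.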
> > > > Imagine if someone tried to use a zero knowledge script on SourceForge!
> > >
> > > Yes, what's the big deal?
> > Anyone can copy the script and all its inputs. Presumably this would allow
> > them to pretend to be the original site, while actually running the
> > scripts on their own server?
> Yes, in an environment where you publish your signature your signature
> is public. Provided SourceForge doesn't change (that is, protect one
> project's private data from another one's maintainers), a site
> maintainer should be advised not to publish their signatures there
> unless they wanted to employ permanent signatures in the first place.
> For everybody else, zero knowledge signatures seem to be a considerable
> > You think a lack of concrete evidence would stop SCO from suing you?
> Just my favorable disposition to extreme examples doesn't mean there is
> nothing intermediate. Even in a few centuries, there will probably be
> people around who like to get others into trouble without a concrete
> reason. A phenomenon that I would describe as "intermediate" is
> (probably internationally little renowned) Günther Frhr. v. Gravenreuth,
> a German lawyer who has become famous for suing, like, arbitrary targets
> for trademark violations (mostly in the context of web pages).
> Technically, I should add, these were mostly dissuasions, but he may
> have taught us that even these may become pretty expensive. (In fact,
> local law suggests using the legal path of a dissuasion if you feel
> mistreated: You simply write your personal tormenter a letter asking
> them to stop their evil-doing, so you can both avoid taking to the
> courts. Since you had been as nice as to prevent your legal adversary
> from having to pay big for legal proceedings, they are supposed to
> reimburse your expenses for an attorney.)
> There have been concrete cases where people (or companies) received
> those dissuasions because they used some trademarked n-letter acronym on
> their web page where I cannot tell you what n was exactly, but it was
> like three or four characters. Would I want that trouble just because my
> zero install archives contain such an acronym?
> > > At least, I cannot see an overwhelming percentage of web sites switching
> > > over to https.
> > Only because it's so expensive and complicated (and all those error boxes
> > about invalid signatures scare users).
> There may be several reasons, actually, but yes, its handling is
> unnecessarily complicated. Personally, I feel more scared about users
> who painlessly push the "ignore" button when their browser tells them
> that something nasty is going on. But I'm sure you could have told
> because I am so paranoid.
> > > Well, then you would not lose anything using zero knowledge signatures,
> > > would you?
> > Introducing extra complexity always costs.
> Define "complexity." (My formal job description involves this term, but
> I would still like to ask.) In terms of round trips of communication,
> yes, we have slightly greater complexity. But you wouldn't bother about
> a less efficient wget, would you?
> In terms of maintainability, I don't see my proposition involving much
> more than your typical bug-fix would also involve. Unless you consider
> the server side CGIs a problem. That's why I'm advocating to give site
> maintainers the freedom to decide for themselves.
> > If I google your website and find the word "atomic" somewhere, can I also
> > claim you work on nuclear programs?
> Since this question was interesting enough for this university's legal
> department, we are now forced to add a disclaimer to every web page.
> That's a link to a centrally maintained and legally (somehow) checked
> text that is supposed to relieve university web masters of every legal
> responsibility. You sure you want this for zero install?
> > > As I am no attorney, I cannot swear to this, but local law would be that
> > > it's illegal even to unknowingly use copyrighted material.
> > But you've already admitted to being a Linux user. If someone finds
> > copyrighted code in Linux, surely you cannot be sued because you gave it
> > to a friend?
> No, but regulations are quite strict. You can make up to seven copies of
> copyrighted material, and I think there is also a bound on the "amount
> of information" that you are copying. If that copyright owner manages to
> prove that I have copied Linux more than seven times, I can be sued.
> Although I would think that a court would honor a good-faith attempt to
> withdraw my Linux copies (i.e. call all [but seven] of my friends up and
> tell them to nuke their copies). Sounds like complete nonsense? To me,
> it does. It's the law, though.
> > to a friend? What if such code is found in commercial freely-distributable
> > software... will their users be sued too?
> Before you ask those famous questions, like which nationality is a
> newborn to receive that is given birth to by a mother from country A and
> a father from country B while in a plane from country C and in transit
> through country D and where midwives stem from even more countries...
> I don't know. If Alice steals Bob's shoes and sells them to Charlie,
> though, Charlie must return them to Bob without seeing any money. He is
> free to sue Alice in a civil court, I would think. My, do I sound like
> an attorney? I had wanted to avoid legal trouble, and this is the best
> demonstration of why you must not assume that there is no room for zero
> knowledge signatures.
> > > You seemed to suggest that instead of a host name, an IP address would
> > > be requisite (or something similar).
> > Any address field which routinely contains a long string of random digits
> > and letters won't look suspicious if you use an IP address instead of a
> > domain name.
> You think that if instead of u0i/vim.org,sdwldjsfh/bin/vim, a user is
> told to run u0i/18.104.22.168,euirdfuyt/bin/vim, they would willingly accept
> > > Accordingly, accessing u0i/site.dom could prompt ZeroProgress to present
> > > a dialog window with the supposed link target, out of which some of the
> > > random characters are stripped and supposed to be filled in by the user.
> > > If they succeed, zeroinst creates the link u0i/site.dom -> site.dom,KEY.
> > That means that all the other users have to trust one user. If you're
> That's what the current system looks like: One user does the initial
> contacting, everybody else trusts them. You could improve this by forcing
> that initial user to confirm a randomly chosen part of the site's public
> key. Of course, you have better security if you forget about those
> u0i/site.dom symlinks altogether and force your users to input the
> (hashed) public key (although the filer might still support them using a
> procedure like the one I described).
> If you want to protect users from each other, you are back at installing
> symlinks in their home directories. Where else would you want to store a
> per-user trust? What other (more convenient) form of a per-user trust
> than a symlink would you want to use? Actually, if you have envfs (that
> is, where /envfs/self/$VARIABLE is a symlink pointing to the contents of
> $VARIABLE), you can set up links
> /uri/0install/site.dom -> /envfs/self/$HOME/.zeroinst/site.dom
> Security comes first, convenience can always be added later. I still
> don't think that we actually need u0i/site.dom at all if we have
> u0i/site.dom,KEYHASH; still, automating the process of setting up
> (per-user or per-machine) symlinks cannot hurt.
> > doing that, you might as well just check the site key when the site is
> > first accessed (eg, ZeroProgress asks the user to confirm the key for each
> > site). In most cases, though, the user won't have any more reliable way to
> > check the key anyway.
> You can still make it a policy decision (that is, the machine's root
> makes this decision) whether you want to
> * display the whole hashed public key and ask the user to push either
> the "that's ok" or the "oops, let's bail out" button, or to
> * display just a (non-contiguous) part of the hashed public key and ask
> the user to fill in the missing characters and refuse to accept an
> incomplete answer.
> The latter possibility still opens up the problem that the key may stem
> from an insecurely transmitted web site, but at least the user may think
> twice before they commit their undesirable "I'll accept anything so I
> can run the software" answer. Perhaps, I am thinking too much of most
> users, but they may well disbelieve that they are actually supposed to
> copy some characters from a web page where their computer could have
> done that itself. Add a warning in that dialog box (similar to GPG's
> warnings), and we can establish quite a good deal of security.
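One possible shape for that fill-in-the-blanks dialog logic, sketched
here with hypothetical names (nothing in 0install actually implements
this):

```python
import random

def make_challenge(key_hash: str, blanks: int = 4):
    # Blank out a few randomly chosen characters of the hashed key;
    # the user fills them in from an out-of-band copy (web page, email).
    positions = sorted(random.sample(range(len(key_hash)), blanks))
    shown = "".join("_" if i in positions else c
                    for i, c in enumerate(key_hash))
    expected = "".join(key_hash[i] for i in positions)
    return shown, expected

def accept(answer: str, expected: str) -> bool:
    # An incomplete answer is refused outright.
    return len(answer) == len(expected) and answer == expected

shown, expected = make_challenge("d41d8cd98f00b204e9800998ecf8427e")
assert shown.count("_") == 4
assert accept(expected, expected)
assert not accept("", expected)
```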
> > > No, 22 characters is the maximum. That's because we trust in MD5 sums,
> > > anyway, and MD5 sums are 128 bits long. Every random character will
> > > encode 6 bits, that's 22 characters at most.
> > OK. Not sure where I got 30 from. It's still pretty long.
> I had already suggested to let the site maintainer make the decision.
> Some site maintainer may decide that 36 bits of their MD5 hash is
> enough, then you have only 6 extra characters. A very paranoid
> maintainer may think they actually need all the 128 bits, then you end
> up with 22 extra characters. A more reasonable default would be to use
> about 66 bits, so we are looking at 11 extra characters.
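The arithmetic behind those figures is simply a ceiling division by the
six bits each character carries:

```python
import math

def extra_chars(hash_bits: int) -> int:
    # Each character from a 64-symbol alphabet carries 6 bits,
    # so an n-bit hash prefix needs ceil(n / 6) characters.
    return math.ceil(hash_bits / 6)

assert extra_chars(128) == 22   # the full MD5 hash: 22 characters at most
assert extra_chars(36) == 6     # a lax maintainer's choice
assert extra_chars(66) == 11    # the suggested default
```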
> > > With the "u0i/site/.0inst-key-BLAH -> ../site" scheme, u0i/site either
> > > has a valid signature or doesn't.
> > An index can be signed by any number of keys, if required. We probably
> > want to separate out normal users (who aren't going to care about any of
> > this; the risks are too small) and companies working on security critical
> > stuff. They probably wouldn't want to allow any new sites to be accessed.
> I think this sounds like it could be reconciled with the above suggestion.
> Trouble was that if you ever are in the rare situation where you
> actually *want* to access two different sites with the same name, you
> cannot have
> u0i/site/.0inst-key-BLAHONE -> ../site and
> u0i/site/.0inst-key-BLAHTWO -> ../site
> You would rather want to have:
> u0i/site -> site,BLAHTWO and
> u0i/site,BLAHONE/ (as a directory, no symlink)
> u0i/site,BLAHTWO/ (as a directory, no symlink)
> where u0i/site points to the site that is more likely to be used.
> I agree that one should avoid giving two sites the same name at all
> costs, but perhaps in an Intranet, you have site == "192.168.0.1", and
> now, two organizations merge or something. You could even think of a
> zero install managed site list (similar to /etc/hosts) that would map
> the entire "site,SOMETHING" to an IP address (say, in case DNS fails).
> In the above situation, you could have site == "ziserver.localnet" and
> you have set that up at two otherwise unrelated locations. Your zero
> install managed site list will take care that everything is routed as
> intended as long as you may need to actually give your zero install
> servers proper names.
> > You can manually check the keys already. Just look in
> > /var/cache/zero-inst/site/.0inst-meta/trusted_key and compare it with
> > their key.
> In Unix, everything is a file (or a file name). There is no reason not
> to make this information available in the path name. A student may write
> their program and link it against u0i/internshipsite,KEY/lib. Then, they
> will (maybe) want to demonstrate this to a friend. Their friend doesn't
> know about internshipsite (nor its key), so they are lucky to find the
> trust information in the path name. You could have designed the choices
> system in such a way that a single huge database file kept every
> possible application's choices (much like the Windows registry). But why
> would you want to? It's much more elegant to reuse the file system in
> order to store so-called meta-data explicitly.
> Besides, I don't care about /var/cache/* most of the time because it is
> just a location that the operating system uses to do caching. Why would
> I want to do a manual check in some strange directory?
> > [services starting in parallel]
> > That's not what I was suggesting at all. You specify the dependancies, and
> > everything starts as soon as it can. There should be no sleeps at all
> > (unless something something actually goes wrong).
> With daemontools, you can still have that, if you think it actually
> accelerates anything (which is pretty rare). You put into one service's
> run script a command that tells it to sleep until it is explicitly
> roused by another service. You must nevertheless be prepared for other
> services to fail. That is why daemontools users customarily don't
> bother too much about dependencies and let the system resolve
> everything at run-time. Though you may find this hard to accept,
> daemontools services are usually up and running (way) faster, because,
> contrary to your typical SysV-style startup scripts, they are launched
> in parallel.
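If I understand the rouse-on-demand idea correctly, it could be sketched like this: a helper that a run script would use to poll for a flag file raised by another service. The flag path and the backgrounded "service" here are my own assumptions for the demo, not part of daemontools itself:

```shell
#!/bin/sh
# Block until another service raises a flag file, polling once a second.
wait_for() {
    until [ -e "$1" ]; do
        sleep 1
    done
}

FLAG=/tmp/zi-demo.ready
rm -f "$FLAG"
( sleep 1; : > "$FLAG" ) &    # a second "service" raises the flag
wait_for "$FLAG"              # we wake at most one second after it appears
echo "dependency ready"
```

In a real run script the `echo` would be an `exec` of the daemon, so supervise keeps watching the right process.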
> > If you don't know when it has started, you can't run the next step as soon
> > as possible. Eg, zero-install will be ready, but you don't start gdm
> > for a bit. This is slow and inefficient.
> Use the above technique. In the worst case, we lose one second per
> level of the dependency tree; the theoretical average is half of that,
> and a typical setup loses virtually nothing. How deep are your
> dependency trees?
> > Of course. But it's very, very rare to run out of memory while booting.
> There may be other reasons why your services go down. They can also go
> down at a different time, not only while booting. Are you sure your
> scripts can resolve all the possible dependency problems when services
> go down?
> > You can't slow down 99.99% of cases with sleeps and multiple-restarts just
> > for that.
> I don't. Besides, while one process sleeps for a second, other
> processes can make meaningful use of the CPU. daemontools may slow down
> *some* cases (which ones depends on the individual setup), and even
> those are negligible in practice. Your typical SysV boot script
> serializes all the daemons to be launched, meaning you don't even try
> to optimize anything; your only "benefit" is that the next daemon in
> the chain is launched absolutely as soon as possible.
> (Besides, I wouldn't really suggest starting to mess with Prof.
> Bernstein, since his reputation regarding complexity issues is kind of
> legendary.)
> > > What problem are you trying to solve here? What if the shell that
> > > executes the above gets killed?
> > What if daemon-tools gets killed?
> Look it up in the docs. Every service gets its own `supervise' process,
> which is usually and customarily started by a single `svscan' process.
> The latter is started from init (or, on some systems, *is* init). The
> system (usually; there are some incomplete cases) takes care of
> restarting processes within this tree. That gives you the additional
> benefit that "darn important" services are restarted the moment they
> fail. You don't have that with SysV scripts, and if you weigh a single
> lost second or so against the gain in availability (not: stability),
> that is, against the scenario where an administrator has to restart
> something manually, then your overall gains are worth your costs.
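The supervise tree being described boils down to one directory per service, each containing an executable `run` script. A minimal sketch (the paths and the daemon name are assumptions; on a real system svscan watches `/service`):

```shell
# Create one service directory with its run script. svscan spawns a
# supervise process per such directory; supervise re-runs ./run
# whenever the daemon exits.
mkdir -p /tmp/service-demo/mydaemon
cat > /tmp/service-demo/mydaemon/run <<'EOF'
#!/bin/sh
exec mydaemon 2>&1    # hypothetical daemon; exec so supervise tracks it
EOF
chmod +x /tmp/service-demo/mydaemon/run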
> > Something either has to run as process 1 (can't be OOM killed), or
> > automatically restarted by it if it does. But the slow
> > wait-and-restart bit must happen only in the failure case, not the
> > normal case.
> Why, it does. Moreover, it happens if your scripts were designed to
> ignore the problem: in that case, things will still work (while with
> SysV scripts, everything breaks), just a second later.
> > > You do not really think that a sleep for a single second slowed anything
> > > down, do you?
> > Even if it was 1 second, it would be too long. But in fact, it's more like
> > one second per-process (in series). It adds quite a bit to the boot time
> > (there's an article at IBM about it somewhere).
> Found that link. "In series" is wrong, though. Delays are at most linear
> in the depth of the dependency tree.
> > > Okay, but then every platform independent piece of software MUST use
> > > appropriate @PLATFORM@ symlinks. I would think that a lot of AppRun
> > > scripts prefer `uname -s`-`uname -m`. You should warn your users about
> > > this issue.
> > The ROX applications tend to do this. It predates Zero Install.
> It is still easy to get this confused. There should be a warning about
> it. Come on, a warning can't hurt, can it?
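To make the confusion concrete: many AppRun-style launchers pick their binary by `uname -s`-`uname -m`, which must match whatever string the site's @PLATFORM@ symlinks expand to. A sketch (the directory layout and program name are my own illustrations):

```shell
#!/bin/sh
# Compute the platform string the AppRun convention uses.
PLATFORM="$(uname -s)-$(uname -m)"
echo "$PLATFORM"    # e.g. Linux-x86_64
# A real AppRun would then do something like:
#   exec "$(dirname "$0")/$PLATFORM/myprog" "$@"
# If the @PLATFORM@ symlinks on the site expand to a different string,
# the lookup silently misses: exactly the mismatch worth warning about.
```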
> > [compatibility issues]
> > In both cases, it was extra security checks, not changes to the format,
> > which caused the incompatibility. Remove all the security checks, and old
> > clients will work again.
> Okay, call what caused the incompatibility whatever you like.
> Compatibility does not come for free just because you use one
> technique or another.
> > There's already a link to your site and to ROX. I'll move them to a
> > 'Links' directory, if noone's relying on them...
> Sounds reasonable.
> Public PGP key available; see http://www.kupke.za.net/public-pgp-key
> PGP Fingerprint: 87 F8 F2 46 59 39 CA E4 0D D4 C7 EE 62 04 E2 C9