Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2000 |     |     |     |     |     |     |     |     |     |     |     | 2   |
| 2001 |     | 5   |     | 2   | 3   |     |     | 2   |     |     |     |     |
| 2002 | 2   | 5   | 14  | 1   | 7   | 2   | 7   | 13  | 21  | 3   |     |     |
| 2003 | 6   |     |     |     |     |     |     |     | 5   | 5   |     | 3   |
| 2004 | 11  | 2   | 4   |     | 9   |     | 1   | 12  | 6   | 7   | 10  | 3   |
| 2005 | 22  | 20  | 5   | 10  | 15  | 14  | 9   | 3   | 7   | 1   | 3   | 12  |
| 2006 | 1   | 1   | 5   |     |     | 2   |     |     | 2   |     | 2   | 2   |
| 2007 |     | 3   |     |     |     |     |     |     |     |     |     |     |
From: Roland G. <rol...@me...> - 2002-05-02 00:49:13
I look after two apt-proxy servers - one at work, the other at home. The home system is connected via a 56K modem while work has a much better Internet connection :-) Periodically I burn a CD of the .deb files in the apt-proxy directory on the work system and copy the files into the apt-proxy directory at home. Apt-proxy seems to cope with this, and when I upgrade the home system it performs the rsync, which sends and receives only a couple of hundred bytes to confirm that the file is up to date. Still much better than performing an rsync against a previous version.

Unfortunately, when I upgrade another system at home, apt-proxy performs the same rsync over and over again. I'd really like to avoid the additional traffic - any suggestions? Today I discovered apt-proxy-import and will give it a try tonight to see if it avoids these extra rsyncs.

Cheers, Roland.
--
Tell me and I'll forget; show me and I may remember; involve me and I'll understand - Chinese Proverb.
From: Chris H. <chr...@gm...> - 2002-04-04 14:48:02
Manuel has uploaded 1.3.0 into unstable!

Thanks to everyone who contributed to testing this version. I hope that it will be able to get into Woody before release.

Chris

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Thu, 4 Apr 2002 14:06:05 +0200
Source: apt-proxy
Binary: apt-proxy
Architecture: source all
Version: 1.3.0
Distribution: unstable
Urgency: low
Maintainer: Chris Halls <chr...@gm...>
Changed-By: Chris Halls <chr...@gm...>
Description:
 apt-proxy  - Debian archive proxy and partial mirror builder
Closes: 83199 94226 140348 140826
Changes:
 apt-proxy (1.3.0) unstable; urgency=low
 .
   * Release new version into Debian archive.
     - Add HTTP/FTP backend support using wget
     - Add apt-proxy-import, a script to import .debs into the cache
     - Improve reliability when several clients are active
     - Add FAQ section to README
     See the changelogs for 1.2.9.x (below) for details.
     (closes: #83199, #94226, #140348, #140826)
Files:
 d549cbfdd037b76a01c7c10bc65ac79c 595 admin extra apt-proxy_1.3.0.dsc
 f13e7e9c04ec4395a1f7259f12d8891c 37949 admin extra apt-proxy_1.3.0.tar.gz
 802065ad51f1827e61e0217337288d1c 30192 admin extra apt-proxy_1.3.0_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD4DBQE8rFh+uTFz5I56lGcRAr+wAJ441CYXk9LmzF1kLsMQEi+igCNFKgCXalVJ
tryjQsvUDR3wZTQqyvT51g==
=+pDg
-----END PGP SIGNATURE-----
From: Chris H. <chr...@gm...> - 2002-03-27 15:43:57
Hi folks

In apt-proxy 1.2.9.5 and .6 I've been shaking out the remaining bugs. apt-proxy 1.2.2 hits Woody in 5 days and after that I'd like to upload the development version as 1.3.0 to Debian unstable. Is anyone on this list aware of any issues which need attention before this goes into unstable?

I don't know yet whether 1.3.0 will get into Woody before release - that depends on how stable it proves to be once sent into the wild. I was still seeing occasional file corruption messages (apt reports a gzip error during Package files update), but I can't reproduce them often enough to be sure that the problem is fixed in the newest version.

I would be very grateful if people seeing any error messages would provide me with:
- The type of backend (http/ftp/rsync)
- How many clients were using apt-proxy at the time
- If possible, an excerpt from apt-proxy.log

Thanks!
Chris

[for the apt-proxy development version, add this to your sources.list:]
deb http://apt-proxy.sourceforge.net/apt-proxy unstable/

--
Chris Halls | Frankfurt, Germany
** NEW EMAIL: chr...@gm... **
From: Chris H. <chr...@gm...> - 2002-03-19 12:07:28
Sorry, I didn't sign that.

Chris

On Tue, Mar 19, 2002 at 11:13:20AM +0100, Chris Halls wrote:
> Hi guys
>
> My ISP has shut down my email address, chr...@ni....
>
> I'm now reachable at chr...@gm...
>
> Chris

--
Chris Halls | Frankfurt, Germany
** NEW EMAIL: chr...@gm... **
Yahoo:hagga12000 ICQ:36940860 MSN:chr...@ni...
From: Chris H. <chr...@gm...> - 2002-03-19 10:13:29
Hi guys

My ISP has shut down my email address, chr...@ni....

I'm now reachable at chr...@gm...

Chris

--
Chris Halls | Frankfurt, Germany
** NEW EMAIL: chr...@gm... **
Yahoo:hagga12000 ICQ:36940860 MSN:chr...@ni...
From: Jason B. <ja...@ed...> - 2002-03-17 08:04:24
I'm using Net::Server, and it's pretty nice so far. I'm enjoying it a lot more than using POE. I've ported over my cached copy response code and got pipelining working nicely. It's a lot faster than without it. Doing pipelining while talking to Debian servers is another matter, and I don't think I'm going to dabble in trying that until after I get the request concurrency race issues for uncached files dealt with and the mirror fetching code itself ported.

Of course Net::Server isn't in Debian yet, but dh-make-perl fixes that easily enough, I think, at least for me. I'd rather use a framework that works than struggle along with only those tools that are available in Debian. I like both hands untied.

Memory usage is around 4800K for the parent process. I don't know that I can get it much lower than that, since I have no modules left I can throw away without getting sore. I guess AppConfig could go, but I rather like it. It is pretty complex, though. (And on examination AppConfig seems to use AutoLoader tricks to limit loading of unused routines, so chucking that won't buy much.)
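[A minimal sketch of the kind of Net::Server subclass being described here. The class name, port and request handling are hypothetical illustrations; only the subclass / process_request / run() pattern comes from Net::Server itself.]

```perl
package AptProxyServer;            # hypothetical name, not Jason's actual code
use strict;
use warnings;
use base 'Net::Server::Fork';      # forking personality; PreFork, Single etc. also exist

# Net::Server ties STDIN/STDOUT to the client socket while process_request
# runs, so the handler reads the request and writes the reply right here.
sub process_request {
    my $self = shift;
    my $request_line = <STDIN>;
    return unless defined $request_line;
    $request_line =~ s/\r?\n$//;
    # ... parse "GET /debian/pool/... HTTP/1.0", check the cache, stream the file ...
    print "HTTP/1.0 404 Not Found\r\nConnection: close\r\n\r\n";   # placeholder reply
}

AptProxyServer->run(port => 9999); # 9999 is the port apt-proxy conventionally listens on
```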
From: Jason B. <ja...@ed...> - 2002-03-14 00:13:54
http://seamons.com/net_server.html

But it's not in Debian presently. It does the things NetServer::Generic does, plus a bunch of other interesting things; it is set up to handle a config file, it'll handle daemonization stuff, and so forth. Sounds pretty cool.

Anything rewritten now likely wouldn't make Woody anyway, so as long as it were packaged for the next stable release after Woody, whatever that'll be, I'd think it would be okay. And people playing with unstable or testing would then have access to it, and you could always compile the source package to use it on Woody.

Thoughts?
From: Jason B. <ja...@ed...> - 2002-03-13 03:45:34
On Tuesday 12 March 2002 04:02 am, you wrote:
<snip>
> I'm no expert, but looking at the manpage for sysread I can't see how you
> can say 'pull in this amount of input until a newline or you read a maximum
> length' with sysread. Is it possible to specify nonblocking behaviour, so
> that it would return immediately even if the client hadn't sent LENGTH
> bytes?

I modified the example to use sysread() instead. Of course the echo server is a line-based server, so using sysread() breaks the input up, ignoring line and word boundaries as it goes. But it seems happy to pull in an amount <= whatever you specify. So I tested by pulling in 9 bytes, and it happily accepted less than that and looped when there was more data than that.

The forked client will happily sit around and wait for data forever, though, so using sysread won't be that useful for playing with data coming from a client requesting a file. I guess it won't be useful for preventing attacks from malicious servers, either, since you can't rely on the Content-Length returned from the server... I guess if the bug was that easy to solve, the author would've simply noted that you need to use sysread() instead of <STDIN>.

My concern is actually that using a while() construct on <STDIN> and having it read until a newline, which is the default behavior, will result in pieces of varying sizes being slurped into memory from a Debian mirror sending binary data, since who knows when a 'newline' will 'show up' in a pile of bits. That's why I was thinking of sysread(). At least then we could sysread() in chunks of a sane size up to Content-Length, which we'd know to be correct. (Assume we aren't talking to a malicious server, of course.)

Excuse my clueless posts. I've been sick this week.

> Chris
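[A rough sketch of the chunked sysread() loop being discussed. The helper name and chunk size are made up for illustration, and it assumes the Content-Length header has already been parsed from the mirror's response; it would be called once the headers have been read line by line.]

```perl
use strict;
use warnings;

# Relay exactly $content_length bytes from the mirror socket to the client,
# reading fixed-size chunks so binary .deb data never has to fit on a "line".
sub relay_body {                                   # hypothetical helper, not apt-proxy code
    my ($mirror_sock, $content_length, $chunk_size) = @_;
    $chunk_size ||= 8192;
    my $remaining = $content_length;
    while ($remaining > 0) {
        my $want = $remaining < $chunk_size ? $remaining : $chunk_size;
        my $got  = sysread($mirror_sock, my $buf, $want);
        die "read error: $!" unless defined $got;
        last if $got == 0;                         # mirror closed the connection early
        print STDOUT $buf;                         # STDOUT is the client connection here
        $remaining -= $got;
    }
    return $content_length - $remaining;           # bytes actually relayed
}
```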
From: Chris H. <chr...@ni...> - 2002-03-12 09:02:37
On Mon, Mar 11, 2002 at 07:03:56PM -0500, Jason Boxman wrote:
> I noticed this in the docs, and I don't know how difficult it would be to
> solve it:
>
> "Bug the first:
>
>   NetServer::Generic attempts to make it easy to write a
>   server by letting the programmer concentrate on reading
>   from STDIN and writing to STDOUT. However, this form of
>   i/o is line oriented. NetServer::Generic relies on the
>   buffering and i/o capabilities provided by Perl and
>   IO::Socket respectively. It doesn't buffer its own input."
>
> The examples show something like:
>
> while (defined ($tmp = <STDIN>)) { }
>
> Couldn't you do a sysread() instead so that you could pull in blocks of input
> from the client (apt-get) and the server (debian mirror) where STDIN will
> play a role?

I'm no expert, but looking at the manpage for sysread I can't see how you can say 'pull in this amount of input until a newline or you read a maximum length' with sysread. Is it possible to specify nonblocking behaviour, so that it would return immediately even if the client hadn't sent LENGTH bytes?

Chris
From: Jason B. <ja...@ed...> - 2002-03-12 00:24:06
The echo server is a cool little problem, which is in the docs. When I run it and play around, I find the total memory usage hovers around 3792K with 1420K shared. The example uses the fork() mode and each child has around 90% shared. perldoc NetServer::Generic to see the example.

The Debian package is libnetserver-generic-perl and it's in Woody, at least, though I don't think Potato has it. It's version 1.x.
From: Jason B. <ja...@ed...> - 2002-03-12 00:04:01
I noticed this in the docs, and I don't know how difficult it would be to solve it:

"Bug the first:

  NetServer::Generic attempts to make it easy to write a
  server by letting the programmer concentrate on reading
  from STDIN and writing to STDOUT. However, this form of
  i/o is line oriented. NetServer::Generic relies on the
  buffering and i/o capabilities provided by Perl and
  IO::Socket respectively. It doesn't buffer its own input."

The examples show something like:

while (defined ($tmp = <STDIN>)) { }

Couldn't you do a sysread() instead so that you could pull in blocks of input from the client (apt-get) and the server (debian mirror) where STDIN will play a role?
From: Jason B. <ja...@ed...> - 2002-03-11 02:43:51
On Tuesday 05 March 2002 01:30 pm, you wrote:
<snip>
> I got hold of Jason today on IRC and we discussed the whole thing. He's
> joined this list so we can discuss here.
>
> It turns out that there are some issues with aptcached that could cause
> problems:
>
> - aptcached depends on POE. It was so Jason could avoid having to do
>   events loops, but POE is still officially alpha, is not in Potato and has
>   a large runtime overhead, Jason reckoned around 2MB. I'm not against
>   using libraries to help simplify coding and make it more maintainable,
>   but I think that POE is too big a price to pay. Jason understands that
>   and is open to dropping POE, as long as someone else does the rewrite
>   :-)

I talked to Chris again last Thursday or Friday on IRC and we started looking for POE alternatives, which I'm very much open to. The usage of POE brings the memory requirements up to somewhere around 6MB, which is quite large. I've read through the docs of NetServer::Generic and it looks very promising. It supports multiple modes of operation, including select and fork, and a client mode which could replace my client-side POE stuff that actually talks to the HTTP mirrors.

> - It doesn't mirror the directory structure. That certainly simplifies
>   the code but has scalability problems and you don't end up with a partial
>   mirror.

I can see the benefits to both a partial mirror and (of course) scalability. What's the best way to go about maintaining a directory structure? Should the daemon handle it, or could a script be fashioned to deal with reading the package lists and figuring out where things should go, freeing the daemon from having to deal with rather large package lists?

> - HTTP 1.1 keep-alives aren't working (yet).
>
> So it looks like a fair amount must be rewritten anyway. Jason said he's
> open to help with whatever we decide to do, which I'm very grateful for.

Yes.

> If you have the time to work on a rewrite that's great by me - I'd rather
> have several people involved than just me. It helps keep bugs shallow and
> for the program to grow faster. The only thing I would ask for is to make
> sure others can work with the code too. In particular, documenting what
> code is supposed to do and helping to make sure it is production-ready.
<snip>
> Thanks,
> Chris (who's glad things are picking up)

Well, at least they were last week. :)
From: Manuel E. S. <ra...@de...> - 2002-03-07 09:28:45
On Tue, Mar 05, 2002 at 07:30:44PM +0100, Chris Halls wrote:
> Hi Manuel, sorry I took a while to get back to you.
[snip]
> I got hold of Jason today on IRC and we discussed the whole thing. He's
> joined this list so we can discuss here.
>
> It turns out that there are some issues with aptcached that could cause
> problems:
>
> - aptcached depends on POE. It was so Jason could avoid having to do
>   events loops, but POE is still officially alpha, is not in Potato and has
>   a large runtime overhead, Jason reckoned around 2MB. I'm not against
>   using libraries to help simplify coding and make it more maintainable,
>   but I think that POE is too big a price to pay. Jason understands that
>   and is open to dropping POE, as long as someone else does the rewrite :-)
>
> - It doesn't mirror the directory structure. That certainly simplifies the
>   code but has scalability problems and you don't end up with a partial
>   mirror.
>
> - HTTP 1.1 keep-alives aren't working (yet).
>
> So it looks like a fair amount must be rewritten anyway. Jason said he's
> open to help with whatever we decide to do, which I'm very grateful for.
>
> If you have the time to work on a rewrite that's great by me - I'd rather
> have several people involved than just me. It helps keep bugs shallow and
> for the program to grow faster. The only thing I would ask for is to make
> sure others can work with the code too. In particular, documenting what
> code is supposed to do and helping to make sure it is production-ready.

I don't have a whole lot of time right now, but I will be rewriting apt-proxy as time permits, and I will make the ongoing work publicly available. Maybe it could be hosted in apt-proxy's CVS.

> > And to start contributing on this, attached goes some scripts I wrote
> > to import .deb into a pool structure without Package lists.
>
> Thanks for the scripts. It's a nice idea, to build the path using dpkg
> --info and one I had not thought of. It only works for mirrors that use the
> current Debian mirror structure, and would have to be changed if the mirror
> structure were to be changed by the FTP maintainers in the future.
>
> The time taken to write e.g. the apt-proxy-import script was probably less
> than half the total time it took to get it into the package, because I had
> to write documentation and do stuff like handling typical errors and provide
> --help text.
>
> It would still be a fair amount of work to get your scripts finished. I'm
> not sure whether you are providing them for comment or as a contribution to
> the package. If you want to help out, I would need you to help get things
> nearer to a completed state. Otherwise, I would be committing myself to a
> lot of extra work which I probably wouldn't have the time to do.

It was meant as a contribution, but I accept the criticism. How about integrating it into "apt-proxy-import --mirror=pool", so if the file is not in the current listings it will use the pool mirror structure? I would also update the manpage.

Hmm, some places in "apt-proxy-import" read "echo" where it should be "echo -e" so it interprets escaped sequences; I will fix that too.

> Does that sound OK? I've set up an #apt-proxy channel on openprojects IRC
> so maybe we can all talk there sometime.

My nick on openprojects IRC is 'ranty'; feel free to invite me over to #apt-proxy whenever you see me around.

More TODO stuff :)
------------------

About apt-proxy, I would like it to generate Packages lists with the packages it already has, maybe even under a different release name:

  deb http://apt-proxy.x.x:9999/main/ unstable  # same as it is now
  deb http://apt-proxy.x.x:9999/main/ mirror    # Packages would be generated by apt-proxy

By making "mirror" a different release we could use APT::Default-Release "mirror" to try to stay with those packages and not download new ones, while still being able to install new stuff. apt-move does a pretty good job of generating Packages files efficiently; we could take a look. I know this is not going to happen soon, I just write it here for the record.

Take care

ranty
--
--- Manuel Estrada Sainz <ra...@de...> <ra...@at...>
------------------------ <ra...@so...> ---------------------------------
God grant us the serenity to accept the things we cannot change, courage to change the things we can, and wisdom to know the difference.
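[A rough sketch of how the "generate Packages for what is already cached" idea could be scripted around dpkg-scanpackages. The cache path, release name and architecture below are assumptions for illustration, not anything apt-proxy currently does.]

```perl
#!/usr/bin/perl
# Hypothetical sketch: publish a Packages.gz for the .debs already in the
# cache under an invented "mirror" release name. All paths are assumptions.
use strict;
use warnings;
use File::Path qw(mkpath);

my $cache_root = '/var/cache/apt-proxy/debian';    # assumed cache location
my $list_dir   = 'dists/mirror/main/binary-i386';  # invented release/arch

chdir $cache_root or die "chdir $cache_root: $!";
mkpath($list_dir);

# dpkg-scanpackages takes a binary directory and an override file;
# /dev/null is the usual stand-in when there is no override file.
system("dpkg-scanpackages pool /dev/null | gzip -9c > $list_dir/Packages.gz") == 0
    or die "dpkg-scanpackages failed: $?";
```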
From: Chris H. <chr...@ni...> - 2002-03-05 18:30:56
Hi Manuel, sorry I took a while to get back to you.

On Sat, Mar 02, 2002 at 12:58:09PM +0100, Manuel Estrada Sainz wrote:
> On Thu, Feb 28, 2002 at 09:14:32PM +0100, Chris Halls wrote:
>
> How about a 'word by word' rewrite of apt-proxy in perl so we don't have
> to worry about logic errors, and once done we will have plenty of space
> for improvements and performance enhancements.
>
> By 'word by word' rewrite I mean keeping all function names and as much
> of the structure as possible.

I don't know how much would be left after doing that :-) There are quite a few of the functions that are very rsync specific and could do with cleaning up, although I guess it would still at least provide a framework. If doing the rewrite myself, I guess I would consider starting it like that.

> > At the moment, I think aptcached looks the most promising. The code
> > looks nice and clean but I haven't actually installed it to give it a
> > test run yet.
>
> Well, having apt-proxy around makes me think that aptcached looks a bit
> 'short minded'. And anyway, if we do the rewrite proposed above we
> could take the nice parts of aptcached and use them in apt-proxy.
>
> Reading the goals of aptcached I think that we could convince Jason
> with a 'perl improved'/'http capable' version of apt-proxy. Especially
> if there is room for some of his code, I guess :)
>
> If I do the rewrite and it works, would you adopt it as upstream?

I got hold of Jason today on IRC and we discussed the whole thing. He's joined this list so we can discuss here.

It turns out that there are some issues with aptcached that could cause problems:

- aptcached depends on POE. It was so Jason could avoid having to do events loops, but POE is still officially alpha, is not in Potato and has a large runtime overhead, Jason reckoned around 2MB. I'm not against using libraries to help simplify coding and make it more maintainable, but I think that POE is too big a price to pay. Jason understands that and is open to dropping POE, as long as someone else does the rewrite :-)

- It doesn't mirror the directory structure. That certainly simplifies the code but has scalability problems and you don't end up with a partial mirror.

- HTTP 1.1 keep-alives aren't working (yet).

So it looks like a fair amount must be rewritten anyway. Jason said he's open to help with whatever we decide to do, which I'm very grateful for.

If you have the time to work on a rewrite that's great by me - I'd rather have several people involved than just me. It helps keep bugs shallow and for the program to grow faster. The only thing I would ask for is to make sure others can work with the code too. In particular, documenting what code is supposed to do and helping to make sure it is production-ready.

> And to start contributing on this, attached goes some scripts I wrote
> to import .deb into a pool structure without Package lists.

Thanks for the scripts. It's a nice idea, to build the path using dpkg --info and one I had not thought of. It only works for mirrors that use the current Debian mirror structure, and would have to be changed if the mirror structure were to be changed by the FTP maintainers in the future.

The time taken to write e.g. the apt-proxy-import script was probably less than half the total time it took to get it into the package, because I had to write documentation and do stuff like handling typical errors and provide --help text.

It would still be a fair amount of work to get your scripts finished. I'm not sure whether you are providing them for comment or as a contribution to the package. If you want to help out, I would need you to help get things nearer to a completed state. Otherwise, I would be committing myself to a lot of extra work which I probably wouldn't have the time to do.

Does that sound OK? I've set up an #apt-proxy channel on openprojects IRC so maybe we can all talk there sometime.

Thanks,
Chris (who's glad things are picking up)
--
Chris Halls | Frankfurt, Germany
Yahoo:hagga12000 ICQ:36940860 MSN:chr...@ni...
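[A hypothetical illustration of the path-building idea mentioned above - deriving the pool location from a .deb's own control fields with dpkg-deb. This is not the script that was attached to the thread, and it assumes the standard pool/<component>/<prefix>/<source>/ layout.]

```perl
#!/usr/bin/perl
# Hypothetical illustration only; not the scripts attached to this thread.
use strict;
use warnings;
use File::Basename qw(basename);

sub pool_path {
    my ($deb, $component) = @_;
    $component ||= 'main';                                  # assumed component
    chomp(my $package = qx(dpkg-deb -f \Q$deb\E Package));
    chomp(my $source  = qx(dpkg-deb -f \Q$deb\E Source));
    $source = $package unless length $source;               # no Source: field => same as Package
    $source =~ s/\s*\(.*\)\s*$//;                           # drop a "(version)" suffix from Source:
    # pool prefix is the first letter, or "lib" plus one letter for lib* sources
    my $prefix = $source =~ /^lib./ ? substr($source, 0, 4) : substr($source, 0, 1);
    return "pool/$component/$prefix/$source/" . basename($deb);
}

print pool_path($ARGV[0]), "\n" if @ARGV;
```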
From: Chris H. <chr...@ni...> - 2002-03-05 17:31:40
On Fri, Mar 01, 2002 at 09:39:53AM +1100, Jeremy Lunn wrote:
> It's rumoured that apt-proxy was written in shell because someone made a
> bet with Rusty that it couldn't be done. So he did it.

*grin* Ah, now we have a reason why apt-proxy is a shell script that actually makes sense :-D

> IMO the best way to write a proxy for Debian would be to write an
> apt-handler that knows it's going through a proxy. And also a server
> implementation.

I see some disadvantages:

- It only works with apt. I don't know about other people, but I often use apt-proxy during a new install. debootstrap uses busybox wget.
- Every box has to have client software installed and kept up to date

What advantage would it bring?

> I'm not sure what other projects are doing but I don't really have a
> need for a proxy because I have more bandwidth than hard drive space and
> I don't like to let my downloads fall below 1.0 x the average user
> download (my ISP gives a limit of 10 x the average user download).

Lucky you :)

Chris
From: Manuel E. S. <ra...@de...> - 2002-03-02 11:58:13
On Thu, Feb 28, 2002 at 09:14:32PM +0100, Chris Halls wrote:
> Hi Manuel,
>
> On Thu, Feb 28, 2002 at 08:10:24PM +0100, Manuel Estrada Sainz wrote:
> > Now to the point, what would be the drawbacks of using perl, python or
> > even C instead of shell script? At some point I may try to rewrite
> > apt-proxy using one of those and I would like to know what you think
> > about it.
>
> I do not really see apt-proxy in its present form as having a shelf life
> beyond the Woody release, because I expect (and hope) that another
> implementation will have caught up with apt-proxy, in terms of stability,
> ease of installation, and features. There isn't anything ready in all areas
> yet, so apt-proxy 1.x lives on.

How about a 'word by word' rewrite of apt-proxy in perl so we don't have to worry about logic errors, and once done we will have plenty of space for improvements and performance enhancements.

By 'word by word' rewrite I mean keeping all function names and as much of the structure as possible.

> It would probably be a good idea for me to put this info up on the home
> page, together with links to all the other projects.

Please do.

> I would have thought it would be better for people to work together on
> one of these than have so many little projects.

And I believe that apt-proxy should be the one; it is the most mature, and it goes right to the point.

> At the moment, I think aptcached looks the most promising. The code
> looks nice and clean but I haven't actually installed it to give it a
> test run yet.

Well, having apt-proxy around makes me think that aptcached looks a bit 'short minded'. And anyway, if we do the rewrite proposed above we could take the nice parts of aptcached and use them in apt-proxy.

Reading the goals of aptcached I think that we could convince Jason with a 'perl improved'/'http capable' version of apt-proxy. Especially if there is room for some of his code, I guess :)

If I do the rewrite and it works, would you adopt it as upstream?

And to start contributing on this, attached goes some scripts I wrote to import .deb into a pool structure without Package lists.

Take care

ranty
--
--- Manuel Estrada Sainz <ra...@de...> <ra...@at...>
------------------------ <ra...@so...> ---------------------------------
God grant us the serenity to accept the things we cannot change, courage to change the things we can, and wisdom to know the difference.
From: <je...@au...> - 2002-02-28 22:40:07
On Thu, Feb 28, 2002 at 08:10:24PM +0100, Manuel Estrada Sainz wrote:
> Now to the point, what would be the drawbacks of using perl, python or
> even C instead of shell script? At some point I may try to rewrite
> apt-proxy using one of those and I would like to know what you think
> about it.

It's rumoured that apt-proxy was written in shell because someone made a bet with Rusty that it couldn't be done. So he did it.

IMO the best way to write a proxy for Debian would be to write an apt-handler that knows it's going through a proxy. And also a server implementation.

I'm not sure what other projects are doing, but I don't really have a need for a proxy because I have more bandwidth than hard drive space and I don't like to let my downloads fall below 1.0 x the average user download (my ISP gives a limit of 10 x the average user download).

--
Jeremy Lunn
Melbourne, Australia
http://www.jabber.org/ - the next generation of Instant Messaging.
From: Chris H. <chr...@ni...> - 2002-02-28 20:14:47
Hi Manuel,

On Thu, Feb 28, 2002 at 08:10:24PM +0100, Manuel Estrada Sainz wrote:
> First of all thanks a lot for apt-proxy, it's a great program and an
> even better idea.

:-)

> I want to make clear that I don't want to start a flamewar and I'm not
> trying to force anything on anyone.
>
> apt-proxy runs quite slowly on the SGI Indy that I want to use as a
> proxy, and I realized that it is written in shell script using the
> "Little powerful tools" philosophy, which has poor performance. I
> never thought that a proxy could be written in shell script, quite
> impressive.
>
> Now to the point, what would be the drawbacks of using perl, python or
> even C instead of shell script? At some point I may try to rewrite
> apt-proxy using one of those and I would like to know what you think
> about it.

As you say, the idea itself is great, but apt-proxy is not all that wonderful. apt-proxy itself was almost unusable during 2001, because changes to the Debian archives (the pool structure) caused it to stop working as intended. During that period I think quite a few people tried it, found it was very difficult to set up and use, and some had a go at writing their own. I know of at least 3 other similar projects. I tried debproxy over on Sourceforge, but found the code to be too buggy to be fun to use.

When apt-proxy came up for adoption in Debian I took the chance to incorporate the fixes I had already made into the Debian distribution, with a view to making sure that Woody had a working apt-proxy. I do not really see apt-proxy in its present form as having a shelf life beyond the Woody release, because I expect (and hope) that another implementation will have caught up with apt-proxy, in terms of stability, ease of installation, and features. There isn't anything ready in all areas yet, so apt-proxy 1.x lives on. In the meantime I'm just having fun getting experience as a package maintainer, upstream developer and doing something useful at the same time :)

It would probably be a good idea for me to put this info up on the home page, together with links to all the other projects. I would have thought it would be better for people to work together on one of these than have so many little projects. At the moment, I think aptcached looks the most promising. The code looks nice and clean but I haven't actually installed it to give it a test run yet. You can have a look here:

http://talk.trekweb.com/~jasonb/software.shtml

Chris
--
Chris Halls | Frankfurt, Germany <><
Yahoo:hagga12000 ICQ:36940860 MSN:chr...@ni...
From: Manuel E. S. <ra...@de...> - 2002-02-28 19:10:33
Hello,

First of all thanks a lot for apt-proxy, it's a great program and an even better idea.

I want to make clear that I don't want to start a flamewar and I'm not trying to force anything on anyone.

apt-proxy runs quite slowly on the SGI Indy that I want to use as a proxy, and I realized that it is written in shell script using the "Little powerful tools" philosophy, which has poor performance. I never thought that a proxy could be written in shell script, quite impressive.

Now to the point, what would be the drawbacks of using perl, python or even C instead of shell script? At some point I may try to rewrite apt-proxy using one of those and I would like to know what you think about it.

Take care

ranty
--
--- Manuel Estrada Sainz <ra...@de...> <ra...@at...>
------------------------ <ra...@so...> ---------------------------------
God grant us the serenity to accept the things we cannot change, courage to change the things we can, and wisdom to know the difference.
From: Chris H. <chr...@ni...> - 2002-02-26 17:13:17
Stable
------
Stable version 1.2.1 is out, and will be uploaded to Debian unstable shortly. Changes:

* Reset access time on version n-1 when downloading n, so the cache cleaning algorithm works better (came up in the discussion for bug #131883)
* Add Contents-* to list of control files, so apt-proxy works with apt-file. (closes: #134217)
* (other minor packaging changes)

Development
-----------
I have put up the latest development version at SourceForge. I would be very grateful for people to install it and see if they find any problems. New features:

* http/ftp backend server support, using wget
* A new command, apt-proxy-import, which can import .debs from an existing directory into the apt-cache mirror directory. This is useful when installing apt-proxy for the first time on a machine which already has lots of .debs in the /var/cache/apt/archives directory, and for combining with offline retrieval methods.

Web page
--------
apt-proxy finally has a web page. This was in response to a large number of hits the day I announced 1.2.0 on SourceForge but no downloads :-o

URL: http://apt-proxy.sourceforge.net

aptable archive
---------------
You can now try out the latest and greatest apt-proxy from the comfort of your favourite apt frontend :-) Full details available in the README at:

http://apt-proxy.sourceforge.net/apt-proxy/README

As ever, have fun!
Chris
From: Chris H. <chr...@ni...> - 2002-02-06 11:54:07
apt-proxy 1.2.0 has been released.

This version features:
- More advanced cache management with MAX_VERSIONS and file corruption detection
- New log and config file command line options
- Bugfixes to streaming code and cache management
- Ready to go out of the box: user, cache directory and logfile are created during first installation
- Documentation updates
- Debian packaging directory included in source

You can download it from Sourceforge or from Debian unstable, once it has been uploaded there.

Many thanks to Martin Schwenke, Gerhard Muntingh, Stephen Rothwell, Thorsten Gunkel and others for their contributions and improvements.

Enjoy!
- Chris Halls

----
Changelog:

Date: Wed, 6 Feb 2002 12:20:12 +0100
Source: apt-proxy
Binary: apt-proxy
Architecture: source all
Version: 1.2.0-1
Distribution: unstable
Urgency: low
Maintainer: Chris Halls <chr...@ni...>
Changed-By: Chris Halls <chr...@ni...>
Description:
 apt-proxy  - Debian automatic partial mirror builder
Closes: 77929 81746 99259 109308 132439 132493
Changes:
 apt-proxy (1.2.0-1) unstable; urgency=low
 .
   * New upstream release
     - New config file parameter, MAX_VERSIONS, to limit the number of package versions to keep in the cache directory. Thanks to Martin Schwenke.
     - New parameters for apt-proxy for runtime setting of config file and log file locations. Thanks to Gerhard Muntingh. (closes: #77929)
     - Use the package filelists logic from Martin Schwenke to send the size of package files before downloading, meaning connection keep-alive logic can be used.
     - Fix the problem of files being corrupted during streaming (often seen as an MD5 sum error which would go away when the file was requested from apt-proxy again), by switching back to using dd instead of tail.
     - Check for corrupted .deb and .gz files in the cache before sending them to the client. (closes: #132493)
     - Bye bye .diff: The Sourceforge project is now up to date and includes the debian packaging.
   * Really rename main archive name, not just put it in the changelog
   * Add logrotate script
   * Clean up debian/rules and use DH_COMPAT=3
   * If setting up apt-proxy for the first time, do the following:
     - Create a user, aptproxy (closes: #99259)
     - Add an entry to inetd.conf, without disabling it (closes: #81746)
     - Create a log file owned by the user (closes: #109308)
     This is currently first-install only. I plan to introduce an upgrade path for existing users using debconf when I have time.
   * Move installation instructions that are no longer necessary when using the package into a separate file, INSTALL.
   * Remove extra manual installation instructions that are no longer necessary from README (closes: #132439)
   * Merge remaining information from README about finding rsync servers into the apt-proxy.conf manpage.
   * Add UK rsync servers to default apt-proxy.conf, thanks to Simon Huggins.
Files:
 878439fa02a828b31c63f8c7ef1cbddd 608 admin extra apt-proxy_1.2.0-1.dsc
 1a1ab801375bed63aad9d82ec415dacb 19665 admin extra apt-proxy_1.2.0.orig.tar.gz
 ec36d4f7ced8516a52e8f7b5fe8137ca 20 admin extra apt-proxy_1.2.0-1.diff.gz
 d83be46ba1a0d1731566b5d6c625cb4f 22874 admin extra apt-proxy_1.2.0-1_all.deb

----
SourceForge location:

Project "apt-proxy" ('apt-proxy') has released the new version of package 'apt-proxy'. You can download it from SourceForge by following this link:
<http://sourceforge.net/project/showfiles.php?group_id=12078&release_id=73659>
or browse Release Notes and ChangeLog by visiting this link:
<http://sourceforge.net/project/shownotes.php?release_id=73659>

--
Chris Halls | Frankfurt, Germany
From: Simon H. <hu...@ea...> - 2002-01-07 10:36:41
On Mon, Jan 07, 2002 at 11:06:56AM +0100, Chris Halls wrote:
> This release incorporates the latest changes in the Sourceforge CVS,
> plus several other changes and fixes. This release concentrates on
> making apt-proxy work properly and fixing packaging bugs. The next
> revision will deal with making the configuration of the user and cache
> directories simpler.

Compared to the previous version in Debian, the difference is incredible. The version in unstable doesn't seem to hang at 99% every time, doesn't seem to over-download for one package only to stall later, and seems to be a lot quicker than the previous version. I had played with CVS a few months ago and not got anything better.

Thanks for fixing this.

--
Simon Huggins \ Le doute est le commencement de la sagesse. [Doubt is the beginning of wisdom.] \ http://www.earth.li/~huggie/
htag.pl 0.0.19
From: Chris H. <chr...@ni...> - 2002-01-07 10:07:14
I have uploaded a new version of apt-proxy to debian unstable. You can find it at:

http://ftp.debian.org/debian/pool/main/a/apt-proxy

This release incorporates the latest changes in the Sourceforge CVS, plus several other changes and fixes. This release concentrates on making apt-proxy work properly and fixing packaging bugs. The next revision will deal with making the configuration of the user and cache directories simpler.

Unfortunately, the SourceForge CVS does not contain the new changes: I have tried to contact the original authors, but have had no response except for a brief ack from Stephen Rothwell in September.

The details of the new release are as follows:

apt-proxy (1.1.2-1) unstable; urgency=low

  * New maintainer - thanks to Andrew McMillan for sponsoring. (closes: #123499)
  * New upstream release (closes: #112029)
    - Some bug fixes, bashism cleanups
    - Upstream has been inactive for 6 months now. I have not had any contact since September.
  * Add Depends: grep-dctrl (closes: #76113, #78256, #114855, #99976, #121456)
  * Remove unneeded procmail dependency (closes: #76634, #116188)
  * No longer depend on bash.
  * Depend on netbase for update-inetd (closes: #75993)
  * Remove Recommends: ftp-server
  * Merged in Stephen Rothwell's changes
    - Use tail and stat if available, which is faster
  * Made improvements to the script:
    - Fix keep-alive handling: If apt-proxy knows the file size, the connection is kept open, otherwise it is closed. Also, fix hangs in certain situations by improving locking. (closes: #96517, #80839, #99927, #99948)
    - Add rsync timeout support in config file (RSYNC_TIMEOUT parameter)
  * Fix typo in apt-proxy.8, thanks to Uwe Hermann (closes: #116234)
  * Updated Standards-Version (no changes necessary)
From: Chris H. <chr...@ni...> - 2001-08-22 16:20:52
On Wed, Aug 22, 2001 at 04:44:43PM +0100, Ashley Collins wrote:
> I've just installed apt-proxy. When running apt-get pointed at it, the
> progress doesn't seem to update. I can see new packages being downloaded in
> the apt-proxy log, but apt-get shows the first file it started with.
>
> The total number of bytes next to the package name in apt-get is updating and
> the end of session stats are also updating.
>
> Has anyone come across this before? I'm using unstable.

Unfortunately, yes. You just missed a thread on debian-devel discussing apt-proxy in general.

Jason Thomas wrote [1]:
> apt-proxy currently has a problem because of the new pools system. It
> used to parse the Packages.gz files to figure out file sizes so that it
> could stream the download back to the client. This bug causes it to
> stream the data but not tell you how much you want; basically hitting
> control-C in apt and starting it again will get the file down after
> rsync has finished downloading the file.

There are several related bugs about the same problem. The most informative is #80839. The sourceforge CVS contains an attempt at a fix, but it doesn't work completely for me. Apt now moves on to the next file as it should, but it reports new errors instead.

Stephen Rothwell wrote [2]:
> I have been working on apt-proxy recently and have a version that we have
> been using for a while with some success. Could you try
> http://www.canb.auug.org.au/~sfr/apt-proxy and let me know if you have any
> better luck. If you have GNU tail and stat installed, it will use them to
> advantage. Note that the development environment is Linux and bash, so
> there may be some dependencies that I am unaware of - although I am told it
> will work using ash as a shell.

I haven't tried it out yet.

Chris Halls

[1] http://lists.debian.org/debian-devel/2001/debian-devel-200108/msg01464.html
[2] http://lists.debian.org/debian-devel/2001/debian-devel-200108/msg01480.html
From: Ashley C. <ash...@mo...> - 2001-08-22 15:46:46
Hello All,

I've just installed apt-proxy. When running apt-get pointed at it, the progress doesn't seem to update. I can see new packages being downloaded in the apt-proxy log, but apt-get shows the first file it started with.

The total number of bytes next to the package name in apt-get is updating, and the end-of-session stats are also updating.

Has anyone come across this before? I'm using unstable.

Many thanks.

Ashley
--
"If things seem a bit weird today, you've probably got CAPS on in vi..."