Thread: RE: [GD-General] Building pak/zip files
From: Chris B. <Chr...@ma...> - 2001-12-05 03:03:33
I might not have understood your specific problem, but... I was going to start looking at zip files sooner or later as well. Currently I just use pkzip to batch-build my release files, and I was going to do the same thing with the resources: maintain the assets on the file system as the first search point, then check the zip file. Pretty simple.

The power is in having the code check the file system first to get more recently updated files, then check the zip file if the file doesn't exist on the file system. Doing it this way means your artists won't have to specifically add a file to the zip archive to test it in game. Then each night \ build \ whatever, just add all the updated\new files into the zip file and delete them from the file system, which can be done in a batch manner with pkzip.

I've also written a small tool to rename files to their original name + the date\time so I can keep previous versions of a file. This has been invaluable in reverting to previous working versions of things. Let me know if you want this tool \ source etc. It's very simple of course.

Chris Brodie

> -----Original Message-----
> From: Brian Hook [mailto:bri...@py...]
> Sent: Wednesday, 5 December 2001 1:26 PM
> To: gam...@li...
> Subject: [GD-General] Building pak/zip files
>
> I'm looking into a way to automate the build of our asset files, which
> will probably just be in good old ZIP file format. Currently I'm
> storing everything in my EXE as a resource (don't ask), primarily
> because I can change what I build into my EXE (e.g. for demo or full
> versions) by a simple change of a resource symbol.
>
> However, for obvious reasons I'd like to move to a ZIP file that stores
> this information, and building one in a relatively error free way seems
> like the way to go. I'll likely rename the ZIP file just to avoid any
> obvious tinkering, but otherwise I'm generally unconcerned by tweaking
> of the assets.
>
> The obvious thing to do is to run make with various parameters, e.g.
> "make demo" or "make full" to build the various ZIPs. Is this what
> others are doing, or are any of you folks doing something spiffier?
>
> Brian
From: Brian H. <bri...@py...> - 2001-12-05 17:08:49
> I was going to start looking at zip files sooner or later as
> well. Currently I just use pkzip to batch build my release
> files, I was going to do the same thing with the resources as
> well. Just maintain the assets on the file system as the
> first search point. Then check in the zip file. Pretty simple.

Right, this is what we plan on doing. We have an abstract FileSystem class, and derived from it are a DiskFileSystem, ZIPFileSystem and a wrapper called a ChainedFileSystem. The latter does the "Check disk, then check ZIP" type thing (you register FileSystems with it in order of priority).

The issue isn't the ZIP file itself, it's automating the build. I could make a batch file or a make file that builds the appropriate ZIP, but I was hoping there was maybe a slightly more elaborate tool than just doing a Custom Build step in MSDEV and calling a batch file =)

Brian
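A minimal sketch of the chained-lookup idea described above; the class names echo the ones mentioned, but the interface is purely illustrative rather than anyone's actual engine code:

    // Abstract interface: each concrete FileSystem knows how to open a
    // named file, returning NULL if it doesn't have it.  A real
    // implementation would return its own file-handle abstraction
    // instead of FILE*, so a ZIPFileSystem could hand back decompressed
    // data, but FILE* keeps the sketch short.
    #include <stdio.h>
    #include <vector>

    class FileSystem {
    public:
        virtual ~FileSystem() {}
        virtual FILE *Open(const char *name) = 0;
    };

    class DiskFileSystem : public FileSystem {
    public:
        FILE *Open(const char *name) { return fopen(name, "rb"); }
    };

    // ChainedFileSystem tries each registered FileSystem in the order
    // it was added; registering a DiskFileSystem ahead of a
    // ZIPFileSystem gives the "loose files override the archive"
    // behavior discussed in this thread.
    class ChainedFileSystem : public FileSystem {
    public:
        void Add(FileSystem *fs) { m_systems.push_back(fs); }
        FILE *Open(const char *name) {
            for (size_t i = 0; i < m_systems.size(); ++i) {
                FILE *fp = m_systems[i]->Open(name);
                if (fp) return fp;
            }
            return 0;
        }
    private:
        std::vector<FileSystem *> m_systems;
    };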
From: Grills, J. <jg...@so...> - 2001-12-05 20:04:28
One thing we've found is that if you can order the data files in the ZIP in the same order as your game will load them (at least when that order is known, as during application startup or in level-based games), you can significantly decrease the startup time of your application. This assumes your table of contents is preloaded so you don't have to seek to it between files. Unfortunately, I don't know of any tools that let you tweak ZIP files in this manner, so we have our own format and tools.

Another advantage we get with our own format is that we may be able to duplicate data files within our ZIP-like file in multiple places, to further decrease load time by seeking to the nearest one. However, we haven't spent the time to write nice GUI tools like WinZip, which I would really like.

I would also suggest supporting an arbitrary set of directories and ZIP files to search. That way you never need to make ZIP files during development, and you can release new ZIP files to patch data without differentially patching the original ZIP.

Jeff Grills
Star Wars Galaxies
Technical Director, Austin Studio
Sony Online Entertainment Inc.
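A small sketch of the "preloaded table of contents" part, assuming a custom pack format; the struct fields and function names are made up for illustration. The directory is read in one contiguous read when the pack is mounted and sorted once, so each lookup afterwards is a binary search in memory plus a single seek to the file data:

    #include <string.h>
    #include <stdlib.h>

    struct TocEntry {
        char          name[64];   /* flattened asset path (truncated to fit) */
        unsigned long offset;     /* byte offset of the data in the pack */
        unsigned long size;
    };

    static int CompareEntries(const void *a, const void *b) {
        return strcmp(((const TocEntry *)a)->name,
                      ((const TocEntry *)b)->name);
    }

    /* Called once at mount time, after the TOC block has been read from
       the front of the pack file. */
    void SortToc(TocEntry *toc, size_t count) {
        qsort(toc, count, sizeof(TocEntry), CompareEntries);
    }

    /* Per-file lookup never touches the on-disk directory again. */
    const TocEntry *FindEntry(const TocEntry *toc, size_t count,
                              const char *name) {
        TocEntry key;
        strncpy(key.name, name, sizeof(key.name) - 1);
        key.name[sizeof(key.name) - 1] = '\0';
        return (const TocEntry *)bsearch(&key, toc, count,
                                         sizeof(TocEntry), CompareEntries);
    }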
From: Chris B. <Chr...@ma...> - 2001-12-05 23:12:50
Elaborate? I'm not sure I follow. I'm a lone developer with no art team, but I still try to work as if I were a small team, and such a team would want independence between the art packaging tools and the coder releases. Perhaps I've missed something though.

Some ideas I've had overnight about this topic:

Zip file tools don't generally give you control over what gets compressed and what doesn't (I think). This means that some things that you don't want compressed, such as JPEGs or MP3s, will end up double compressed, slowing load times on these files. Then again, you can store all files uncompressed, so the zip file acts like a pak file; in that case some simple things like WAVs or large XML documents (my level files, for example) won't be compressed.

It seems the solution to this problem is either creating (hopefully finding) a custom zip packing tool that allows you to specify 'store' or 'compressed' on each file, or using a custom file format, which for now I'm resisting.

In both cases memory-mapped files seem to be the best access method for pakfile-style access. I've heard that the combination of pakfiles + memory-mapped files can give you effective loading speed increases of nearly 200% over the normal C\C++ std functions.

Chris
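For what it's worth, the Win32 side of the memory-mapping idea is only a few calls; a bare-bones sketch (minimal error handling, and the pak layout on top of the mapping is entirely up to you):

    #include <windows.h>

    /* Map an entire pak file read-only and return a pointer to its
       first byte.  Individual entries are then just pointer + offset,
       and the OS pages data in as it is touched. */
    const unsigned char *MapPakFile(const char *path, HANDLE *outFile,
                                    HANDLE *outMapping)
    {
        *outFile = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (*outFile == INVALID_HANDLE_VALUE) return NULL;

        *outMapping = CreateFileMapping(*outFile, NULL, PAGE_READONLY,
                                        0, 0, NULL);
        if (!*outMapping) { CloseHandle(*outFile); return NULL; }

        return (const unsigned char *)
            MapViewOfFile(*outMapping, FILE_MAP_READ, 0, 0, 0);
    }

    /* When done: UnmapViewOfFile() the pointer, then CloseHandle() the
       mapping handle and the file handle. */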
From: Tom S. <tsp...@mo...> - 2001-12-05 23:41:29
> Zip file tools don't generally give you control over what
> gets compressed and what doesn't(I think).

Actually, I don't think this is necessarily true, nor is it true that you can't specify the order of the contents of the zip file. Check out http://www.winzip.com/wzcline.htm which adds command line support to WinZip. I believe the default operation when calling WinZip via the command line is to add files (-a) to an existing archive. With this in mind, I think that adding the files to a zip one at a time, with the compression options set to your liking (-e), would do everything that's been discussed here. The performance may be poor calling WinZip once for each file to be added, so if this is a problem, look into using "list files" to batch multiple adds into a single call.

Tom
From: Thatcher U. <tu...@tu...> - 2001-12-06 14:18:15
On Dec 05, 2001 at 05:41 -0600, Tom Spilman wrote:
> > Zip file tools don't generally give you control over what
> > gets compressed and what doesn't(I think).
>
> Actually i don't think this is necessarily true

It doesn't actually matter -- zip tools generally will choose the best method of compression for each file, so .jpg's or .mpg's will be stored uncompressed.

> nor that you cannot
> specify the order of the contents of the zip file. Check out
> http://www.winzip.com/wzcline.htm which adds command line support to WinZip.

Info-Zip's command-line zip apparently stores things in the order you specify. Also, it's open source under an X-style license, so you can hack stuff in or incorporate the code in your project. There's also zlib from the same project, designed for embedding. We use zlib at Oddworld for unpacking resource files.

http://www.info-zip.org/
http://www.gzip.org/zlib/

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com
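For reference, Info-Zip's command-line zip can handle both the per-file store-vs-compress decision and the ordering discussed above; a sketch of a nightly build step (the archive name and list file are made up for illustration):

    rem Files are added in the order listed in loadorder.txt (-@ reads the
    rem list of names from standard input), and -n tells zip not to
    rem compress files with the given suffixes, so they are just stored.
    zip -9 -n .jpg:.mp3 assets.pak -@ < loadorder.txt

A separate pass with -0 instead of -9 stores everything uncompressed, which gives you a plain pak-style archive.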
From: Brian S. <bs...@mi...> - 2001-12-06 19:40:09
Things like WinZip will (by default) try to compress everything, so you would want to set command line flags to disable compression for uncompressible things like sound files.

And I can't imagine a zip implementation that wouldn't add files to the end of the zip in the order that you insert them, because the alternative (shifting files down) would be so inefficient.

But the worst part of zip files is that the catalog lives at an unspecified distance from the end of the file, so opening an archive means seeking to somewhere near the end and walking around looking for the catalog. If you're really worried about speed, you probably want to have a post-process step where you generate another copy of that catalog in a spot where it can be loaded quickly... like placed on the DVD, right in front of the zip archive.

--brian
From: Gareth L. <GL...@cl...> - 2001-12-06 20:47:31
Hmm, you might want to look at the zip file definition again (http://www.pkware.com/support/appnote.html).

If you seek to the end of the file, and you make sure you don't have an archive comment or Zip64 records (both reasonable requirements for a game-based zip file), then the end-of-central-directory record occupies the last 22 bytes of the file: a 4-byte signature, the fields quoted below, and a 2-byte comment length.

<quote>
number of this disk: (2 bytes)
    The number of this disk, which contains the central directory end record. If an archive is in zip64 format and the value in this field is 0xFFFF, the size will be in the corresponding 4 byte zip64 end of central directory field.

number of the disk with the start of the central directory: (2 bytes)
    The number of the disk on which the central directory starts. If an archive is in zip64 format and the value in this field is 0xFFFF, the size will be in the corresponding 4 byte zip64 end of central directory field.

total number of entries in the central dir on this disk: (2 bytes)
    The number of central directory entries on this disk. If an archive is in zip64 format and the value in this field is 0xFFFF, the size will be in the corresponding 8 byte zip64 end of central directory field.

total number of entries in the central dir: (2 bytes)
    The total number of files in the .ZIP file. If an archive is in zip64 format and the value in this field is 0xFFFF, the size will be in the corresponding 8 byte zip64 end of central directory field.

size of the central directory: (4 bytes)
    The size (in bytes) of the entire central directory. If an archive is in zip64 format and the value in this field is 0xFFFFFFFF, the size will be in the corresponding 8 byte zip64 end of central directory field.

offset of start of central directory with respect to the starting disk number: (4 bytes)
    Offset of the start of the central directory on the disk on which the central directory starts. If an archive is in zip64 format and the value in this field is 0xFFFFFFFF, the size will be in the corresponding 8 byte zip64 end of central directory field.
</quote>

Also you should be able to assume that you don't use multidisk spanning.

_____________________
Regards, Gareth Lewin
Lead Programmer
Climax Development
Fareham Heights
Standard Way
Fareham
P016 8XT
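A minimal sketch of reading that record under the assumptions above (no archive comment, no Zip64, no disk spanning); the byte offsets follow the field list quoted in the previous message, and the function name is made up for illustration:

    #include <stdio.h>

    typedef struct {
        unsigned short disk_number;
        unsigned short central_dir_disk;
        unsigned short entries_this_disk;
        unsigned short total_entries;
        unsigned long  central_dir_size;
        unsigned long  central_dir_offset;
    } ZipEndRecord;

    /* Read the fixed 22-byte end-of-central-directory record from the
       tail of the archive.  Returns 1 on success, 0 on failure. */
    int ReadEndRecord(FILE *fp, ZipEndRecord *out)
    {
        unsigned char buf[22];
        if (fseek(fp, -22L, SEEK_END) != 0) return 0;
        if (fread(buf, 1, 22, fp) != 22) return 0;
        /* signature "PK\5\6" (0x06054b50 stored little-endian) */
        if (buf[0] != 0x50 || buf[1] != 0x4b ||
            buf[2] != 0x05 || buf[3] != 0x06) return 0;
        out->disk_number        = buf[4]  | (buf[5]  << 8);
        out->central_dir_disk   = buf[6]  | (buf[7]  << 8);
        out->entries_this_disk  = buf[8]  | (buf[9]  << 8);
        out->total_entries      = buf[10] | (buf[11] << 8);
        out->central_dir_size   = buf[12] | (buf[13] << 8) |
                                  ((unsigned long)buf[14] << 16) |
                                  ((unsigned long)buf[15] << 24);
        out->central_dir_offset = buf[16] | (buf[17] << 8) |
                                  ((unsigned long)buf[18] << 16) |
                                  ((unsigned long)buf[19] << 24);
        return 1;
    }

From there, one seek to central_dir_offset and one read of central_dir_size bytes pulls in the whole catalog.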
From: Brian S. <bs...@mi...> - 2001-12-06 22:03:37
Oops, you're right -- I was going off the code in "unzip.c" that ships with zlib (in the contrib\minizip directory). It seeks backwards because it's assuming that there might be a comment field. But like you say, you should be able to assume that you don't have comments in a game-based zip.

--brian
From: Roland <ro...@wi...> - 2001-12-07 21:33:53
Hey ZIP gurus out there....

I wrote some of these 'file-system-in-a-zip' classes myself, but one thing bothered me a bit: according to the AppNote.txt there are several compression methods which could potentially be used by WinZip (et al.). The most common apparently are 0 (no compression) and 8 (deflate). What about the others? Is there code available which deals with them? Is it necessary to deal with them?

I'm asserting wildly if the method is unknown (anything other than 0 or 8), and before I use a resource ZIP/PAK in release mode, I verify that each file is accessible. Of course this could all be avoided if I generated the ZIP/PAK file myself and made sure that only 0 and 8 are used... which would mean YATTW (yet another tool to write)...

What are you guys doing?

cu
roland
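A sketch of handling just those two methods with zlib; the wrapper function is illustrative, but the negative windowBits argument to inflateInit2() is the real trick -- it tells zlib to expect raw deflate data with no zlib header, which is what ZIP entries store (this is also how minizip does it):

    #include <string.h>
    #include "zlib.h"

    /* Unpack one ZIP entry into a caller-supplied buffer.
       Returns 1 on success, 0 on failure or unknown method. */
    int UnpackEntry(int method,
                    const unsigned char *src, unsigned long srcLen,
                    unsigned char *dst, unsigned long dstLen)
    {
        if (method == 0) {                       /* stored */
            if (srcLen != dstLen) return 0;
            memcpy(dst, src, srcLen);
            return 1;
        }
        if (method == 8) {                       /* deflate */
            z_stream zs;
            int ok;
            memset(&zs, 0, sizeof(zs));
            if (inflateInit2(&zs, -MAX_WBITS) != Z_OK) return 0;
            zs.next_in   = (Bytef *)src;
            zs.avail_in  = (uInt)srcLen;
            zs.next_out  = dst;
            zs.avail_out = (uInt)dstLen;
            ok = (inflate(&zs, Z_FINISH) == Z_STREAM_END);
            inflateEnd(&zs);
            return ok;
        }
        return 0;                                /* anything else: refuse */
    }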
From: Brian H. <bri...@py...> - 2001-12-19 21:03:08
So whilst engaged in yet another C/C++ flame fest on another mailing list, I ran into the subject of Eiffel. (Quick note: if you have an interest in C++'s weaknesses, please check out Ian Joyner's fantabulous paper on the subject at http://www.elj.com/cppcv3/ .)

I won't bore you folks with my typical rant about how languages kinda suck, and development environments suck even worse, and how the two really need to be the same thing (yeah, I know, Smalltalk...), but instead I'll just ask: anyone here actually use Eiffel relatively recently for any projects, and if so, what did you think about it?

I don't want this to become a flame fest about languages, but I'm not averse to intelligent discussion on the pros/cons of choosing a language and environment to enhance software robustness, stability and maintainability.

Brian
From: Thatcher U. <tu...@tu...> - 2001-12-19 22:45:58
On Dec 19, 2001 at 01:04 -0800, Brian Hook wrote:
>
> I won't bore you folks with my typical rant about how languages kinda
> suck, and development environments suck even worse, and how the two
> really need to be the same thing (yeah, I know, Smalltalk...), but
> instead I'll just ask: anyone here actually use Eiffel relatively
> recently for any projects, and if so, what did you think about it?

You probably have these links already, but:

Somebody using Eiffel for games (I just downloaded and built it, but the samples are super basic; none of them is an actual game):
http://jegl.sourceforge.net/

A student game-development contest at Stanford; the third-place project used OCaml, and maybe there are some other oddball languages in there:
http://graphics.stanford.edu/courses/cs248-videogame-competition/cs248-00/

SDL has a lot of different language bindings, so you might find some interesting stuff in these links:
http://www.libsdl.org/languages.html

> I don't want this to become a flame fest about languages, but I'm not
> averse to intelligent discussion on the pros/cons of choosing a language
> and environment to enhance software robustness, stability and
> maintainability.

I'm full of opinions and theories, like everyone else I'm sure. My semi-educated impression is that OCaml is the paragon of advanced features, Dylan is the modern heir to the Common Lisp heritage, Haskell is hardcore functional, Eiffel is hardcore OO, and Java/C# is the half-measure we're all going to be using instead.

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com
From: Brian H. <bri...@py...> - 2001-12-19 23:08:46
> I'm full of opinions and theories, like everyone else I'm sure. My
> semi-educated impression is that OCaml is the paragon of advanced
> features, Dylan is the modern heir to the Common Lisp heritage,
> Haskell is hardcore functional, Eiffel is hardcore OO, and Java/C# is
> the half-measure we're all going to be using instead.

I'm shying away from experimental languages that don't have comprehensive IDEs. I'm firmly of the belief that a solid development environment is PART of the language, not an add-on, because the language is strictly a syntax, which is only part of the equation. This is one reason I haven't really looked too hard at, say, Sather and OCaml, although they may be far more advanced than the last time I looked at them (18 months ago?). Eiffel popped up on my radar when I read Ian Joyner's "Critique of C++" (and I'm now reading his book, "Objects Unencapsulated").

I became more interested in the overall subject of tools/languages as an impediment to programmer productivity after I started using Obj-C (and Cocoa). My productivity when writing tools with Obj-C/Cocoa is roughly 5-10x that of using MFC/C++. I can put together a reasonably competent OpenGL app in an hour, and that's with the completely crappy docs that Apple forces us to sift through. After using Obj-C (which is, to paraphrase a friend of mine, "C with Smalltalk shoved up its ass"), I was thinking "Holy crap, there IS a better way, and this is it!". I guess Carmack was way, WAY ahead of his time, given his use of Obj-C and NextStep some 9 years ago.

Eiffel feels to me like a slightly better, more advanced version of Obj-C (er, or Smalltalk) that eschews Obj-C's desire to maintain C syntax compatibility and instead tries to implement ideas like design-by-contract to promote better software engineering.

The fundamental problem with using something like Obj-C, Smalltalk or Eiffel is that they're so detached from the hardware that getting high performance is fairly difficult. Not only that, but things like memory management can't be ignored, especially issues like memory fragmentation. I've heard of MUD servers that can run for two days then grind to a halt; not because of a leak, but because their address space has become so fragmented from numerous small allocations and deletions.

So C is still important because it lets you manage things very precisely. C++ as a "better C" probably serves a purpose. But for tool development, e.g. landscape editors and things like that, the higher level languages (and environments -- this is absolutely key) seem to provide a much faster way to develop real, robust applications that are also easy to maintain. I just don't think C++'s compromising straddle of the no-man's land between "close to the metal" and "high level abstractions" makes much sense, not for large scale projects that typically don't NEED to get close to the metal.

And I just find the whole notion of text files that store lines of code horribly quaint. It's the new millennium and we're still worrying about things like forward declarations and circular dependencies. I find that amazing.

Brian
From: Thatcher U. <tu...@tu...> - 2001-12-19 23:52:03
On Dec 19, 2001 at 03:09 -0800, Brian Hook wrote:
>
> I'm shying away from experimental languages that don't have
> comprehensive IDEs. I'm firmly of the belief that a solid
> development environment is PART of the language, not an add-on,
> because the language is strictly a syntax, which is only part of the
> equation.

Well, there's my favorite IDE, emacs. Supports every language you've ever heard of, and every feature (for better or worse). Not very fun to learn, though.

> My productivity when writing tools with Obj-C/Cocoa is roughly 5-10x
> that of using MFC/C++.

Cool... I take it Cocoa is what used to be NextStep...

> The fundamental problem with using something like Obj-C, Smalltalk
> or Eiffel is that they're so detached from the hardware that getting
> high performance is fairly difficult. Not only that, but things
> like memory management can't be ignored, especially issues like
> memory fragmentation. I've heard of MUD servers that can run for
> two days then grind to a halt; not because of a leak, but because
> their address space has become so fragmented from numerous small
> allocations and deletions.

Hm, well, that kind of thing sounds like a bug in the garbage collector, or a crummy allocator. A practical problem, nonetheless.

> I just don't think C++'s compromising straddle
> of the no-man's land between "close to the metal" and "high level
> abstractions" makes much sense, not for large scale projects that
> typically don't NEED to get close to the metal.

Agreed; although performance does matter, *especially* for large-scale projects. C++ has good compilers and discourages the use of high-level stuff; unfortunately it punishes us programmers. In theory OCaml should be able to make better code than C++ just about always, but in practice I'm sure it often doesn't.

> And I just find the whole notion of text files that store lines of code
> horribly quaint. It's the new millennium and we're still worrying about
> things like forward declarations and circular dependencies. I find that
> amazing.

I've heard this gripe a few times lately, and I really don't see it. What's wrong with files, really? Isn't it really C++'s declare-before-use that's the problem?

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com
From: Brian H. <bri...@py...> - 2001-12-20 00:12:36
> Well, there's my favorite IDE, emacs. Supports every
> language you've ever heard of, and every feature (for better
> or worse). Not very fun to learn, though.

Oooh, ooh, we're treading on dangerous ground here, so, um, I'll try to be polite =)

Emacs is great -- I used it exclusively from about 1991 to 1996. In fact, I used OS/2 2.x ONLY because it had a good Emacs. But in the end, Emacs is a text editor, and I think that modern programming is going to move us into custom tools, browsers, source code analyzers, design analyzers, and all manner of stuff that isn't going to be expressible as an Elisp function or an external command line tool. I know that Emacs has managed to do much of this so far through the ingenuity of its design and its users, but I have this suspicion that going forward the "text only" model isn't going to work. For example, you can't build a Smalltalk-like environment in Emacs (at least, I don't think you can, but hell, I'm sure someone will post a link to something similar =) ).

> Cool... I take it Cocoa is what used to be NextStep...

Yep.

> Hm, well, that kind of thing sounds like a bug in the garbage
> collector, or a crummy allocator. A practical problem, nonetheless.

In the case of Obj-C, there is no GC built into the language itself. The Cocoa frameworks have a ref counting system (autorelease pools) that is better than nothing, but not a true behind-the-back GC. Eiffel implementations have pluggable GC implementations, so you can choose which one makes sense for you.

I think C++ and Obj-C are far more susceptible to memory fragmentation problems (because they expose this information) than, say, Java and Eiffel. So I'm glad you pointed that out, because fragmentation shouldn't be a problem on a closed system that doesn't expose the memory subsystem's guts.

There's also the difference between "fast enough" and "as fast as possible". Games are always striving for the latter, but I think that a large class of applications need "fast enough" for 95% of their code and "as fast as possible" for the remainder. People used to complain that Obj-C's message passing (or even C++'s virtual functions or C's vararg functions) was "too slow". But computers are a bit faster than the 68040/25 machines that NextStep shipped on, so I'm not convinced that the slowness of Smalltalk is still THAT slow =)

> I've heard this gripe a few times lately, and I really don't
> see it. What's wrong with files, really? Isn't it really
> C++'s declare-before-use that's the problem?

The problem is more along the lines of a lack of global "see it all at once" analysis. The in-order parsing of various files, some with "declarations" and others with "definitions", is obsolete. We have computers to take away this kind of tedious bookkeeping, but instead I'm the one that has to say "Oh, Foo is a class that I define elsewhere".

In an ideal world you have class definitions that are just in some browsable database. You then fill in the blanks on what they do, the class invariants, preconditions, postconditions, etc., and then the tools should be able to spit out a complete specification based on your code. The idea of manual comments becomes an artifact of a by-gone age. Documentation is necessary, but manual comments (comment == "some text I put here to explain this bit of code") should be fairly rare.

The build environment can then parse your entire architecture and generate back-end code, comments, dependency analysis, etc., all within the confines of one environment that UNDERSTANDS the language, frameworks and tools instead of just PARSING the language and its tools. Emacs can be made to work with a syntax, but it can't be made to understand what a language is trying to do.

The common complaint against this is that people wedded to Emacs, make, grep, perl, etc. will find themselves at the mercy of whatever environment is provided. I don't have a problem with this -- I'm completely comfortable in MSDEV, ProjectBuilder, etc. I can make them do what I need them to do, warts and all. But I do at least understand their concerns, even if I do find them misguided =)

Brian
From: Thatcher U. <tu...@tu...> - 2001-12-20 01:33:12
On Dec 19, 2001 at 04:13 -0800, Brian Hook wrote:
> > Well, there's my favorite IDE, emacs. Supports every
> > language you've ever heard of, and every feature (for better
> > or worse). Not very fun to learn, though.
>
> Oooh, ooh, we're treading on dangerous ground here, so, um, I'll try to
> be polite =)
>
> Emacs is great -- I used it exclusively from about 1991 to 1996. In
> fact, I used OS/2 2.x ONLY because it had a good Emacs.
>
> But in the end, Emacs is a text editor, and I think that modern
> programming is going to move us into custom tools, browsers, source code
> analyzers, design analyzers, and all manner of stuff that isn't going to
> be expressible as an Elisp function or an external command line tool.

Whoah, I beg to differ. I mean, any computable function is expressible in elisp or even a shell script. I'm not saying it's the best tool for every job... (well, I'm thinking that, but not saying it out loud :)

> I know that Emacs has managed to do much of this so far through the
> ingenuity of its design and its users, but I have this suspicion
> that going forward the "text only" model isn't going to work.
>
> For example, you can't build a Smalltalk-like environment in Emacs
> (at least, I don't think you can, but hell, I'm sure someone will
> post a link to something similar =) ).

Heh, try "emacs smalltalk" in google... Some people (that even I consider a little crazy) do surprising things with emacs. Like, did you know that emacs (er, XEmacs anyway) is a graphical web browser?

> > I've heard this gripe a few times lately, and I really don't
> > see it. What's wrong with files, really? Isn't it really
> > C++'s declare-before-use that's the problem?
>
> The problem is more along the lines of a lack of global "see it all at
> once" analysis. The in-order parsing of various files, some with
> "declarations" and others with "definitions", is obsolete. We have
> computers to take away this kind of tedious bookkeeping, but instead I'm
> the one that has to say "Oh, Foo is a class that I define elsewhere".
>
> In an ideal world you have class definitions that are just in some
> browsable database. You then fill in the blanks on what they do, the
> class invariants, preconditions, postconditions, etc., and then the tools
> should be able to spit out a complete specification based on your code.

I don't really see the difference... the filesystem *is* a browsable database. In a language like Java, the tools *do* spit out a complete spec based on my code; in C++ I write the spec by hand, but as I see it that's a language/compiler issue, independent of what text editor I used or where the code is physically stored.

> The idea of manual comments becomes an artifact of a by-gone age.
> Documentation is necessary, but manual comments (comment == "some text I
> put here to explain this bit of code") should be fairly rare.

Huh? Code somehow becomes magically less complex?

> The build environment can then parse your entire architecture and
> generate back-end code, comments, dependency analysis, etc., all within
> the confines of one environment that UNDERSTANDS the language,
> frameworks and tools instead of just PARSING the language and its tools.
> Emacs can be made to work with a syntax, but it can't be made to
> understand what a language is trying to do.

Hm, sounds like Lisp. If you just write everything in elisp, then you've already got this magical environment :) I'm not seriously saying that's the way to go (I'm not even thinking it this time :).

In response to your general point, that good programming environments enhance productivity -- I agree completely. I just disagree with all your examples :)

> The common complaint against this is that people wedded to Emacs, make,
> grep, perl, etc. will find themselves at the mercy of whatever
> environment is provided. I don't have a problem with this -- I'm
> completely comfortable in MSDEV, ProjectBuilder, etc. I can make them
> do what I need them to do, warts and all. But I do at least understand
> their concerns, even if I do find them misguided =)

I immersed myself in it a couple years ago, and still use it for some things every day for work, but in my experience MSDEV is a step backwards in a few critical areas. And MSDEV is relatively good for a GUI IDE, as far as I know.

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com
From: Brian H. <bri...@py...> - 2001-12-20 02:02:03
> Some people (that even I consider a little crazy) do
> surprising things with emacs. Like, did you know that emacs
> (er, XEmacs anyway) is a graphical web browser?

M-x whatever-dude =)

Seriously though, I've known people that were crippled by not having their .el files, and basically wouldn't use a computer until A.) Emacs was installed and B.) they had their custom Elisp files loaded. To me it's like a bassist saying "I can't play unless I have my Wal". Tools matter, but they shouldn't matter so much that you're non-functional without your perfect choice of tool. I have a way easier time understanding the vi freaks, but now we're getting way off topic...

> I don't really see the difference... the filesystem *is* a
> browsable database.

Hey, and "cat > main.c" is a text editor =P

> > The idea of manual comments becomes an artifact of a by-gone age.
> > Documentation is necessary, but manual comments (comment == "some text
> > I put here to explain this bit of code") should be fairly rare.
>
> Huh? Code somehow becomes magically less complex?

No, but it becomes a bit more structured when the language itself encourages/enforces at least a modicum of discipline. Some call this fascist, I consider it reasonable (and I'm sure when C compilers started issuing more stringent warnings about type safety, people considered that fascist also).

> In response to your general point, that good programming
> environments enhance productivity -- I agree completely. I
> just disagree with all your examples :)

Fair enough, as long as you understand my point =)

A better way of putting what I'm thinking is that programming needs to become a more integrated process. Radical disconnects between design, analysis, versioning, and authoring, even if they're patched together by the wonder that is Emacs =P, don't help. It should become a process whereby the various tools can take advantage of domain-specific knowledge relating to the underlying language in use.

Ugh, I'm still saying this poorly -- how about this: programming should be moving outside the domain of a language/syntax and into the domain of a complete development process. And I'm not talking academic analysis of the problem, but taking into account where the problems occur in the real world. Much of what I liked about Ian Joyner's paper was that he emphasized "Here's a real problem, not a language lawyer issue". For example, small things like putting the return type before the identifier making browsing a pain in the ass:

    a() int;
    b() void *;
    c() const char *;

is easier for a human being to scan than:

    int a();
    void *b();
    const char *c();

But I digress (again). I'd like to see Extreme Programming as applied to development environments/work flow, not just philosophy and the act of typing in code. I don't want to deal with header files, precompiled headers, and all the ugliness of separate files for interface and implementation. I want the dependency analysis done by a custom tool. I want the language to concentrate on structure, not details. Yadda yadda yadda.

Eiffel seems to promise a lot of this (which sparked this thread), but I'm still suspicious of it because I've yet to see someone actually use it and then bag on it. Seems too good to be true, much like Obj-C (which, as I've stated, just plain rules all... on OS X).

As JC Lawrence mentioned, much of this is the argument of atomic vs. monolithic, but in this case I'm arguing that monolithic is better because the fundamental data has context that all the tools need to access. Yeah, I know, convert to XML and pipe it through the command line in a form all the tools understand. Bah.

Brian
From: Ivan-Assen I. <as...@ha...> - 2001-12-20 10:01:22
There are these guys who aggressively pimp their Smalltalk environment, http://www.genify.com/ , as a vehicle for quick game-dev prototyping - they have extensive DX8 bindings. There is a presentation on the Meltdown 2001 site:
http://www.microsoft.com/mscorp/corpevents/meltdown2001/presentations.asp

Maybe somebody has tried it out?

I personally have this problem with trying out languages in the real world: I work with a bunch of ultra-conservative colleagues who are barely convinced that we should finally let go of the "void *p, DWORD dwSize"-style code, who whine at the smallest mention of a template somewhere in the code, and who tend to blame long STL compilation times personally on me :-)
From: Thatcher U. <tu...@tu...> - 2001-12-20 14:38:43
On Dec 19, 2001 at 06:03 -0800, Brian Hook wrote:
>
> Seriously though, I've known people that were crippled by not having
> their .el files, and basically wouldn't use a computer until A.) Emacs
> was installed and B.) they had their custom Elisp files loaded. To me
> it's like a bassist saying "I can't play unless I have my Wal". Tools
> matter, but they shouldn't matter so much that you're non-functional
> without your perfect choice of tool.

What's the saying... "A poor craftsman blames his tools."

But don't you think platform-specific integrated environments encourage that behavior rather than discourage it?

> A better way of putting what I'm thinking is that programming needs to
> become a more integrated process. Radical disconnects between design,
> analysis, versioning, and authoring, even if they're patched together by
> the wonder that is Emacs =P, don't help. It should become a process
> whereby the various tools can take advantage of domain-specific
> knowledge relating to the underlying language in use.
>
> Ugh, I'm still saying this poorly -- how about this: programming should
> be moving outside the domain of a language/syntax and into the domain of
> a complete development process.

Well... the problem is, target environments vary. The more my tools know about Windows, the less well adapted they are for Mac or PS2 or Palm.

One approach is to make everything in the world use a common run-time --> e.g. Java and .NET. Lots of the .NET verbiage sounds like what you're advocating. It's sort of the Lisp Machine legacy... everything understands the code, even our CPU, and the code is data. And I don't think it's necessarily a bad thing; lots of incompatibilities are totally gratuitous and just get in the way. Nevertheless, there are real and legitimate forces that make one-size-fits-all very difficult... like, devices are getting smaller (cellphones, yadda yadda).

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com
From: Brian H. <bri...@py...> - 2001-12-20 17:49:44
|
> What's the saying... "A poor craftsman blames his tools."

True, but there's a wide spectrum here from making shitty software to making good software to making good software quickly and robustly. There are many examples of good software written using primitive tools. I feel I can write good software with any language or tools presented; I will rarely blame my tools for preventing me from getting the job done.

That said, I will always lust after the better set of tools that let me get my job done better. Les Claypool would sound great on a $99 pawn shop bass, but probably would sound better and (at least to him) play better on one of his fancy, one-of-a-kind instruments.

Cocoa and Obj-C were an epiphany for me, because they made programming a GUI application fun, interesting and not tedious. It was very eye opening. I'd like to see that propagate over to general game (or even software) development.

> But don't you think platform-specific integrated environments
> encourage that behavior rather than discourage it?

Possibly, but I guess I'm thinking more along the lines of "tool is the language is the environment", i.e. invent a _system_ of development, not just the component pieces (no pun intended) and bolt them together.

> Well... the problem is, target environments vary. The more
> my tools know about Windows, the less well adapted they are
> for Mac or PS2 or Palm.

That's not what I'm suggesting at all! Choice of target environment should be reasonably irrelevant (well, building code for PalmOS is going to require more than just retargeting some compiler options); I'm talking about choice of "source environment". Whether your system outputs JVM, C, asm or obj code isn't particularly relevant except as a deployment concern. I'm only thinking about the creation portion of the process.

I suppose you could make an Emacs set of bindings and modes that would support all the things I'd like to see in such an environment, so I'll just shut up now and say "Y'all Emacs people are all freaks!" =)

Brian |
From: Thatcher U. <tu...@tu...> - 2001-12-20 18:49:04
|
On Dec 20, 2001 at 09:50 -0800, Brian Hook wrote:
> I suppose you could make an Emacs set of bindings and modes that would
> support all the things I'd like to see in such an environment, so I'll
> just shut up now and say "Y'all Emacs people are all freaks!" =)

Heh, I can't argue with that :)

To try to get back onto the constructive thread that I think you intended:

1) what are specific examples of great dev environment features (could be great features in otherwise ordinary environments)?

2) what are specific examples of dev environments that "put it all together" and boost productivity?

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com |
From: Kent Q. <ken...@co...> - 2001-12-20 18:42:14
|
Brian Hook wrote:
> But I digress (again). I'd like to see Extreme Programming as applied
> to development environments/work flow, not just philosophy and the act
> of typing in code. I don't want to deal with header files, precompiled
> headers, and all the ugliness of separate files for interface and
> implementation. I want the dependency analysis done by a custom tool.
> I want the language to concentrate on structure, not details. Yadda
> yadda yadda.

I absolutely agree with this. I think the biggest problem with C++ is that it's tied to an obsolete linking model -- almost all its idiosyncrasies arise because they didn't want to try to fix both the compilation and the linking at the same time. It's long past time we stopped having to cope with broken linkers that make us be explicit about include files and dependencies. The compiler should just figure that out.

We did this in MindRover's ICE language -- it's completely implicit linking. In fact, we went farther than Java did, and probably overdid it, because in our world it's hard to figure out the minimal set of components you need to ship your program to someone else!

> Eiffel seems to promise a lot of this (which sparked this thread), but
> I'm still suspicious of it because I've yet to see someone actually use
> it then bag on it. Seems too good to be true, much like Obj-C (which,
> as I've stated, just plain rules all...on OS X).
>
> As JC Lawrence mentioned, much of this is the argument of atomic vs.
> monolithic, but in this case I'm arguing that monolithic is better
> because the fundamental data has context that all the tools need to
> access. Yeah, I know, convert to XML and pipe it through the command
> line in a form all the tools understand. Bah.

My wish is for a compiler that builds a database that the whole tool chain can use. But I don't want to be locked into a single-vendor solution, because their priorities will never be my priorities. In the ideal world, the database would be accessible through an API (preferably something like SAX, so you could leverage XML knowledge) so you could write additional tools that didn't have to replicate the text processing of a compiler. In particular, the concept of explicit header files and declaring linkage & stuff is way obsolete, IMO.

I think that Java is coming closer to this than most solutions...and you can find many different languages now that compile to a Java VM. Jython, for example.

I like your vision of an integrated source environment, and if I had such a thing with a cross-platform target and a full set of libraries, I'd be pretty interested. But I've jumped on the flavor-of-the-month before (Icon, Smalltalk...got into C++ in 1988, which was at least 5 years too early, maybe 10...got into Java in 1995, which was again too early...) and now I'm wary of language and development platforms that don't have enough general interest to sustain a business. I might even go back and look at Smalltalk again -- there are some grownup versions of it around now.

Whoever made the comment about "it's a poor workman who blames his tools" -- there's a HUGE difference between blaming your tools for poor results and wanting sharper tools. There may not be a magic bullet, but there sure as hell are more productive development environments. The question to me has always been whether that productivity on an individual workstation can translate effectively into productivity for delivery of a shippable consumer product.
And just to get in my shot...to me, using emacs is like asking for a car and being told "here's a pickup truck full of spare parts -- if it doesn't meet your needs, you can take it apart and rebuild it all by yourself!" I don't have time for that much freedom.

Kent

--
-----------------------------------------------------------------------
Kent Quirk                   | MindRover: "Astonishingly creative."
Game Architect               | Check it out!
ken...@co...                  | http://www.mindrover.com/
_____________________________|_________________________________________ |
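(Kent's "compiler that builds a database" is easier to picture with a sketch. Everything below is hypothetical: the CodeDatabase and Symbol types are invented for illustration, and the hard parts -- persistence, incremental updates, and the SAX-style access API he mentions -- are waved away. The only point is that a browser, a dependency analyzer, and a linker replacement could all be clients of one structured store instead of each re-parsing source text.)

    // Entirely hypothetical sketch; these types exist in no real toolchain.
    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Symbol {
        std::string name;        // e.g. "Renderer::Draw"
        std::string kind;        // "class", "function", "member", ...
        std::string defined_in;  // translation unit that defines it
    };

    class CodeDatabase {
    public:
        // The compiler would call these as it compiles; here we fake it.
        void Add(const Symbol& s) { symbols_[s.name] = s; }
        void AddReference(const std::string& from_file, const std::string& symbol)
        {
            references_[symbol].push_back(from_file);
        }

        // A browser asks "where is this defined?"; a dependency tool asks
        // "who references it?" -- same store, no header files involved.
        void Report(const std::string& name, std::ostream& out) const
        {
            std::map<std::string, Symbol>::const_iterator it = symbols_.find(name);
            if (it == symbols_.end()) {
                out << name << ": unknown symbol\n";
                return;
            }
            out << name << " (" << it->second.kind << ") defined in "
                << it->second.defined_in << "\n";
            std::map<std::string, std::vector<std::string> >::const_iterator r =
                references_.find(name);
            if (r != references_.end())
                for (std::size_t i = 0; i < r->second.size(); ++i)
                    out << "  referenced by " << r->second[i] << "\n";
        }

    private:
        std::map<std::string, Symbol> symbols_;
        std::map<std::string, std::vector<std::string> > references_;
    };

    int main()
    {
        CodeDatabase db;
        Symbol s = { "Renderer::Draw", "member function", "renderer.cpp" };
        db.Add(s);
        db.AddReference("game.cpp", "Renderer::Draw");
        db.AddReference("editor.cpp", "Renderer::Draw");
        db.Report("Renderer::Draw", std::cout);
        return 0;
    }

Making that store vendor-neutral and scriptable, rather than a private file format locked inside one IDE, is the part Kent is asking for.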
From: Thatcher U. <tu...@tu...> - 2001-12-20 19:23:08
|
On Dec 20, 2001 at 01:38 -0500, Kent Quirk wrote:
>
> Whoever made the comment about "it's a poor workman who blames his
> tools" -- there's a HUGE difference between blaming your tools for poor
> results and wanting sharper tools.

That was me, and I didn't mean it that way! I meant it as a criticism of the people Brian mentioned who can't do any productive work without their .emacs file. I.e. it's their fault, not emacs' fault or anyone else's. (Especially in this case, since emacs is free and easy to install...)

Also, I think that adage has another important meaning, which is that a good workman will seek out effective tools; i.e. I think this is a worthwhile discussion.

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com |
From: J C L. <cl...@ka...> - 2001-12-20 19:30:49
|
On Thu, 20 Dec 2001 13:38:04 -0500 Kent Quirk <ken...@co...> wrote:

> And just to get in my shot...to me, using emacs is like asking for
> a car and being told "here's a pickup truck full of spare parts --
> if it doesn't meet your needs, you can take it apart and rebuild
> it all by yourself!" I don't have time for that much freedom.

One of the things which regularly surprises me about other developers is the cavalier way they treat their development tools and environments. Specifically: they don't invest in them. Yes, they'll learn the tool, they'll figure out how to work productively with it, but they won't invest in making it closer to ideal for them with the idea that they'll then be able to carry that value forward over time, potentially over extended periods of time. Instead, all personal tool investment seems to be on a 3-year fixed depreciation schedule -- after that (or sooner) you throw it away and start over.

I don't grok this. We blather on about reusable code, about maintainability, and use versioning systems on our products which often have code lifetimes (esp. outside of the game arena) extending closer to decades than 3 years, BUT we rarely ever apply those same ideas and techniques to our own work environments. Why?

ObNotes: Yes, I'm an XEmacs user. I've been incrementally building and adapting my .xemacs RC for almost 10 years. It's all under source control and trivial to check out or update from anywhere. It grows/changes rarely (a couple of times a year), but steadily improves. It's still not perfect, not by any shot, but it's largely set up exactly the way I want, and the curve is asymptotic to some (moving) definition of "perfect".

--
J C Lawrence                ---------(*)     Satan, oscillate my metallic sonatas.
cl...@ka...                                  He lived as a devil, eh?
http://www.kanga.nu/~claw/                   Evil is a name of a foeman, as I live. |
From: Thatcher U. <tu...@tu...> - 2001-12-20 21:45:05
|
On Dec 20, 2001 at 01:38 -0500, Kent Quirk wrote:
>
> And just to get in my shot...to me, using emacs is like asking for a car
> and being told "here's a pickup truck full of spare parts -- if it
> doesn't meet your needs, you can take it apart and rebuild it all by
> yourself!" I don't have time for that much freedom.

That's what I used to think, until I really made a commitment to learn emacs. These days emacs knows a lot, straight out of the box. It knows about RCS and CVS, it knows how to indent and highlight various languages, how to run a build tool and bring you to the errors, how to run a source-level debugger, how to auto-complete. It's got rectangle modes, outline modes, text modes, spell checkers, yadda yadda. It can go out to the 'net and edit files via ftp. None of this stuff requires any customization; you just have to dig a bit in the docs to learn the existence of the feature, and the keybindings and/or command names. If you're on Windows, it also helps to get cygwin for some of the default supporting tools (grep, cvs, ...).

I'm not really an emacs old-timer, so maybe this level of integration is recent, or maybe it's always been there. My general philosophy is to *not* fiddle with my .emacs too much; instead upgrade my wetware -- learn the default keys & commands, and only customize if I really have to. Sure, the keybindings are not ideal and were designed by a crazy person, but they work for touch-typing, and hey, I use a QWERTY keyboard too.

--
Thatcher Ulrich <tu...@tu...>
http://tulrich.com |