Thread: [sleuthkit-developers] Re: [sleuthkit-users] Split Image Question
From: Brian C. <ca...@sl...> - 2005-02-01 16:33:45
Ok, globbing it is. I was under the impression that globbing did not necessarily return results in sorted order, but the documentation says that it does. There seem to be two "issues" at this point:

1. I feel that there should be a maximum command line size, but I cannot seem to find one. I just ran some tests that passed over 60KB of data in arguments (390 file names, each over 160 bytes long) and there were no problems. This was on OS X, so I'm not sure if other OSes are different. Anyone know?

2. There is a maximum number of files that a process may have open. On OS X, it seems to be 253. I am not opening a file until it is needed, but if you run 'dls' on the file system then all of the images will be needed. I would rather not develop some form of caching system with bookkeeping and a FIFO scheme. Do people have over 250 split images? That is over 160 GB for CD images and over 500 GB for 2 GB images. (I can't imagine anyone burning 250 CDs.) In the future, this limitation will probably have to be removed, but I'm assuming that it will be fine for the first version.

brian
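A quick way to check both limits on a given system (a sketch using the standard getconf utility and the shell's ulimit builtin; the printed values will vary by OS and configuration):

    getconf ARG_MAX    # max bytes of exec() arguments (plus environment, on many systems)
    ulimit -n          # per-process open file descriptor limit

On OS X the default descriptor limit is 256, which would explain the 253 figure above: 256 minus stdin, stdout, and stderr.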
From: Seth A. <sa...@im...> - 2005-02-02 21:41:26
On Tue, Feb 01, 2005 at 11:38:23AM -0500, Brian Carrier wrote:
> > 1. I feel that there should be a maximum command line size, but I
> > cannot seem to find one. I just ran some tests that passed over 60KB
> > of data in arguments (390 file names, each over 160 bytes long) and
> > there were no problems. This was on OS X, so I'm not sure if other
> > OSes are different. Anyone know?
>
> Actually, I guess this should be shell dependent and not OS dependent.
> I am using:

I don't expect any shells would implement the limit on their own. Some experimentation shows that it is around 128k on my SuSE 9.2 system with a 2.6 kernel and on my Debian 3.0 system with a 2.2 kernel. (The exact limit varies based on the environment as well as the command line arguments.)

I've heard that some Unix systems have limits of one megabyte! Others (such as an old SCO 3.2.4.2 system I used to use) have much lower limits, perhaps 8k or 16k or so... I've since reclaimed those brain cells.

For 'numbered files' support, I've found that there are -two- popular naming styles:

foo.1
foo.2
foo.3
...
foo.10
foo.11
...

and

foo.001
foo.002
...
foo.010
...

I've found that the first style can be easily handled with seq(1):

for f in `seq 1 100` ; do echo foo.${f} ; done

The second style is only slightly harder:

for f in `seq 1 100` ; do F=`printf %03d $f` ; echo foo.${F} ; done

Analogues in perl are left as an exercise for the reader. :)
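A quick illustration of why the zero-padding matters for the globbing approach (hypothetical file names; shell globs expand in lexicographic order):

    $ touch foo.1 foo.2 foo.10
    $ echo foo.*
    foo.1 foo.10 foo.2        <- out of order once the numbers pass 9

    $ touch bar.001 bar.002 bar.010
    $ echo bar.*
    bar.001 bar.002 bar.010   <- padded names sort correctly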
From: David C. <dav...@gm...> - 2005-02-03 07:26:10
seq -w will solve that problem a little more elegantly :)

On Wed, 2 Feb 2005 13:41:58 -0800, Seth Arnold <sa...@im...> wrote:
> I've found that the first style can be easily handled with seq(1):
>
> for f in `seq 1 100` ; do echo foo.${f} ; done
>
> The second style is only slightly harder:
>
> for f in `seq 1 100` ; do F=`printf %03d $f` ; echo foo.${F} ; done
>
> Analogues in perl are left as an exercise for the reader. :)
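That is, with GNU seq the -w flag zero-pads the output to equal width, so the printf step drops out (a sketch; foo is a placeholder name):

    for f in `seq -w 1 100` ; do echo foo.${f} ; done
    # seq -w 1 100 emits 001 002 ... 100, padded to the width of the
    # largest value, so no separate printf formatting is needed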
From: Brian C. <ca...@sl...> - 2005-02-01 16:38:33
On Feb 1, 2005, at 11:33 AM, Brian Carrier wrote:
> 1. I feel that there should be a maximum command line size, but I
> cannot seem to find one. I just ran some tests that passed over 60KB
> of data in arguments (390 file names, each over 160 bytes long) and
> there were no problems. This was on OS X, so I'm not sure if other
> OSes are different. Anyone know?

Actually, I guess this should be shell dependent and not OS dependent. I am using:

% bash --version
GNU bash, version 2.05b.0(1)-release (powerpc-apple-darwin7.0)
Copyright (C) 2002 Free Software Foundation, Inc.

brian