#1191 file delete -force does not work on read-only directories

obsolete: 8.0p2
Vince Darley

OriginalBugID: 668
Type: Bug
Version: 8.0p2
SubmitDate: '1998-07-29'
LastModified: '2000-01-20'
Severity: MED
Status: Assigned
Submitter: stanton
ChangedBy: hobbs
OS: All
OSVersion: NA
Machine: NA
FixedDate: '2000-01-20'
ClosedDate: '2000-10-25'

If a directory is not writable, file delete -force fails on the first file in the directory.

Like 'rm -rf readonlydir/', this error is to be expected. The docs
didn't mention this, so they have been updated for 8.3.0.
-- 01/20/2000 hobbs
Correction - the intention is that, unlike rm, file delete -force
should delete hell-or-high-water, which means we need to adjust
the perms on a directory before nixing it in the -force case.
-- 01/20/2000 hobbs


  • It might not be possible to delete the file in any case e.g. if it exists in a directory to which we have no write access and where we cannot change the permissions on the file.

    • assigned_to: nobody --> vincentdarley
  • Vince Darley

    Also if 'cwd' is inside the directory, a delete will fail
    (although we could even check for that and do a 'cd [file
    dirname $dir]')

    I assume from Jeff's comments that the idea of 'file
    delete -force' is to try everything possible to get rid of
    the file, including moving the cwd, changing permissions

  • Jeffrey Hobbs

    Yes, the idea would be to attempt other things to actually
    delete the directory. This was mentioned to me by Scott
    Stanton as the original intention (and thus the bug
    report), but I'm not dead set on it.

  • Following on from Vince's comment, things become even more
    sticky when the 'cwd' of another process is inside the
    directory. At that point, you're basically stuck. (I have
    no idea if there are networked filesystems that support that
    style of directory locking, but if there are you have the
    situation where a blocking process might not even be on the
    same computer. But even without that finesse, the point
    still holds since you can't change the directory of other
    processes underneath their feet...)

    Anyway, hell-or-high-water isn't possible. Best effort
    (probably up to about the level of 'rm -rf' I'd guess) is.

  • Vince Darley

    The changing permissions piece is very easy to implement.
    In 'DoRemoveDirectory' (tclUnixFCmd.c), we simply need to add:

    if (recursive) {
        /* We should try to change permissions
         * so this can be deleted */
        chmod(path, XXX);
    }

    Could someone enlighten me as to what the correct value
    of 'XXX' should be here?

  • Vince Darley

    It would also be useful to have a new test which currently
    fails on unix, but should pass once the 'chmod' call above is
    added -- any takers?

  • I'd guess the permissions to be 0700; don't care about
    losing perms for other people as the dir is about to be
    nuked anyway. Though if the delete then fails, we must
    restore the perms to what they were before since the failure
    might be due to another process owned by another user using
    that dir as a working directory and making it fail in that
    case is not too good. I think...

    The classic way to test this is (I suppose) to create a
    directory with a file in it, change its permissions to
    something aggressively unreadable, and then attempt to
    delete it. We can then force an unfixable failure by
    putting a subprocess in there (which would also be good for
    making sure that the permissions don't end up too screwed in
    the case of a failure.)
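The 0700-plus-restore suggestion above could be sketched as follows. This is a hypothetical helper, not the actual DoRemoveDirectory() code, and it assumes the directory is already empty (the real -force path recurses first):

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Sketch of the suggested -force behaviour for an unwritable
 * directory: open it up for the owner (0700), try the removal,
 * and restore the original permissions if it still fails (e.g.
 * because another process is using it as its cwd).
 */
static int
ForceRemoveEmptyDir(const char *path)
{
    struct stat sb;

    if (stat(path, &sb) != 0) {
        return -1;
    }
    if (chmod(path, S_IRWXU) != 0) {            /* 0700 */
        return -1;
    }
    if (rmdir(path) == 0) {
        return 0;
    }
    /* Removal failed for some other reason: put the old mode back. */
    chmod(path, sb.st_mode & 07777);
    return -1;
}
```

The classic test described above would then create a directory, chmod it to something aggressively unreadable, and check that a forced delete still succeeds.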

  • Vince Darley

    Attached patch to fix this and some other problems. Please
    test on unix (and ideally add a test case...).


  • It strikes me as odd that we have an open fd for the
    temporary file and then we immediately close it only to
    reopen it again (as part of copying.) Wouldn't it be better
    to change TclpTempFileName() to pass back both the name of
    the file and a (binary read-write) channel with which to
    access the file? One of the faults with using tmpnam() as I
    understand it is that you can have an attacker create the
    file with some nefarious content in-between getting the
    filename (of a non-existing-at-the-time file) and creating
    the file. If changed to produce both channel and filename,
    then it is obvious that the name of the function needs to
    change as well (to TclpCreateTempFile for example?)
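One POSIX way to hand back both the name and an open channel atomically, avoiding the tmpnam() race described above, is mkstemp(). The function below is an illustrative sketch of that idea, not Tcl's actual TclpTempFileName(), and the template path is an arbitrary example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Sketch of a TclpCreateTempFile-style interface: mkstemp()
 * creates and opens the file in one step, so no attacker can
 * pre-create it between name generation and creation. Returns
 * the open fd and copies the chosen name into nameOut.
 */
static int
CreateTempFile(char *nameOut, size_t nameLen)
{
    char tmpl[] = "/tmp/tclXXXXXX";
    int fd = mkstemp(tmpl);         /* file is created mode 0600 */

    if (fd < 0) {
        return -1;
    }
    snprintf(nameOut, nameLen, "%s", tmpl);
    return fd;                      /* caller gets fd and name */
}
```

The caller can then wrap the fd in a binary read-write channel instead of reopening the file by name.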
