#92 cssclean corrupts hashfile

daemon (84)
Enrico Scholz

cssclean.c does the following:

| hash_rec_max = strtol(READ_ATTRIB("HashRecMax"), NULL, 0);
| if (_hash_drv_open(filename, &old, 0, max_seek,
| max_extents, extent_size, pctincrease, flags))
| ...
| if (_hash_drv_open(newfile, &new, hash_rec_max, max_seek,
| max_extents, extent_size, pctincrease, flags)) {
| ...
| /* preserve counters */
| memcpy(new.header, old.header, sizeof(*new.header));

This causes errors (e.g. a floating-point exception due to a division
by zero) or data corruption when the old file has a hash-record size
(old.header->hash_rec_max) that differs from the configured one.

Such a situation arises, for example, when the configuration has been
changed or 'csscompress' has been run over the file.

The hash driver will then work with the configured 'hash_rec_max' for
a while, but when the file is closed, the hash_rec_max value of the
old file is written back; subsequent operations then use the wrong
hash divider and calculate wrong offsets for the next extent.


  • bofh999

    Many thanks.

    Since I switched to SBPH I had to use the hash driver;
    however, I had major issues at first because cssclean didn't work in the current version of Zimbra.

    After adding the git version to Zimbra I started to use cssclean and, of course, csscompress; argh, bad mistake.

    It shot my database to hell four times now, so many thanks for discovering that one.

    So does that mean it should be fine if we don't use csscompress?
    What are the downsides? How big can the file grow?
    Is there any safe way to do the compress at some point?