#174 System-/concept-related behaviour (seems like)

v6.0.2
open
nobody
None
1
2013-06-02
2013-02-09
Uem
No

Hello...

System: Windows XP Pro with SP3

There is some strange behaviour that seems to be related either to the underlying system or to the original Ultradefrag design. I have observed the following for several versions of Ultradefrag now:

  1. When defragmenting big volumes with plenty of files and a high fill rate, Ultradefrag does not move all non-contiguous data to the free space at the very end of the medium. Instead it moves a certain amount of data to the end of the medium, but then starts moving data to the beginning and "tries" to fill all gaps towards the beginning. At least, Ultradefrag accepts gaps and some fragmented areas, although, according to the configuration, it should have created one contiguous region.

  2. The Windows GUI of Ultradefrag refreshes its display content according to the refresh_interval parameter set in the configuration file. As is noticeable from the I/O activity LED, the current defragmentation cycle is halted temporarily, right in the split second in which the display refresh happens. Right afterwards the cycle continues and/or is replaced by a new cycle. Since this oddity happens hundreds of thousands of times during one defragmentation process, I suspect it has an undeniable impact on the total duration. Is there a way to reduce this overhead, or at least to resolve the impact through conceptual changes in Ultradefrag?

Best Regards Udo Erwin Müller

Discussion

1 2 > >> (Page 1 of 2)
  • Stefan Pendl
    2013-02-09

    • status: unread --> open
     
  • Stefan Pendl
    2013-02-09

    1. If you are running a quick optimization, then this is expected behavior. It is not expected for regular defragmentation, since that moves nothing to the back of the drive. So we would need to know which kind of processing you are using.

    2. The refresh interval can be set to a higher value if you don't need as many progress updates. Increasing the value may result in a misleading progress display, but that has no impact on the quality of the defragmentation process.

    For some people it is very important that the progress display starts at 0% and ends at 100%, which is unlikely to happen if you increase the refresh interval.
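    For reference, the setting in question lives in Ultradefrag's Lua-based options file. A minimal sketch, where the concrete value of 10000 is only illustrative and the assumption that the interval is given in milliseconds should be checked against the comments in the shipped options file:

    ```lua
    -- Ultradefrag options file (Lua syntax)
    -- How often the GUI redraws the cluster map and progress figures.
    -- A larger value means fewer I/O pauses for redrawing, at the cost of a
    -- coarser progress display (it may no longer start at 0% or end at 100%).
    refresh_interval = 10000  -- illustrative value, assumed to be in milliseconds
    ```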

    The documentation includes some hints about the performance of the three ways to process drives.


    Stefan

     
  • Uem
    2013-02-28

    Hello Stefan

    I am sorry for having mistaken full optimisation for defragmentation. But the problem still exists: Ultradefrag does not fully optimise a drive into one contiguous region (I only have one partition per drive!). It already stops optimisation at 0.01 or 0.02% fragmentation, no matter how high the fill rate or how large the capacity, and various gaps are left. Ultradefrag does not even start a further cycle to compensate for the above.
    I guess you might need some further specifications:
    - Ultradefrag 5.1.2
    - All drives are formatted with NTFS, 512-byte cluster size.
    - My drives range from 320 GByte to 2.0 TByte (multiplier 1,000, not 1,024)
    - The bigger the capacity, the more numerous and the bigger the remaining gaps (subjective impression, no evidence)

    If this is off topic for 6.0.0 RC3, please accept my apologies.

    Best Regards Udo Erwin Müller

     
    Last edit: Uem 2013-02-28
  • Uem
    2013-03-04

    Hello Stefan

    Since my last post I have installed 6.0.0 RC3 to check it against my former experiences with Ultradefrag. Meanwhile a full optimisation is taking place on a 2.0 TByte drive, and I will post the debug log as soon as it is available.

    Best Regards Udo Erwin Müller

     
    Last edit: Uem 2013-03-04
  • Uem
    2013-03-06

    Hello Stefan

    I am now able to provide the pending documents concerning a 2.0TB drive:
    1. the log file (without any valuable content?)
    2. the used config file
    3. a screen shot of the file system state after 2(!) complete optimisation processes
    I hope this all will be of use.

    I am currently setting up a run to optimise a different 2.0 TB drive with a similar fill rate, but using a different computer. This will take some days until the optimisation is complete...

    Best Regards Udo Erwin Müller

     
    Last edit: Uem 2013-03-06
    Attachments
  • Stefan Pendl
    2013-03-06

    Thanks for the logs, I will investigate and report back.


    Stefan

     
  • Stefan Pendl
    2013-03-06

    I had a look at the used options and found the following.

    1. optimizer_file_size_threshold is set to an empty string, so it is using the default value of 20 MB, as described in the comment above it.
      Use, for instance, optimizer_file_size_threshold = "2 TB" to process all files on the disk.

    2. I don't know why all the information is missing from the debug log; maybe there was a problem and the log file is located in the folder %TMP%\UltraDefrag_Logs.

    I recommend using the final release, since it has improved analysis performance.
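    As a sketch of the change suggested in point 1 above, assuming the Lua-based options file format used by Ultradefrag (the exact comment wording in the shipped file may differ):

    ```lua
    -- Files larger than this threshold are skipped by the optimizer.
    -- An empty string falls back to the default (20 MB); setting the threshold
    -- well above the volume size makes the optimizer consider every file on disk.
    optimizer_file_size_threshold = "2 TB"
    ```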


    Stefan

     
  • Stefan Pendl
    2013-03-06

    It just came to my mind that you might not have closed the GUI, and that this is the cause of the empty log file.

    Another way to flush the log information to the file is to select Help => Debug => Open log.


    Stefan

     
  • Uem
    2013-06-02

    Hello Stefan

    I have now downgraded Ultradefrag to 5.1.2 and am using the pending scripts for optimisation. I have let Windows fragment the system drive for almost a quarter of a year to provide reliable conditions. Further results will follow.

     
    Last edit: Uem 2013-06-02