#171 Memory leak

Milestone: v6.0.3
Status: closed
Owner: nobody
Labels: None
Priority: 5
Updated: 2014-10-19
Created: 2013-01-21
Creator: odinmillion
Private: No

While analyzing the D: drive (2 TB total size, 500 GB free space, 2,000,000 files), the udefrag.exe process freezes at the 100% mark:
- memory usage: 800 MB
- CPU usage: 25% (on a 4-core CPU)
I waited 90 minutes, but nothing happened, so the process never started defragmenting the HDD.

1 Attachment

Discussion

1 2 3 .. 6 > >> (Page 1 of 6)
  • Stefan Pendl
    2013-01-21

    Hello,

    thanks for the log file.

    I noticed that there are some files with a corrupt cluster allocation map, so it might help to first run

    CHKDSK D: /F
    

    to try to correct these.

    In addition, would you mind upgrading to release 5.1.2 to make sure you are using the latest stable release?

    For testing, analyzing alone should be sufficient; if that doesn't work, we would need a DETAILED log.

    If the analysis works, but defragmenting fails, we would need a DETAILED log from the defragmenting session.

    In addition, could you try the portable version of the first release candidate of release 6.0.0, since it has been improved considerably?

    Sorry for the inconvenience.


    Stefan

     
  • Stefan Pendl
    2013-01-21

    • Status: unread --> open
     
  • odinmillion
    2013-01-21

    Thank you for the fast response!

    I tried to defragment 12 HDDs, 2 TB each. Some of them were dirty, and udefrag printed: "Disk is dirty. Run CHKDSK to repair it." That is a normal situation; after executing the command chkdsk d: /f I was able to run the defrag process.

    Unfortunately, I can only test the applications once a month because of the 1000 km distance to the isolated datacenter. So my steps for next month will be:
    1. Run 5.1.2 and collect a log
    2. Run the 6.0.0 RC and collect a log

    But I still have questions:
    1. What is a DETAILED log? Isn't the attached log detailed?
    2. If an HDD is dirty, udefrag should say so. Why didn't udefrag stop the analysis in this case?

    I want to help you improve this product!
    Thank you for a good product!

     
    Last edit: odinmillion 2013-01-21
  • Stefan Pendl
    2013-01-21

    1) What is a DETAILED log? Isn't the attached log detailed?

    There are multiple log levels; NORMAL is usually the default.
    The "reporting bugs" section of the handbook explains how to increase the log level when reporting bugs.
    The increased log level must be reverted to NORMAL after collecting the log, otherwise performance will be lower than expected.
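
    For illustration only, a detailed-log session on the console might look like the sketch below. The variable names UD_DBGPRINT_LEVEL and UD_LOG_FILE_PATH follow the conventions described in the UltraDefrag handbook, and the log path is a made-up example; check both against the handbook for your release.

```shell
rem Raise the log level for a single bug-report session
rem (variable names per the handbook; verify for your release).
set UD_DBGPRINT_LEVEL=DETAILED
set UD_LOG_FILE_PATH=C:\udefrag-detailed.log

rem Run the analysis that triggers the problem.
udefrag -a D:

rem Revert to the default level afterwards, otherwise
rem performance will be lower than expected.
set UD_DBGPRINT_LEVEL=NORMAL
```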

    2) If an HDD is dirty, udefrag should say so. Why didn't udefrag stop the analysis in this case?

    Windows can mark a drive as dirty; this flag is what udefrag checks and reports.
    Sometimes this flag is not set by Windows, so udefrag cannot report the problem.

    If CHKDSK doesn't report issues, it should be safe to run udefrag.

    Next month the official release will already be 6.0.0, so testing release 5.1.2 can be skipped.


    Stefan

     
  • odinmillion
    2013-01-24

    Hello! I have recently reproduced the conditions that lead to the bug. I created 2,000,000 fragmented files of 750 KB each, then started udefrag and selected the "Analyze D:" action. The udefrag process could not finish the analysis after reaching the 100% mark. RAM and CPU usage are shown in the attachment. The test machine is right next to me, so I can take any additional steps to collect more debug information. I want to improve udefrag, and sorry for my bad English.
    UPD1: The bug appears in 5.1.2.0, 5.1.1.0, and 6.0.0.0 RC1.
    UPD2: The bug appears in 4.0.0.0 and 5.0.0.0.
    UPD3: The standard Windows XP defrag utility generates a correct analysis report.

     
    Last edit: odinmillion 2013-01-24
    Attachments
  • Stefan Pendl
    2013-01-24

    Thanks for the steps to duplicate the problem.

    We will do our best to get this resolved.


    Stefan

     
    • odinmillion
      2013-01-24

      If you need my app that generates the fragmented files on the HDD, I can send you the source code. The approach is simple: up to 20 threads write bytes to separate files (one file per thread, so up to 20 files at a time). When a group of 20 threads finishes its work, the next group of threads starts on the next batch. As a result, after about 12-15 hours you get 2,000,000 fragmented files (1.5 TB total).
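
      The batching scheme described above can be sketched roughly like this (Python used for illustration; the original tool's file names, chunk size, and flush strategy are assumptions, not the poster's actual code):

```python
import concurrent.futures
import os

def write_fragmented(path, size=750 * 1024, chunk=4096):
    """Write `size` bytes in small flushed chunks so that writers running
    in parallel interleave their cluster allocations on disk."""
    with open(path, "wb") as f:
        written = 0
        while written < size:
            n = min(chunk, size - written)
            f.write(os.urandom(n))
            f.flush()
            os.fsync(f.fileno())  # force each chunk out before the next one
            written += n

def generate(root, count, batch=20):
    """Create `count` files in groups of `batch` concurrent writers,
    mirroring the 20-threads-per-group scheme described above."""
    os.makedirs(root, exist_ok=True)
    for start in range(0, count, batch):
        names = [os.path.join(root, "frag_%07d.bin" % i)
                 for i in range(start, min(start + batch, count))]
        with concurrent.futures.ThreadPoolExecutor(max_workers=batch) as pool:
            list(pool.map(write_fragmented, names))  # wait for the group
```

      Scaled up to 2,000,000 files this would take many hours, as the poster notes; for a smaller repro, reducing `count` while keeping the per-chunk flushes should still produce heavily fragmented files.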

       
      • Stefan Pendl
        2013-01-24

        I don't own such a big disk, but I am already preparing a smaller test disk.

        I think it is the number of files that is causing the problem, so the size can be ignored.


        Stefan

         
        • odinmillion
          2013-01-24

          Yes, I am 99% sure you are right. When I tested udefrag at the datacenter, I noticed that another 2 TB HDD (500 GB free space) with 1,000,000 files was processed correctly.

           
  • Stefan Pendl
    2013-01-24

    • milestone: v5.1.2 --> v6.0.0 rc 1
     