#171 Memory leak

Milestone: v6.0.3
Status: closed
Owner: nobody
Labels: None
Priority: 5
Updated: 2014-10-19
Created: 2013-01-21
Creator: odinmillion
Private: No

While analyzing the D: drive (2 TB total size, 500 GB free space, 2,000,000 files) the udefrag.exe process freezes at the 100% mark:
- memory usage: 800 MB
- CPU usage: 25% (on a 4-core CPU)
I waited 90 minutes, but nothing happened, so the process never started to defragment the HDD.

1 Attachment

Discussion

(Page 1 of 3)
  • Stefan Pendl
    2013-01-21

    Hello,

    thanks for the log file.

    I noticed that there are some files with a corrupt cluster allocation map, so it might help to first run

    CHKDSK D: /F
    

    to try to correct these.

    In addition, would you mind upgrading to release 5.1.2 to make sure you are using the latest stable release?

    For testing, analyzing alone should be sufficient; if that doesn't work, we would need a DETAILED log of the analysis.

    If the analysis works but defragmenting fails, we would need a DETAILED log from the defragmentation session.

    In addition, could you try the portable version of the first release candidate of release 6.0.0, since we have improved it considerably?

    Sorry for the inconvenience.


    Stefan

     
  • Stefan Pendl
    2013-01-21

    • Status: unread --> open
     
  • odinmillion
    2013-01-21

    Thank you for the fast response!

    I have tried to defragment 12 HDDs, 2 TB each. Some of them were dirty and udefrag printed: "Disk is dirty. Run CHKDSK to repair it." That is a normal situation; after executing "chkdsk d: /f" I was able to run the defragmentation.

    Unfortunately I can test only once a month, because the isolated datacenter is 1000 km away. So my next steps, next month, will be:
    1. Run 5.1.2 and collect a log
    2. Run 6.0.0 RC and collect a log

    But I still have questions:
    1. What is a DETAILED log? Isn't the attached log detailed?
    2. If the HDD is dirty, udefrag should say so. Why didn't udefrag stop the analysis in this case?

    I want to help you improve this product!
    Thank you for a good product!!

     
    Last edit: odinmillion 2013-01-21
  • Stefan Pendl
    2013-01-21

    1) What is a DETAILED log? Isn't the attached log detailed?

    There are multiple log levels; NORMAL is the default.
    The "reporting bugs" section of the handbook explains how to increase the log level for bug reports.
    The increased log level must be reverted to NORMAL after collecting the log, otherwise performance will be lower than expected.
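
    For example, a session with increased logging could look roughly like this (if I remember the handbook correctly the setting is called UD_DBGPRINT_LEVEL; please verify the exact name there):

    set UD_DBGPRINT_LEVEL=DETAILED
    rem ... run the analysis or defragmentation here ...
    set UD_DBGPRINT_LEVEL=NORMAL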

    2) If the HDD is dirty, udefrag should say so. Why didn't udefrag stop the analysis in this case?

    Windows can mark a drive as dirty; this flag is what udefrag checks and reports.
    Sometimes the flag is not set by Windows, so udefrag cannot report the problem.
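
    You can also check the flag yourself from an elevated command prompt:

    fsutil dirty query D:

    which reports whether the volume's dirty bit is currently set.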

    If CHKDSK doesn't report issues, it should be safe to run udefrag.

    By next month the official release will already be 6.0.0, so testing release 5.1.2 can be skipped.


    Stefan

     
  • odinmillion
    2013-01-24

    Hello! I have now reproduced the conditions that trigger the bug. I created 2,000,000 fragmented files of 750 KB each, then started udefrag and selected the "Analyze D:" action. The udefrag process could not finish the analysis after reaching the 100% mark. RAM and CPU usage are shown in the attachment. The test machine is right next to me, so I can take any additional steps to collect more debug info. I want to improve udefrag; sorry for my bad English.
    UPD1: The bug appears in 5.1.2.0, 5.1.1.0, and 6.0.0.0 RC1.
    UPD2: The bug appears in 4.0.0.0 and 5.0.0.0.
    UPD3: The standard Windows XP defrag utility generates a correct analysis report.

     
    Last edit: odinmillion 2013-01-24
    Attachments
  • Stefan Pendl
    2013-01-24

    Thanks for the steps to duplicate the problem.

    We will do our best to get this resolved.


    Stefan

     
    • odinmillion
      2013-01-24

      If you need my app that generates fragmented files on the HDD, I can send you the source code. The approach is simple: up to 20 threads write bytes to separate files concurrently (one file per thread, so up to 20 files at a time). When a group of 20 threads finishes, the next group starts. As a result, after roughly 12-15 hours we get 2,000,000 fragmented files (1.5 TB total). A rough sketch of the idea is below.
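
      Just to illustrate the idea (my real generator is not written in C; the D:\fragtest path, group count and chunk size below are only placeholders), a sketch in C could look like this:

      #include <windows.h>
      #include <stdio.h>

      #define THREADS_PER_GROUP 20
      #define GROUPS            100            /* raise for the full 2,000,000 files */
      #define FILE_SIZE         (750 * 1024)   /* ~750 KB per file, as in the test */
      #define CHUNK_SIZE        (4 * 1024)     /* small interleaved writes promote fragmentation */

      /* each thread writes one file in small flushed chunks, so the chunks of
         the 20 concurrent files interleave on disk and end up fragmented */
      static DWORD WINAPI writer(LPVOID param)
      {
          const char *path = (const char *)param;
          char chunk[CHUNK_SIZE] = {0};
          size_t written = 0;
          FILE *f = fopen(path, "wb");

          if (!f) return 1;
          while (written < FILE_SIZE) {
              fwrite(chunk, 1, CHUNK_SIZE, f);
              fflush(f);
              written += CHUNK_SIZE;
          }
          fclose(f);
          return 0;
      }

      int main(void)  /* the target folder D:\fragtest must already exist */
      {
          char paths[THREADS_PER_GROUP][MAX_PATH];
          HANDLE threads[THREADS_PER_GROUP];
          long file_index = 0;
          int group, i;

          for (group = 0; group < GROUPS; group++) {
              for (i = 0; i < THREADS_PER_GROUP; i++) {
                  sprintf(paths[i], "D:\\fragtest\\file_%07ld.bin", file_index++);
                  threads[i] = CreateThread(NULL, 0, writer, paths[i], 0, NULL);
              }
              /* wait for the whole group before starting the next one */
              WaitForMultipleObjects(THREADS_PER_GROUP, threads, TRUE, INFINITE);
              for (i = 0; i < THREADS_PER_GROUP; i++)
                  CloseHandle(threads[i]);
          }
          return 0;
      }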

       
      • Stefan Pendl
        2013-01-24

        I don't own such a big disk, but am already preparing a smaller test disk.

        I think it is the number of files that is causing the problem, so the file size can be ignored.


        Stefan

         
        • odinmillion
          2013-01-24

          Yes, I am 99% sure you are right. When I tested udefrag at the datacenter, I noticed that another 2 TB HDD (500 GB free space) with 1,000,000 files was processed correctly.

           
  • Stefan Pendl
    2013-01-24

    • milestone: v5.1.2 --> v6.0.0 rc 1
     
  • odinmillion
    2013-01-28

    Hello again! Do you have any positive news?
    Please feel free to contact me if you need additional information about this bug.

     
  • Stefan Pendl
    2013-01-29

    Sorry, no.

    Creating 2,000,000 fragmented files is taking too long on my system.

    Could you be so kind as to create a more verbose debug log file?

    1. download DbgView from the SysInternals page
    2. start it as Administrator and capture win32 and global win32
    3. run UltraDefrag with the PARANOID debugging level
    4. run an analysis
    5. end UltraDefrag
    6. save the DbgView display to a file, ZIP it and attach it

    This should give us a better hint about where the problem occurs.
    DbgView is needed to get a complete log file, because UltraDefrag itself hangs.


    Stefan

     
    • odinmillion
      2013-01-30

      Thank you for your interest in this bug!
      I have run the test twice, with a separate folder for each test. In the second test I enabled all possible checkboxes in the DbgView options.
      And remember: "please feel free to contact me if you need additional information about this bug"!!! I want to help you!

       
      Last edit: odinmillion 2013-01-30
      Attachments
      • Stefan Pendl
        2013-01-30

        Thanks for the logs, they have already been of help.


        Stefan

         
  • Stefan Pendl
    2013-01-29

    I would like you to try one more thing: running the udefrag command-line utility to check whether it hangs too.
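
    Something like this should be enough for that test (assuming the -a switch limits the run to the analysis; please check the built-in help for the exact syntax):

    udefrag -a D: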


    Stefan

     
    • odinmillion
      2013-01-30

      Hello! The command-line utility hangs too. I was surprised that I could collect logs only from the GUI version (via a special menu item). That is why I used the GUI tool from then on.

       
      • Stefan Pendl
        2013-01-30

        The log collection runs in a separate thread, so the program only seems to hang; it is actually stuck in a long-running operation.

        Thanks for testing the command-line utility.
        Could you also attach the PARANOID log, as described in the post above?


        Stefan

         
        • odinmillion
          2013-01-30

          Of course! Please see my answer with the attachment above.

           
  • Stefan Pendl
    2013-02-02

    We have investigated this problem and found that a failing memory allocation is the likely cause.

    We haven't found a quick way to handle out-of-memory conditions gracefully for v6, so we terminate the entire process when an allocation fails, to avoid hanging the system, especially at boot time.
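
    Just to illustrate the strategy (this is not the actual UltraDefrag source), a fail-fast allocation wrapper in C could look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate memory or terminate: on failure, print a message and exit
       instead of continuing with an incomplete file map or hanging the system. */
    static void *xmalloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "out of memory (%lu bytes requested), terminating\n",
                    (unsigned long)n);
            exit(EXIT_FAILURE);
        }
        return p;
    }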

    If your files are organized into separate folders, then it would be possible to process one folder after the other to defragment the drive.
    If this is not the case, we can work out a solution together so you can successfully process the drives in your datacenter.

    Sorry for the inconvenience.


    Stefan

     
    • odinmillion
      2013-02-05

      Thank you. I have two questions:
      1. I write software using C# + JS + HTML + XML etc., not C++. I want to help fix the bug, but I don't know what I should do. Please tell me.
      2. Approximately when do you plan to fix the error?

       
  • Stefan Pendl
    2013-02-05

    v6 is written entirely in ANSI C, so we don't use any C++ features at all.

    The processing code must run in the native stage of Windows during boot, so there is no way to port it to a higher-level language.


    We are still investigating, so if you don't mind, could you send us a debug log from the next v6 release? This should allow us to further narrow down the part that is running out of memory.


    Stefan

     
    Last edit: Stefan Pendl 2013-02-05
    • odinmillion
      2013-02-06

      Hello! The logs are attached. Please give me feedback after analyzing them. Thank you!

       
      Last edit: odinmillion 2013-02-06
      Attachments
      • Stefan Pendl
        2013-02-06

        I have had a quick look at the log, and it seems you have been a bit too impatient, no offense intended.

        The part of the analysis that would run into the out-of-memory condition started at 15:45, and the program was terminated at 15:49.

        If possible, could you wait until the program terminates on its own?
        This would allow us to confirm that it is an out-of-memory condition.

        BTW, a normal debug log should be sufficient if you use the DbgView program.

        Thanks in advance.


        Stefan

         
        • odinmillion
          2013-02-07

          No problem. I can wait longer.

           
          • odinmillion
            2013-02-07

            Hello! I waited 40 minutes, but the udefrag process was still working: 800 MB of RAM and 100% core usage.

             
            Last edit: odinmillion 2013-02-07
            Attachments
(Page 1 of 3)