

64-bit math support for large disks

BSzili
2014-02-26
2014-05-27
(Page 11 of 14)
  • kas1e
    kas1e
    2014-04-21

    @BSZili

    Tested. That is fixed, but it seems another problem arises after the fix. Now getsize behaves strangely slowly (I disabled all debug output of course, and double-checked twice that it was disabled).

    I.e. I reboot, run dopus5, then do getsize on a ~50 MB dir.

    The first time it takes 6 seconds and eats some memory (I can see it being eaten block by block in the top bar of dopus5), while dopus4 takes only 1 second for the same directory.

    Then I do getsize again on the same directory, and now it takes 15 seconds, and eats even more memory than in the first test.

    Then I do getsize a 3rd time, and it takes 25 seconds.

    Then I do getsize a 4th time, and it takes 30 seconds.

    The more I run it, the longer it takes. But it always shows the size correctly.

    Just for the sake of comparison I checked the same dir in dopus4, and it always takes 1 second.

    I can add that the previous version (with the limited functions) was in terms of speed even faster than dopus4, though of course the sizes were wrong.

     
    Last edit: kas1e 2014-04-21
  • kas1e
    kas1e
    2014-04-21

    @BSZili
    Ok, fix 993 makes it work fine. Speed is about the same as in dopus4 (a little slower though, but about the same).

    Another problem is that for my big fat directory I still get in dopus4: 4.835.936.840
    and in dopus5: 3.102.428.153

    I started to check directory by directory from the beginning. I found that "links" are counted fine (for example, one of the directories shows 880 bytes less in dopus4 than in dopus5, because there is a link pointing to a file of 880 bytes, so all is fine there).

    So all the dirs inside look ok. But when I do getsize on all of them as a whole, I get 3.102.428.153, instead of 4.835.936.840 + the sizes of the links.

    But in the lister title, when I go inside that directory, it shows correctly: 4.854.846.054 (which is about right, being the dopus4 size + about the size of the links). It's only the "size" field in the lister's body that gets the wrong value.

     
    Last edit: kas1e 2014-04-21
  • BSzili
    BSzili
    2014-04-21

    @kas1e
    The small speed difference can be accounted for by the extra work the Match*64 functions do. This is the price of the one-size-fits-all design.
    I didn't change the way getsizes resolves links, so this is probably not something I broke.

     
  • kas1e
    kas1e
    2014-04-21

    @BSZili

    I didn't change the way getsizes resolves links, so this is probably not something I broke.

    Links resolve fine as far as I can see (i.e. they are counted now, all ok). What is wrong is that getsize puts the wrong size of the directory in the lister's size field if it is 4 GB+. Whereas, when I go inside that directory after doing getsize on it, the size is shown correctly on the lister's status line.

     
  • xenic
    xenic
    2014-04-21

    @all
    I think the Dopus5 GetSizes recursion is failing on broken links. My AmiCygnix directory is full of links that use the Cygnix: assignment (e.g. a file linked to Cygnix:bin/python). I don't set the Cygnix: assignment in my startup-sequence; I use a script that makes the assignment and starts AmiCygnix. If I perform GetSizes without the Cygnix: assignment, the GetSizes recursion seems to terminate early and I get a bad size for the directory. If I make the Cygnix: assignment and then run Dopus5 and perform GetSizes on my AmiCygnix directory, I get the correct size. kas1e, could you check the directory that gets the bad size from GetSizes and see if it contains broken links? If you list the directory with the "List all" command you will see "UNRESOLVED links" in the list output.

    GetSizes speed: We only need the OS4 ExamineObject() if the file is greater than 2 GB. If we add a "handle" argument to the MatchFirst64/MatchNext64 functions or change the argument from AnchorPath to handle, we can compare fib->fib_NumBlocks to a certain number of blocks and just copy fib->fib_Size to fib->fib_Size64 if fib->fib_NumBlocks is less than a certain amount. Something like this:

    if (fib->fib_NumBlocks < 4194304 / (handle->dest_data_block_size / 512))
        fib->fib_Size64 = fib->fib_Size;   // small file: skip ExamineObject() entirely

    I haven't had time to figure out the exact math but it might speed things up.
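
    Roughly (I haven't verified the exact numbers), the constant seems to work out as:

    4194304 blocks * 512 bytes/block = 2^31 bytes = 2 GB
    so "fib_NumBlocks < 4194304 / (block_size / 512)" is about the same test as "fib_NumBlocks * block_size < 2 GB"

    i.e. ExamineObject() should only be needed when the file could exceed the 32-bit signed size limit.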

     
  • xenic
    xenic
    2014-04-22

    @all
    I added renamed & modified MatchFirst64/MatchNext64 functions to the top of function_files.c for testing purposes, and my suggested speedup cuts GetSizes for a 4GB+ directory from 25+ seconds to a little less than 4 seconds. Adding a FunctionHandle argument to the library Match functions means we would need to add a FunctionHandle declaration to the library. It might be better to just add a ULONG argument for passing "handle->dest_data_block_size" to the functions. Either that, or leave the renamed & modified functions in function_files.c, since that's the only place we're using matchfirst/matchnext to get sizes.
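
    Just for illustration (these prototypes are only guesses, not the real ones), the two options would look something like:

    // hypothetical prototypes, for discussion only
    LONG MatchFirst64(STRPTR pat, struct AnchorPath *anchor, FunctionHandle *handle);  // option 1: pass the whole handle
    LONG MatchFirst64(STRPTR pat, struct AnchorPath *anchor, ULONG block_size);        // option 2: pass just the block size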

    Attached is my modified function_files.c file which you can compile into the OS4 binary and see the speed difference. I didn't commit it because I don't know if you guys want to implement it in the library or leave it in function_files.c.

     
  • BSzili
    BSzili
    2014-04-22

    @xenic
    Just to clarify, I don't plan to implement anything, since I'm done with the Magellan development. If some future change breaks the AROS build, then I'm sure someone will poke me :P
    It's up to you if you want to modify my version in the library, or leave it in function_files.c. One small note: you are setting fib_Size64 to fib_Size twice. Once in the #ifndef __MORPHOS__ block, and then you set it again if the size is smaller than 2G.
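
    Roughly what I mean (reconstructed from memory, the exact surrounding code may differ):

    #ifndef __MORPHOS__
    fib->fib_Size64 = fib->fib_Size;    // set once here...
    #endif
    ...
    if ( /* size below 2 GB */ )
    fib->fib_Size64 = fib->fib_Size;    // ...and set again here; one of the two can go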

     
  • kas1e
    kas1e
    2014-04-22

    @Xenic

    kas1e, could you check the directory that gets the bad size from GetSizes and see if it contains broken links? If you list the directory with the "List all" command you will see "UNRESOLVED links" in the list output.

    So I just went to my work:--ports/ directory (the one that is about 5 GB in size and gives the wrong getsize), did "list all" there, and I didn't get any "unresolved" words in the output.

    Then I did getsize on that directory in dopus5, and this is what I got in the debug output:

    [links.c:84 ReadSoftLinkDopus] couldn't resolve link guidep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link joydep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link threaddep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link sounddep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link gfxdep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link osdep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link machdep: object not found
    [links.c:84 ReadSoftLinkDopus] couldn't resolve link awk: object not found
    

    But then, as I say, if I go inside the work:--ports/ directory, all the directories (which also contain all the working/non-working links) show the correct size, and when I add up all the sizes of the directories inside that dir on a calculator, they sum to about 5 GB. I.e. when I am inside that directory and do getsize on all the directories placed in it, the size is correct for every one of them.

    The problem is that the final getsize doesn't add everything together and doesn't show me those 5 GB. Whereas, when I just go inside the work:--ports/ directory, mark all directories, and do getsize on them from a button (so as to count them all), the size is correct for all of them, and in the lister's status bar I get a size of about 5 GB, which is right.

    Imho getsize still fails to calculate the whole sum of files/directories for the final value when it is more than 4 GB. On the other hand, it is getsize for sure, because our 64-bit functions seem to work fine (the lister title correctly shows the whole 5 GB when I just go inside the big directory and do getsize on all the directories, each of which is smaller than 4 GB).

    Maybe somewhere an "add" was just forgotten or something...

    for a 4GB+ directory from 25+ seconds to a little less than 4 seconds.
    Attached is my modified function_files.c

    Tested. Wow! Before: 17 seconds on my ~5 GB directory; with yours: 3 seconds. Our bug is still here of course, but the speed is MUCH better. I mean really better! Even dopus4 takes about 4-5 seconds for it.

    I didn't commit it because I don't know if you guys want to implement it in the library or leave it in function_files.c.

    If the speed can stay like this and all the 64-bit support works, then my bet is that we should implement it in the 64bit.c functions later, without those ifdefs. But for now let's leave it like this, until GetSize works properly. Once it does, and all is fine in terms of 64-bit, we can add it to the library.

     
    Last edit: kas1e 2014-04-22
  • xenic
    xenic
    2014-04-23

    @all
    I fixed the "broken" link bug in GetSizes and added new "match" functions that significantly increase the speed of GetSizes for OS4. However, SourceForge was down yesterday and now the repository is listed as "Locked" when I perform "svn status" & "svn update". The "svn update" fails and informs me that all files are locked and that I need to perform "svn cleanup" before I can do anything in the repository. I've never used the "cleanup" command and don't want to break the repository.

    I'm not going to commit my changes until I'm sure it's safe. Does anyone know what we should do about this problem or how to safely perform a cleanup command?

     
  • xenic
    xenic
    2014-04-23

    @BSzili

    It's up to you if you want to modify my version in the library, or leave it in function_files.c. One small note: you are setting fib_Size64 to fib_Size twice. Once in the #ifndef __MORPHOS__ block, and then you set it again if the size is smaller than 2G.

    Good catch. Removing the extra fib_Size64 setting speeds up GetSizes even more. Thanks.

     
  • kas1e
    kas1e
    2014-04-23

    @xenic
    I usually work this way: if I have any problems, I just delete the whole local svn copy and do a fresh/clean checkout, and then, on top of that, add my latest changes and commit.

    But if you don't want to go that way, you can try cleanup and co, because even if something goes wrong, we can always roll back to a previous svn revision and all will be ok.

     
  • BSzili
    BSzili
    2014-04-23

    @xenic
    Isn't cleanup performed on your working copy? In that case it should be safe to do, after you've backed up your files of course.
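
    If I remember the usual svn workflow right, something like this from the top of the working copy should be enough:

    svn cleanup    # removes stale locks from the working copy only; nothing is sent to the server
    svn update
    svn status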

     
  • xenic
    xenic
    2014-04-23

    @all
    O.K. I didn't want to break anything but I performed "cleanup" and I was able to commit my changes. Everyone will need to clean their sources and compile everything from scratch. The speedup fix involved adding an element to the FunctionHandle structure to hold the source block size. The test version of function_files.c I uploaded used the destination block size, which won't work if the destination lister has a blocksize different from the source lister.

    After everyone tests my changes to be sure they are working, I plan to remove ExamineFirst64 & ExamineNext64 from the library. We don't need functions in the library that are only used in one place and for one specific purpose. In addition, the replacement functions in function_files.c have a FunctionHandle argument, which would mean the huge FunctionHandle structure would need to be moved to the library and its includes if the functions were in the library. I also think GetSizes will be slightly faster for OS3 without the overhead of copying arguments to 68k registers for a library function call.

    I still need to fix a non-fatal Copy bug. When a directory copy is completed, an incorrect size is displayed beside the copied directory in the destination lister. BSzili placed a warning about the "high 32 bits" in the copy code, so I need to see if that can be fixed. It may take a couple of days because I have some other obligations too.

    The other problem we should try to fix is the OS3 crash when executing an external command from a Dopus5 button. I added a topic for that problem because I don't get any useful information for OS3 crashes in an OS4 Grim Reaper stack trace. If anyone can find out what function is failing on another platform (MOS, AROS or OS3 classic Amiga), please post it in the topic I started for that issue. Otherwise, it could take a long time to track down the problem.

     
  • xenic
    xenic
    2014-04-24

    @kas1e
    I hate to disappoint you but I did some more testing and in addition to the Copy issue I mentioned in my last post, copying links is totally borked. Reading of soft link directories in GetSizes is also now screwed up. I compiled several older backup copies of DOpus5 source and found that as of April 14 (rev 982?) link reading in GetSizes works correctly and Copy of links works correctly. So far I can't tell what went wrong and can't fix the problems. I'll keep looking.

     
  • kas1e
    kas1e
    2014-04-24

    @Xenic
    I checked your last commit, and getsize works now! It is also as fast as in dopus4, yeah! On my ~5 GB partition it's a veeery little slower than dopus4, if not the same. For example, after a clean reboot, when no directory indexing has happened yet, dopus4 does getsize on my ~5 GB partition in about 24 seconds, and dopus5 does the same after reboot in 26 seconds. But that's ok, since dopus4 doesn't count links while dopus5 does, so the speed is the same. And after the indexing is done (i.e. once I have run getsize once after reboot), the same getsize takes 4 seconds in dopus4 and 5 seconds in dopus5, but with links involved that means it's all the same. Maybe the little debug output we still have also causes a bit of the difference, so in the end we can say the speed is the same as in dopus4, and that is good.

    Probably, at some later stage (if we even worry about it), we could add an ENVARC: variable meaning "count links or not" (i.e. either keep it as it is now, or do it as in dopus4). Or it could even be an option in config.module, somewhere like Miscellaneous: Count Links ON/OFF. But of course it is fine as it is now too.

    As for the "os3 crash when executing an external command": I found the root cause, please check my answer in the relevant topic.

     
    Last edit: kas1e 2014-04-24
  • xenic
    xenic
    2014-04-24

    @kas1e
    Even though the OS4 version appears to be working fine, there are problems with directory links, as I mentioned in my previous post. I have a test directory on an FFS partition that has hard and soft file links and directory links, and it produces GetSize failures (wrong size) and Copy problems. I need to investigate the problem. Please give me some time to examine the problems; as you know I'm slow.

     
  • xenic
    xenic
    2014-04-25

    @kas1e
    You found that debug output was causing crashes in OS3 but now I'm having debug problems with OS4. Directory links are causing Dopus5 to get wrong directory sizes. When I put a bunch of debug statements in function_files.c, GetSizes stops showing any size in the lister. When I diff the file with debug statements against the non-debug version all I see are added debug statements. It just doesn't make sense. Try compiling OS4 version with the attached file instead of the repository version and see if GetSizes stops working for you too. Maybe it's a problem with my computer; don't know. It's just making it nearly impossible to find a fix for directory links.

     
  • kas1e
    kas1e
    2014-04-26

    @Xenic
    I commented out the os3 debug printfs that cause crashes and committed that to the repo, and I also tried your attached function_files.c on the latest svn: when I do getsize, the printfs happen, but nothing is shown, just as you say.

    So then, step by step, disabling the printfs one by one, I found what is wrong: in function_end_entry() you have:

    }
    // Entry we must enter
    else
    D(bug("Entry we must enter\n"))
    if (entry->flags&FUNCENTF_ENTERED && deselect)
    {
    

    I.e.

    }
    else
    // anything put here becomes the else body and the next "if" breaks, because the original coders didn't put the { } needed to avoid such problems.
    if
    {
    

    So to make your debug printfs work there without breaking the next "if", you need to add a { after the else, and of course one more } at the end, so that the whole else branch is properly wrapped in { }.
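
    I.e. just a sketch (the ... stands for whatever is already inside the branch; the body itself stays unchanged):

    }
    // Entry we must enter
    else
    {
    D(bug("Entry we must enter\n"))
    if (entry->flags&FUNCENTF_ENTERED && deselect)
    {
    ...
    }
    }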

    To tell the truth, I run into such shit from time to time, and I don't know why Amiga programmers (or all programmers?) from the past avoided using { } (probably they thought code looks cool without { }, dunno). Having properly bracketed else/if constructs is always good for any other coders who work with the code later.

    So, my bet: we should add the { } there and commit it to the repo. All those "else"s without { } suck for sure. If I remember right, dopus5 has this in a few more places; we were just lucky not to have needed to put code in such places before. Adding { } in all those places will just make the code look better, and in the end make it impossible to run into the kind of problem you hit.

    I will hold off committing that until you say you are ok with it (or maybe you can commit it yourself, since you are working on that file at the moment anyway)?

     
    Last edit: kas1e 2014-04-26
  • xenic
    xenic
    2014-04-26

    @kas1e
    No. I don't want to commit those debug statements to the repository. They would slow down the GetSizes directory scan. I just need them temporarily to see how the scan is working. Thanks for finding the problem, because I think I now see what is happening in the code.

    Unfortunately, we will probably need to have GetSizes skip directory links. The reason we are having problems in OS4 Dopus5 is that OS4 "Lock()" automatically resolves directory links and enters the linked sub-directory, with no way to return to the correct directory. That's not happening on the other platforms because they can't enter a directory link without calling "ReadLink()" to resolve it, and they either skip directory links or fail the way our broken links in big directories did before I added a workaround.

    The bottom line is that non-OS4 platforms cannot enter a directory link and OS4 enters them and gets lost. The only solution that I can see is to apply the same workaround to directory softlinks as I did for broken softlinks. That should avoid directory link problems on all platforms. However, the file sizes in directory links will not be counted by GetSizes.
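
    Something along these lines in the scan (only a sketch, the exact field names in our 64-bit FileInfoBlock may differ; ST_SOFTLINK comes from the dos includes):

    // skip soft links entirely, so OS4 Lock() never silently follows them into the target
    if (fib->fib_DirEntryType == ST_SOFTLINK)
    continue;   // the link target's size is simply not counted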

    If I didn't make the directory softlink issue clear in the above explanation, let me know and I'll try again. If you want a release soon, then I think I should just add a workaround to skip the directory softlinks for now and see if I can fix the minor destination lister size display problem when copying a directory.

    Let me know what you think.

     
  • kas1e
    kas1e
    2014-04-26

    @Xenic

    No. I don't want to commit those debug statements to the repository. They will slow down the GetSizes directory scan. I just need temporarily to see how the scan is working. Thanks for finding the problem because I think I now see what is happening in the code.

    I am of course not talking about committing the debug, but about committing those 2 { } after the else which are not in the code now, so nobody runs into problems when other code added to such if/else constructs breaks. I mean, as it is now, just an else and then an if, without { }, makes the code look bad imho, and having { } is right. I.e. even without your debug printfs, a { should come after the else, and one more } at the end, to avoid the code looking like it does now and to avoid the kind of problems we had. And there are a few other places in dopus5 where the original authors didn't put { }, while imho they should be everywhere, so the code looks cleaner.

    Let me know what you think.

    I think we can go 2 ways:

    1. Keep trying to fix the problems with dir-links. Maybe we need to ask on Hyperion's forum what solutions are available, or I can ask Colin directly for help, maybe he has some ideas, etc. I.e. exactly how to deal with "OS4 Lock() is automatically resolving the directory links and entering the linked sub-directory without the ability to return to the correct directory".

    2. We can go the more or less easy way: as you said previously, it is now easy to disable the counting of links without problems. So we could just disable it, the same as is done in dopus4: don't count links at all in getsize (but then I don't know whether dopus4 copies file/dir links?).

     
    Last edit: kas1e 2014-04-26
  • xenic
    xenic
    2014-04-26

    @kas1e
    The information in my previous post only applied to the current code. Unfortunately, the copy command is also broken when it comes to links. I did some more testing and here is the situation as it stands now:

    The original SASC version works with links for getsizes and copy (32 bit).

    The first 64-bit version of Dopus5 from mid-April, which used FileInfoBlock64 with the 64-bit sizes stored in the FileInfoBlock fib_Reserved element, worked for links in GetSizes and Copy. You talked to Colin Wenzel, who said it wouldn't work, and you got BSzili to change FileInfoBlock64. Now GetSizes and Copy don't work with links.

    The current code is broken in GetSizes and Copy because of links and BSzili is gone. I can't fix the current code so we're left with a bad 64 bit update. If we had left the code the way BSzili did it originally, I think DOpus5 would be O.K. I don't know what to do so I'm stopping for now.

     
  • kas1e
    kas1e
    2014-04-26

    @Xenic
    But BSzili explained that we can't have it the way it was before: Colin said it destroys the context if we use it like that (and it was true: for me it didn't work properly on my setup, but after the new changes it started to work).

    If we had left the code the way BSzili did it originally, I think DOpus5 would be O.K.

    But then it wouldn't work on some setups, because the context of important structures would be trashed (as it was for me, when some parts just didn't work in the lister and memory was eaten in a loop). I.e. the new way is the way to go for sure, something is just wrong somewhere. In theory it does not matter where we store our content; it works as long as nothing gets trashed. And the fact that it worked previously but stopped working now means something is wrong elsewhere. If it works for you when the data is stored in the reserved fields, but not when it is stored in another field of the structure, then it's magic, as it should all be the same for links. Unless, of course, some typo crept in or other code was added. Maybe some additional check in the new 64-bit functions causes problems, or whatever.

    I don't know what to do so I'm stopping for now.

    Let's take a break from dopus5 for now then, and wait for our motivation to come back. A fresh look is always better.

     
    Last edit: kas1e 2014-04-26
  • xenic
    xenic
    2014-04-26

    @kas1e
    It's hard to tell what went wrong. I'm going to "chill" as you say and then use the code that worked and try adding changes one at a time and testing. Hopefully I can determine where we went wrong. You're right; no need to panic. We just need to work forward from where the code was good. I'll let you know if I find the problem and if it can be fixed.

     
    Last edit: xenic 2014-04-26
  • xenic
    xenic
    2014-04-26

    @kas1e
    It's somewhat embarrassing but after ranting about a lot of other things going wrong, I discovered that the problem was the fix for broken links in large directories. That "fix" broke the processing of good links so I removed the bad fix and committed the changes. That still leaves us with a problem with broken soft links. I haven't found another solution for that yet.

     
  • kas1e
    kas1e
    2014-04-29

    @Xenic
    I see you fixed the function_copy.c file to make the destination lister work with copy and stuff, cool. Though I fixed a few warnings (a (FileInfoBlock64 *) cast needed to be added), and also wrapped it in ifdef USE_64BIT as is done everywhere else.

    And I am starting to think that we probably need to change ALL the Examine() calls (at least) to the 64-bit ones too. Firstly we will get rid of that "deprecated" stuff, and secondly, I found another bug: just do create_dir: the size of the freshly created empty dir is wrong and borked. So I fixed it by making the same replacement of Examine() with ExamineLock64() in an ifdef USE_64BIT block in function_makedir.c. Commit #1003.
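
    I.e. roughly this kind of change (only a sketch from memory; see commit #1003 for the exact code):

    #ifdef USE_64BIT
    ExamineLock64(lock, fib);                      // fills in the 64-bit size fields
    #else
    Examine(lock, (struct FileInfoBlock *)fib);    // old 32-bit call
    #endif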

    Maybe we need to change all the Examine() calls everywhere, but on the other hand we can keep going the way we do now (step by step, and only when problems arise), because this way we will at least know where those 64-bit changes are necessary to make it all work (and later it will be easy for anyone to check the ifdef USE_64BIT blocks to see what actually needed fixing).

     
    Last edit: kas1e 2014-04-29