I think I saw that you don't support the tools anymore, but I thought I'd give it a try.

    sudo ddru_ntfsbitmap /dev/sdd2 domain_logfile

leads to the output:

    ddru_ntfsbitmap 1.5 20150111
    Reading boot sector...
    GNU ddrescue 1.23
    Press Ctrl-C to interrupt
         ipos:    0 B,  non-trimmed:    0 B,  current rate:  512 B/s
         opos:    0 B,  non-scraped:    0 B,  average rate:  512 B/s
    non-tried:    0 B,   bad-sector:    0 B,    error rate:    0 B/s
      rescued:  512 B,    bad areas:      0,      run time:       0s
    pct rescued: 100.00%, read errors:    0,  remaining time:...
Hi, just wanted to close this out. I was able to use an old Dell E6420 laptop, run an Ubuntu 18.04 trial session, and install ddrutility. ddru_ntfsfindbad /dev/sdc mapfile reported an error about an unknown partition, but I think that's because I did not use the -i option to specify the correct partition. I was able to use ddru_findbad /dev/sdc mapfile and that was successful. Scott, I am so thankful for your excellent tool and your help. Best, DF
Just for anyone that tries this on an RPi 3 and Raspbian: it's a 1 TB partition, and my RPi 3 only has 1 GB RAM. I bumped Raspbian to 2 GB swap, but it appears to still run out of memory running ntfscluster:

    Inode 633741 is an extent of inode 629525.
    Inode 633742 is an extent of inode 629525.
    Inode 633749 is an extent of inode 631663.
    Failed to malloc 4096 bytes: Cannot allocate memory
    Couldn't read the data runs.

With the larger memory I'm trying to run ddru_findbad /dev/sda mapfile -Q; this worked, however...
With so few errors you could look into using ntfscluster to find what files, if any, are in those sectors. Here is a link for how to do that. FYI, your cluster size is 4096. ntfscluster also has a sector option, which would then use 512 as the dividing number.
https://radagast.ca/linux/how-to-find-the-ntfs-filename-associated-with-a-bad-block-using-linux.html
https://www.unix.com/man-page/linux/8/ntfscluster/
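To make that arithmetic concrete, here is a minimal sketch; the byte offset and device name are hypothetical, and the offset must be relative to the start of the NTFS partition:

    # Hypothetical bad area at byte offset 123456789 within the partition.
    # cluster = byte_offset / cluster_size (4096 here)
    sudo ntfscluster -c $((123456789 / 4096)) /dev/sdc1
    # or address it in 512-byte sectors instead:
    sudo ntfscluster -s $((123456789 / 512)) /dev/sdc1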
Again, I greatly appreciate the insight. On Raspbian, I just used apt install ddrutility. I'll play around with the makefile today and hopefully that will be it. Excellent tool, and I appreciate the work you put into it! Not too many errors, only 3 errors 4k in size, but last time I ran ddru_findbad on the same Raspberry Pi it locked up, so I gave up.
How did you install ddrutility on Raspbian in the first place? If you followed the standard instructions, you already ran make during the installation process. You need to do that again, but with the makefile modified as described. I do not know of any way to do anything with just the MFT and mapfile, even though that is all that is required. Your only options are to try to recompile ddrutility with the modified makefile, or to use something other than a Raspberry Pi, i.e. a machine that does not have an ARM processor....
To branch this discussion, is there any way to use the MFT and the mapfile to do the same? That way I wouldn't have to connect the USB drive to anything. Thx, DF
The original HDD is from a Dell XPS. I used gddrescue from SystemRescue to clone it to a USB drive on the same machine. Then I used a Raspberry Pi 3 with Raspbian to run ddrutility on that same USB drive. Attached is the dmesg from the Raspberry Pi used to run ddrutility. I used apt install to get ddrutility, so I'll need to figure out how to run a make, or maybe do it on another OS? Thanks again! DF
I think I can see what is happening, but it shouldn't happen. A signed character with a value of 0xF6 is being considered positive when it should be considered a negative value by the system. It would seem your system is defaulting to an unsigned character. What OS are you using, and what is your computer architecture? I am curious about the system and why it is like this. Can you run the following command and provide me the dmesg.txt file? dmesg > dmesg.txt In the source file "makefile" you can...
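The makefile instructions above are cut off; as an educated guess (an assumption, not the author's verified fix), the edit is probably about forcing plain char to be signed, since gcc on ARM treats char as unsigned by default:

    # Assumption: add gcc's -fsigned-char flag so 0xF6 compares as negative.
    # If the makefile honors CFLAGS, the rebuild could look like:
    make clean
    make CFLAGS="-Wall -W -fsigned-char"
    sudo make install
    # Otherwise, append -fsigned-char to each gcc line in the makefile by hand.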
Hi Maximus, thank you so much for helping, I super appreciate it.

    sudo fdisk -lu /dev/sda
    The backup GPT table is not on the end of the device.
    Disk /dev/sda: 1.82 TiB, 2000398933504 bytes, 3907029167 sectors
    Disk model: BUP Ultra Touch
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 8BB1F33B-CBFA-4A6D-AF91-B4EABE93FF50

    Device     Start End Sectors Size Type
    /dev/sda1  2048...
Please provide the output from the following commands:

    sudo fdisk -lu /dev/sda
    sudo lsblk -f /dev/sda
    sudo xxd -s 659554304 -l 512 /dev/sda

fdisk is to verify it is still the same disk from your post. lsblk should show the partition types and verify the targeted one is NTFS. xxd will provide a hex dump of the partition boot sector to verify that it is a valid NTFS boot sector. The error indicates it is not a valid NTFS boot sector.
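For reference when reading that hex dump (general NTFS knowledge, not from the original post): a valid NTFS boot sector begins with a three-byte jump instruction followed by the "NTFS    " OEM ID, so the first xxd line should look roughly like this:

    27500000: eb52 904e 5446 5320 2020 20..  .R.NTFS    ...
    # eb 52 90 = x86 jump instruction
    # 4e 54 46 53 20 20 20 20 = "NTFS    "
    # If those bytes are anything else, -i points at the wrong offset.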
Hi, I tried to run:

    ddru_ntfsfindbad -i 659554304 /dev/sda mapfile
    ddru_ntfsfindbad 1.5 20150109
    Reading the logfile into memory...
    Reading partition boot sector...
    ERROR! This program only allows for 8 sectors per MFT record, and this partition boot sector reports 1968
    ERROR! Unable to correctly process the partition boot sector

Based on fdisk -lu /dev/sda:

    Disk /dev/sda: 1.82 TiB, 2000398933504 bytes, 3907029167 sectors
    Disk model: BUP Ultra Touch
    Units: sectors of 1 * 512 = 512 bytes
    Sector size...
I am glad you found that you could use the older version. I no longer support ddrutility, so any issues will not be fixed. But since it is open source, maybe a fix could be found by yourself or someone else. I am sure I had information on here that stated it is no longer supported, but that information seems to be gone.
Looking back through the releases, it appears that around v2.5 there was some work to refactor the code to use shared files. I was able to successfully download v2.4 and install that, which seems to be working as a workaround.
    user@debian:~/ddrutility-2.8$ make
    gcc -Wall -W ddru_ntfscommon.c ddru_ntfsbitmap.c -o ddru_ntfsbitmap
    ddru_ntfsbitmap.c: In function 'processdata':
    ddru_ntfsbitmap.c:1174:46: warning: '%s' directive writing up to 254 bytes into a region of size between 221 and 240 [-Wformat-overflow=]
     1174 | sprintf (command, "truncate -s %lld \'%s\'", (long long)ntfs_attribute.items.Attr.NonResident.n64RealSize, destination_file);
          |                                      ^~      ~~~~~~~~~~~~~~~~
    ddru_ntfsbitmap.c:1174:8: note: 'sprintf' output between 17...
I have run ddrescue on a failed SSD and have about 500 KB of bad sectors. It looks like ddrutility is the best toolset available for finding out which files are not working. However, I am struggling to install it. I am using GParted Live Linux, installed on a USB thumb drive, since I saw that recommended as a quick way to get a system that runs ddrescue. This doesn't come with gcc/make etc., but I was able to install those correctly, I think. I have been able to download the ddrutility tar.gz file,...
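For anyone else stuck at this point, the standard tarball build is short; this is a sketch assuming the usual source layout (the version number in the path is illustrative):

    tar xzf ddrutility-2.8.tar.gz
    cd ddrutility-2.8
    make
    sudo make install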
First, I am no longer actively supporting ddrutility. Second, your question is beyond the scope of the project. You now know which files are corrupt, so you can extract (copy) them to work with them. You would need to find a tool that repairs .jpg files. Chkdsk will not repair any files at all; it only attempts to repair (sometimes poorly) a corrupt file system, so that missing files might possibly be found. But it can also cause loss of file access, depending on the situation. I would only...
Hi Scott! So thanks to your awesome tool I managed to find which files are related to bad sectors of my hard drive. However, I am not sure how to proceed from here. The entries in my ntfsfindbad.log are .jpg files. How can I repair these files now that I know their location? I read about TestDisk/PhotoRec, but these tools don't seem to be suitable to target specific files. Or should I just run chkdsk on my ddrescue'd hard drive and hope that this will repair the damaged files? I read that chkdsk...
Ok, thank you very much! I will test your new project as soon as possible! Regards, Bert
There is no new version of ddrutility. My new project is HDDSuperClone, but it is not open source. I don't know for sure if ddrutility is still compatible with the latest version of ddrescue, as I don't test it anymore. But I don't have any reports that it is not compatible. Regards, Scott
Hi Maximus, first, I'm sorry for my poor English! :-) I'm a PC technician and I sometimes use ddrescue to recover data from failing hard drives (for the healthy ones I use Clonezilla, which I find better for daily use). I confess I never used your tool because (correct me if I'm wrong) it's mainly a forensics tool for very specific targets. But I would like to learn more about ddrutility and use it along with ddrescue in order to offer my customers a better solution. I read the wiki and...
Hi Guillaume, Congratulations on your (95%) success! I even learned something that I did not know about using cp to copy sparse files. You did a very good job of explaining things. And thank you very much for your donation :) Regards, Scott
Scott, thank you first of all for your tool, which was very useful to me! Thanks for your explanation of the difference between offset and fulloffset; it is clearer now. Thanks for your clarification of why a full disk as source is better. I was too far into the recovery with the partition alone to risk messing anything up by trying to change to the full disk, but after your command I was a bit afraid of losing the MBR during the recovery; I guess I was lucky this didn't happen. I think the whole process described over...
Hello, I'm back after more than 2 months of recovery! Thanks to your wonderful tool, I was able to recover about 95% of the useful files without spending a few years trying to recover the useless ones! I can confirm that the technique explained in my previous post actually works; I will add a few informational things about what to do at the end of the rescue for future people following my post. First of all, I discovered the concept of sparse files during the recovery, so I will explain a bit what it...
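As a side note for readers (standard GNU coreutils behavior, not specific to this thread): a sparse copy stores long runs of zeros as holes, which matters when moving a mostly-empty rescue image around:

    # Punch holes for zeroed blocks while copying the image:
    cp --sparse=always recovered.img recovered_sparse.img
    # Compare apparent size with actual disk usage:
    du -h --apparent-size recovered_sparse.img
    du -h recovered_sparse.img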
I would like to add that there is a case where this won't work for certain files. It is possible for some files' cluster information to be too large to fit in the MFT record, and it is instead located (or continued, I can't remember) in a separate inode that points to a file that contains the data. That is not supported in my software, as it was too complicated at the time and I did not need it for my purposes. I believe there was a reference to this someplace in this discussion area, possibly a...
Wow, that is very creative! I like the way you think! I am obviously not going through it step by step to verify it, but in theory it should work. You even dealt with Excel not liking hex numbers (been there, done that; Excel sucks for hex). I did see you wondered about the difference between offset and fulloffset. I believe that offset is the offset within the partition, and fulloffset is the offset from the beginning of the disk. The reason they were the same for you is that you are imaging a partition. If...
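In other words (an illustration with made-up numbers, not values from this recovery):

    # If the partition starts at sector 2048 on a 512-byte-sector disk:
    partition_start=$((2048 * 512))      # byte offset of the partition
    offset=123456789                     # offset within the partition
    echo $((partition_start + offset))   # fulloffset: from the start of the disk
    # When imaging a bare partition, partition_start is effectively 0,
    # which is why offset and fulloffset matched here.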
Hello again, I think I've successfully selected only the files of interest in my domain as defined in the previous post, and I would like to explain how I've done it. NB: I've mostly used Windows software because I'm more used to it; I'm guessing that if you're trying to rescue a partition from an NTFS disk, you are too. The use of ddrescue and the ddru_* tools was on an Ubuntu machine; you might be able to do everything on the Linux machine if you know software similar to what I'll be using. NB2: There...
OK, I've read the document you sent me. I pretty much understand why I didn't understand your code at first; the NTFS structure is full of tricks, with information on where to find the information on where to find the information [...] nested in tricky parts with little-endian traps ;). I learned a lot but felt completely disarmed until I noticed that your code already exports the data I need, namely the offset and length of every chunk of data for every file in the debug log. I just had to use...
Hello Scott, don't worry, it was clear when I posted that you no longer maintained the project; that's partly why I was addressing not only you but anyone who was potentially aware of the way the software works with the MFT. Thanks a lot for the very quick reply, I wasn't expecting any activity for days or weeks. I've had a look at your new project; you already answered another thread by mentioning HDDSuperClone, but first it isn't able to create any specific domain from the MFT...
Another hint: the bitmap itself is a file. Look at how it is found and read in the software. Here is a possibly helpful link for you to learn more about the filesystem: http://www.kes.talktalk.net/ntfs/index.html
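As an aside (my addition, using standard ntfsprogs rather than ddrutility): on every NTFS volume the cluster allocation bitmap is the file $Bitmap at MFT record 6, so it can be dumped for inspection like this, with /dev/sdX1 as a placeholder device:

    sudo ntfscat -i 6 /dev/sdX1 > bitmap.bin
    # one bit per cluster: 1 = allocated, 0 = free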
Hi Guillaume, first, I would like to make sure it is clear that I am no longer actively supporting the ddrutility project. I can tell you that what you ask is possible in a way, as I have done it. The public software is based on private software that I wrote to recover files from a friend's drive. But it is very difficult, and my private software required multiple manual steps to do so. And while it did work for what I needed, it was still flawed. It is a big task to figure out how to properly...
Hello, first of all thanks a lot for this tool; I'm only sad that I didn't find it earlier. I've got a failing NTFS HDD from which I'm trying to retrieve important files, but it has a few problems:

- It's erroring on a few sectors, but most of the ddrescue reads are OK (that's the good part)
- On the other hand, it is extreeeeemely slow to read the good sectors, with an average rate under 6 kB/s
- It is 1 TB large and I've calculated it would take the rescue about 5 years to complete; it's already been rescuing...
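That estimate checks out (quick arithmetic, my addition):

    # 1 TB at ~6 kB/s, in years:
    echo $((1000000000000 / 6000 / 86400 / 365))   # -> 5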
results_list.txt has the following format:

    Partition /dev/loop1p1 Type FAT DeviceSector 7902576 PartitionSector 7900528 Cluster 30861 Allocated yes Inode 41183 File /DCIM/Camera/20160706_120700_HDR.jpg
    Partition /dev/loop1p1 Type FAT DeviceSector 7902577 PartitionSector 7900529 Cluster 30861 Allocated yes Inode 41183 File /DCIM/Camera/20160706_120700_HDR.jpg
    ...

So it's not a logfile that could be used by ddrescue directly. If I find a solution, I'll probably post it here in case anyone else comes...
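A rough, untested sketch of my own (not part of ddrutility) for one possible conversion: pull the DeviceSector values out and write a ddrescue domain mapfile that marks just those sectors as the area of interest. It assumes 512-byte sectors, the record layout shown above, and that ddrescue tolerates adjacent blocks of the same status; gaps between bad sectors are written as non-tried ('?') so the mapfile stays contiguous:

    awk 'BEGIN { print "0x0 +"; prev = 0 }          # mapfile status line
         /DeviceSector/ {
             for (i = 1; i < NF; i++)
                 if ($i == "DeviceSector") pos = $(i + 1) * 512
             if (pos < prev) next                   # skip duplicates
             if (pos > prev) printf "0x%X 0x%X ?\n", prev, pos - prev
             printf "0x%X 0x200 +\n", pos           # one sector in the domain
             prev = pos + 512
         }' results_list.txt > domain.map
    # then restrict a rescue pass to those areas (names are placeholders):
    ddrescue --domain-mapfile=domain.map /dev/sdX image.img rescue.map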
I don't really have any experience with the ddru_findbad tool (only the ddru_ntfsfindbad tool) and I haven't tried to convert any findbad results into sector locations (it also looks like the output of ddru_ntfsfindbad must be different from ddru_findbad's). So I'm afraid I can't be much help--hopefully someone else can step in and offer advice.
I already tried the "fill method" following the manual and got a list of broken files. Thanks anyway, Davison. The goal of my attempt using ddru_findbad is to get a list of bad sectors that are not allocated at all, or to figure out exactly which sectors are related to files that are unimportant (or saved in a backup). In the meantime I got it working by just using the -a option of ddru_findbad and setting the partition type to FAT manually. The output shows that all of the bad sectors are allocated....
If your goal is simply to get a list of files that are damaged by incomplete recovery and you are having difficulty with the ddru_findbad tool, you can always fall back to a more brute-force method that I used before discovering the ddrutility toolset. The basics are (a rough sketch follows this list):

1. Mount the rescued image file and create a list of hashes for every file using hashdeep
2. Use ddrescue in fill-mode to fill all non-rescued areas of the image with a character string
3. Create a new list of hashes and then diff the two...
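Here is a minimal sketch of those steps, assuming a partition image; the file names, mount point, and fill string are all illustrative:

    # 1. hash every file in the pristine image
    sudo mkdir -p /mnt/img
    sudo mount -o ro,loop image.img /mnt/img
    (cd /mnt/img && hashdeep -r -l .) > hashes_before.txt
    sudo umount /mnt/img

    # 2. overwrite the bad-sector ('-') areas with a marker string
    printf 'BAADSECT' > fill.txt
    ddrescue --fill-mode=- fill.txt image.img mapfile

    # 3. hash again and diff; files whose hashes changed contain bad areas
    sudo mount -o ro,loop image.img /mnt/img
    (cd /mnt/img && hashdeep -r -l .) > hashes_after.txt
    sudo umount /mnt/img
    diff hashes_before.txt hashes_after.txt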
Hello everyone! I am trying to rescue a broken SD card with exFAT using ddrescue. So far only 375 kB are left in bad sectors which could not be recovered (the rescue has been running for almost a month). Hence, I tried to speed up the rescue process by creating a domain mapfile using ddru_findbad, as I need to return the card to the retailer soon. Using ddru_findbad 1.11 on the Cain 8.0 live CD on the image and logfile created by ddrescue, I received the following (extract of the relevant output): Checking...
To show the deleted files in the output, change line 863 of ddru_ntfsfindbad.c from:...
I believe the problem is not that the disk is too big, but that the files are likely...
I propose to change the line if (type[x] == '-' || type[x] == '/' || type[x] == '*')...
I also find it useful to optionally show deleted files in the ddru_ntfsfindbad l...
Thanks for your toolkit! Having read the topic, I have an idea: Can you please extend...
ddru_ntfsfindbad does not show damaged files on a 3 GB GPT partition. It just creates...
I am not sure if all will work with BSD. I am not even sure if it will compile correctly...
Actually I just read the release notes more closely and did see a reference to you...
I saw a thread from a couple years ago where someone had submitted a patch to remove...
First, I do not see any abnormal errors in the above code. What is not working for...
Or if you can tell me what is the easiest & fastest way to install ddrutility on...
Dear Sir, I am new to Ubuntu, so please forgive me for any silly future mistakes...
I've always found sourceforge harder to work with in general, and then I really didn't...
This sounds like you have a personal issue with sourceforge. When I created this...
I implore you to switch your project to Github for source control. It makes the whole...
OK I thought I was going crazy too. Thanks so much for clarifying that.
The following is from the ddrutility documentation, read the part about the data...
I'm trying to parse the output of ddru_diskutility and I'm not sure I understand...
Maybe a simpler way is to hide the controller from udev to prevent partition recognition?...
Have already performed a simple DMA command :) So I have proof of concept (but would...
Yes, a counter of sequential abnormal events is OK, let it be so :) And what about IDE...
For the ddrescue patch, I think I will add a count to the --mark-abnormal-error option, so...
Well, in practice ABRT can indicate different things -- it depends on the drive firmware....
That is very odd. The ata error of 04 is indeed an abort, meaning the drive basically...
Hello, I use ddrescue with your patch, and in many cases that speeds up the process,...
Prototype can be found here: http://forum.hddguru.com/viewtopic.php?f=7&t=30601 or...
Wow, interesting! But isn't that a headache with chipsets, drivers, etc.? AFAIR libata...
I will look at this when I get a chance. Right now I am working on an advanced disk...
Here is an attachment
Hello, I've noticed something strange. Here I'll post an attachment which contains a...
The fact that I can't remember why I left it out means it must not have been a very...
I will have to look at that deeper then. I guess my documentation did not include...
First the deleted files. Hmmmm, ddru_ntfsfindbad is not supposed to process truly...
Bad(-), non-trimmed(*), and non-split/scraped(/) are supposed to be considered as...
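For reference, since this exchange turns on them, the status characters in a ddrescue mapfile (from the GNU ddrescue manual) are:

    ?   non-tried
    *   non-trimmed (read failed, not trimmed yet)
    /   non-scraped (read failed, not scraped yet)
    -   bad sector
    +   finished (successfully rescued)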
It would be nice to add a "deleted" state to the file/dir name, based on the flag in the MFT file...
Hello, what about counting only '+' areas in the logfile as good? Currently...
BTW, if you want ddrescue (hmmm... it seems it is about 1.19) to copy a drive in...
Maximus57, thank you for your guidance and for suggesting how I should tackle the rest of the...
"copy the image and log file" - Yes, very good idea. Make an extra copy of the log file...
Update: Recovery continued with ddrescue 1.17, drive in laptop and using -n. Latest...
@olegkrutov Your input in this is very welcome! I am trying to give the best advice...
The drive is a WD Black 500 GB - if that makes any difference. I have tried...
I've seen your logfile and strongly advise you to copy your drive in reverse (-R option)...
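For anyone following along, a reverse pass might look like this (a generic illustration with placeholder names; reusing the existing mapfile keeps finished areas from being re-read):

    sudo ddrescue -R /dev/sdX image.img mapfile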
I believe it should be possible to run a second window of ddrescue like that. Just...
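The specifics above are cut off, but as a generic illustration (my own, using standard ddrescue options with placeholder names and sizes, not the exact commands from this thread): two instances can be pointed at different regions of the same drive with -i/-s, each with its own mapfile so they never work on the same blocks:

    # window 1: first half of a 500 GB drive
    sudo ddrescue -i 0 -s 250G /dev/sdX image.img first_half.map
    # window 2: second half, separate mapfile, same output image
    sudo ddrescue -i 250G /dev/sdX image.img second_half.map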
You have been brilliant. Thank you. I have no experience but am enjoying giving this...