Thread: [sleuthkit-users] (no subject)
From: Brent D. <bre...@te...> - 2002-07-23 21:40:35
|
Hello (primarily Brian, since probably no one else is signed up yet),

I'm rethinking the fundamental way I'm going about doing a keyword search. I'm using autopsy/task to do the searches (well - the same commands). I'm getting the strings output of the entire image with decimal offsets (strings -a -t d <image>). This is a large image, mostly free space, and the file system is FAT. I then run my keyword searches against the resulting strings file.

My question: what if a keyword falls across a cluster boundary? Example: I'm searching for "Forensics Investigator" and it just so happens that "Forensics" is in a different cluster than "Investigator" - the current method would not catch this.

First - should I even worry about this? Second - I could make my search strings redundant (search for "Forensics Investigator" as well as "Investigator" or "Investigat" or something). Third - the surefire method - mount the image read-only, recurse through it, and run strings on each file - and recover deleted files and run strings on each of them as well.

Thoughts?

Brent Deterding GSEC, GCFW, GCIA, GCIH, RHCE Security Engineer TechGuard Security E-Mail: bre...@te... Phone: (636) 519-4848 "NOTE: EMAIL IS NOT NECESSARILY SECURE" |
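A rough sketch of the third option (loopback-mount the image read-only and run strings per file). The image name, mount point, and GNU strings/grep options are illustrative assumptions, not taken from Brent's setup:

# mount the FAT image read-only over a loop device (paths are examples)
$ mkdir -p /mnt/case
$ mount -o ro,loop image.dd /mnt/case
# strings -f prefixes each hit with the file it came from, so a phrase that
# spans a cluster boundary inside an allocated file is still found
$ find /mnt/case -type f -print0 | xargs -0 strings -a -f | grep -i "forensics investigator"
$ umount /mnt/case

This only covers allocated files, of course; deleted content would still have to be recovered first and run through strings separately, as Brent suggests.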
From: Carlton F. <c.a...@LA...> - 2006-03-27 15:14:34
|
I was asked to create an image of a system a couple of weeks ago but was told not to investigate it. I used dcfldd over netcat on a crossover cable to image the system. I created MD5 hashes of the source and the image, and they matched. I did a physical image, not a logical one.

Today I have been asked to investigate the image. However, the partition table appears to be bad. I am getting warnings from fdisk saying partition 1 has different logical/physical endings, and partition 2 has different beginnings and endings. I can't figure out how to extract the logical (partition) images, and we no longer have access to the source system. Can anyone provide any help? -- |
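For readers following along, roughly what a dcfldd-over-netcat acquisition looks like. The host address, port, and hashing options are illustrative assumptions, not necessarily the exact command lines used here:

# on the collection machine (listening side; the -l -p syntax is for traditional netcat)
$ nc -l -p 9000 | dcfldd of=system.dd hash=md5 hashlog=image.md5
# on the source system, booted from trusted media
$ dcfldd if=/dev/sda hash=md5 hashlog=source.md5 | nc 10.0.0.2 9000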
From: <gim...@we...> - 2006-03-28 13:00:46
|
On Mon, 27 Mar 2006 10:14:23 -0500 Carlton Foster <c.a...@LA...> wrote: > I was asked to create an image of a system a couple of weeks ago but > told not to investigate it. I used dcfldd over netcat on a crossover > cable to image the system. I created MD5's of the source and image, > and both matched. > > I did a physical image, not logical. > > Today, I have been asked to investigate the image. However, the > partition table appears bad. > > I am getting warnings from fdisk saying Partition 1 has different > logical/physical endings. Then Partition 2 has different beginnings > and endings. I can't figure out how to get the logical images > extracted, and we no longer have access to the source system. > > Can anyone provide any help? > -- Try out this one: http://www.cgsecurity.org/wiki/TestDisk From the summary: "If you have missing partitions or a completely empty Partition Table, TestDisk can search for partitions and create a new Table or even a new MBR if necessary." regards |
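If TestDisk does not pan out, The Sleuth Kit's own mmls can sometimes still parse a partition table that fdisk complains about, and the partitions can then be carved out by sector offset. A hedged sketch; the image name and sector numbers are only examples:

$ mmls -t dos disk.dd
# suppose mmls reports a partition starting at sector 63 with a length of 20964762 sectors
$ dd if=disk.dd of=part1.dd bs=512 skip=63 count=20964762
$ fsstat part1.dd

Recent TSK file system tools also accept a sector offset directly (e.g. fsstat -o 63 disk.dd), which avoids carving a separate partition image.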
From: <lh...@li...> - 2006-12-06 07:51:42
|
Where can I find the INSTALL document referred to in the README in sleuthkit-win32-2.06r2? |
From: Brian C. <ca...@sl...> - 2006-12-06 22:41:14
|
The README in the windows zip file is from the Unix version. The README-win32.txt is the Windows-specific one.

There is no INSTALL file in the Windows version since it simply contains the executables. There is nothing to install.

brian

On Dec 6, 2006, at 2:51 AM, Lars Håkansson wrote:
> Where can I find the INSTALL document referred to in the README in sleuthkit-win32-2.06r2? |
From: yong J. <yo...@ya...> - 2007-05-25 08:06:26
|
Hi,

I want to remove the HPA on a drive under kernel 2.6.18. I am using a 250 GB SATA drive that includes an 11 GB HPA. I used hdparm 7.3 to get some information, but nothing in its output refers to the hidden area: it reports a device size of 239 GB, which does not include the HPA. Could you please tell me which program or commands to use to remove the HPA under Linux?

[root@localhost hdparm-7.3]# ./hdparm -I /dev/sda

/dev/sda:
ATA device, with non-removable media
        Model Number:       SAMSUNG SP2504C
        Serial Number:      S0H9J1QP400136
        Firmware Revision:  VT100-50
Standards:
        Used: ATA/ATAPI-7 T13 1532D revision 4a
        Supported: 7 6 5 4
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors:  467348955
        device size with M = 1024*1024:      228197 MBytes
        device size with M = 1000*1000:      239282 MBytes (239 GB)
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 16
        Recommended acoustic management value: 254, current value: 128
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5 udma6 udma7
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    NOP cmd
           *    DOWNLOAD_MICROCODE
           *    SET_MAX security extension
           *    Automatic Acoustic Management feature set
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
           *    General Purpose Logging feature set
           *    Segmented DOWNLOAD_MICROCODE
           *    SATA-I signaling speed (1.5Gb/s)
           *    SATA-II signaling speed (3.0Gb/s)
           *    Native Command Queueing (NCQ)
           *    Host-initiated interface power management
           *    Phy event counters
                DMA Setup Auto-Activate optimization
                Device-initiated interface power management
           *    Software settings preservation
           *    SMART Command Transport (SCT) feature set
           *    SCT Long Sector Access (AC1)
           *    SCT LBA Segment Access (AC2)
           *    SCT Error Recovery Control (AC3)
           *    SCT Features Control (AC4)
           *    SCT Data Tables (AC5)
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
                frozen
        not     expired: security count
                supported: enhanced erase
        88min for SECURITY ERASE UNIT. 88min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost hdparm-7.3]#

Friendly,
Nobel |
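One route that may work, assuming the drive and the libata/SATA driver let SET MAX ADDRESS commands through (not always the case with 2.6-era kernels). The sector counts below are placeholders; read the real native max from the drive first:

# show current versus native max sector count (output format varies by hdparm version)
$ hdparm -N /dev/sda
# temporarily lift the HPA until the next power cycle (NNNNNNNNN = native max sectors)
$ hdparm -N NNNNNNNNN /dev/sda
# prefix the count with 'p' to make the change permanent; newer hdparm builds
# may also insist on --yes-i-know-what-i-am-doing for this
$ hdparm -N pNNNNNNNNN /dev/sda

The Sleuth Kit releases from around that time also shipped disk_stat and disk_sreset, which detect and temporarily remove an HPA, and may be worth trying if hdparm's -N is not usable.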
From: Lehr, J. <jl...@sl...> - 2009-11-05 00:44:09
|
Hi All,

I've been working on a faster way to map unallocated blocks extracted with blkls back to disk blocks in the partition. The TSK tool blkcalc can do this, but it is quite slow, as the man page notes. I figured that with some simple math and a little metadata I could do it faster manually. Here's what I did:

1) Extracted unallocated ASCII and Unicode strings from the blkls output with something like:
$ blkls partition.dd | (tee >/dev/null >(strings -td) >(strings -td -el)) > partition.strings

2) Grepped the partition strings for "search term" and found it at byte offset #####. Decided I needed to look at the original block.

3) Created a list of unallocated blocks with:
$ blkls -l partition.dd > unalloc.block.list

4) Determined the block offset of my "search term" by dividing the byte offset by the file system block size reported by fsstat:
$ echo $(( ##### / 4096 ))
##

5) Determined the block address of the unallocated block in the partition by reading line ## of unalloc.block.list (the tail command removes the three lines of header from the blkls -l output):
$ tail -n +4 unalloc.block.list | cat -n | grep -m1 ^##
#####|f
where "#####" is the block ID of the unallocated block in the partition (almost - see below).

With the exception of making the strings file and the block list, this process takes only a minute or so to complete on a 250 GB partition with 137 GB of unallocated space. And it can be scripted, of course.

That said, what I have found is that the block address retrieved in step 5 is short by one block. Easy to compensate for, but I could use some help understanding why. Anybody have an explanation?

Thanks,
John
______________________________
John Lehr
Evidence Technician
San Luis Obispo Police Department
______________________________ |
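A scripted sketch of steps 2-5, with one guess at the off-by-one: the byte-offset division yields a zero-based block index, while cat -n (and sed below) count lines from 1, so the index needs +1 before being used as a line number. The values assigned to the variables are placeholders:

BYTE_OFFSET=123456789   # offset of the hit inside partition.strings (i.e. the blkls stream)
BLOCK_SIZE=4096         # file system block size reported by fsstat
LINE=$(( BYTE_OFFSET / BLOCK_SIZE + 1 ))   # +1 converts the zero-based block index to a line number
tail -n +4 unalloc.block.list | sed -n "${LINE}p"   # skip the three header lines, print that entry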
From: vattini g. <ha...@ya...> - 2010-07-29 01:57:11
|
Hi all, I'm raising this install problem as a point about the future. Brian, I'll offer an interpretation of your point: sleuthkit, as a forensics tool, needs to be complete enough to stand as a de facto standard tool. What does that mean? It means that a counterpart in a case could argue that the tool does not meet the standard, and so on. My opinion is that all of that is dogma: there are no perfect tools, and in analysis it is imagination, skill, and a little luck that have proven more effective than anything else. So, with apologies to all of you, a bit of balance would not hurt. Ciao. Thank you very much for your astonishing work - will there be any implementation of the ZFS filesystem in the future? On Jul 27, 2010, at 1:06 PM, Simson Garfinkel wrote: > The current version of SleuthKit doesn't work with the current version of libewf. |
From: Gerald M. <GM...@ge...> - 2011-12-06 14:56:05
|
Sorry to bother you... I tried everything to resolve this issue, but am getting nowhere. I'm running Ubuntu inside VMPlayer 4.0 and have installed Sleuthkit (3.1.3-1) and Autopsy (2.24-1). I cannot get autopsy to execute: I get a failure to connect to localhost:9999/autopsy. I've read many Internet postings about this problem, but nothing seems to work. From the Ubuntu terminal I can ping localhost and/or 127.0.0.1. I know I'm probably doing something dumb. Any help you can provide would be greatly appreciated. Thanks, Jerry Miller Gerald C. Miller IST Assistant Professor Cisco Regional Academy Director Germanna Community College Security+ (540) 891-3038 |
From: Stefan K. <sk...@bf...> - 2011-12-06 15:12:34
|
Gerald, > I cannot get autopsy to execute. I get a failure to connect to localhost:9999/autopsy. Which browser are you using? Do you get any error messages/warning at all? Are there any corresponding entries in autopsy.log? Is a local firewall blocking access to port 9999? Cheers, Stefan. -- Stefan Kelm <sk...@bf...> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstrasse 100 Tel: +49-721-96201-1 D-76133 Karlsruhe Fax: +49-721-96201-99 |
From: Mark W. J. <mar...@cc...> - 2011-12-06 15:50:00
|
Hey Gerald, Is autopsy running? (Run: ps -ef | grep autopsy) Is it a browser problem or something in autopsy? ('telnet localhost 9999' then type 'GET / HTTP/1.0' and hit enter a few times. What happens?) Good luck, MJ On 12/06/2011 09:43 AM, Gerald Miller wrote: > Sorry to bother you... I tried everything to resolve issue, but am > getting nowhere. > I’m running Ubuntu inside VMPlayer 4.0 and have installed Sleuthkit > (3.1.3-1) and Autopsy (2.24-1) > I cannot get autopsy to execute. I get a failure to connect to > localhost:9999/autopsy. I've read many Internet postings relative to > this problem, but nothing seems to work. From Ubuntu terminal I can ping > localhost and/or 127.0.0.1. I know I’m probably doing something dumb. > Any help you can provide would be greatly appreciated. Thanks, Jerry Miller |
From: Angus M. <an...@n-...> - 2011-12-06 15:56:52
Attachments:
smime.p7s
|
Are you actually executing the autopsy script ? What messages does it display when it starts ? Which machine is running the browser ? On 6 Dec 2011, at 14:43, Gerald Miller wrote: > Sorry to bother you... I tried everything to resolve issue, but am getting nowhere. > I’m running Ubuntu inside VMPlayer 4.0 and have installed Sleuthkit (3.1.3-1) and Autopsy (2.24-1) > I cannot get autopsy to execute. I get a failure to connect to localhost:9999/autopsy. I've read many Internet postings relative to this problem, but nothing seems to work. From Ubuntu terminal I can ping localhost and/or 127.0.0.1. I know I’m probably doing something dumb. Any help you can provide would be greatly appreciated. Thanks, Jerry Miller > > Gerald C. Miller > IST Assistant Professor > Cisco Regional Academy Director > Germanna Community College > Security+ > (540) 891-3038 |
From: Gerald M. <GM...@ge...> - 2011-12-06 17:51:11
|
Everyone... thanks for quick reply. I'm running FireFox with no FW. I get this error message when I try to execute autopsy. " Can't open log: autopsy.log at /usr/share/autopsy/lib/Print.pm line 383" When I go into the perl script at line 383 this is what I see: " Open AUTOLOG,">>$::LOCKDIR|$lname" " outside quotes are mine As you can see I teach IST, mostly Cisco. We are trying to setup some forensics classes using open source software. We are a small community college and can't afford the commercial software. I'm not a programmer so I'm lost. Again, thanks for your reply. I'll keep trying. Jerry Gerald C. Miller IST Assistant Professor Cisco Regional Academy Director Germanna Community College Security+ (540) 891-3038 |
From: Angus M. <an...@n-...> - 2011-12-06 18:00:06
Attachments:
smime.p7s
|
OK - so the user running autopsy doesn't have write permission for the directory you set as the evidence locker, hence autopsy isn't starting. On 6 Dec 2011, at 17:50, Gerald Miller wrote: > Everyone... thanks for quick reply. > > I'm running FireFox with no FW. > > I get this error message when I try to execute autopsy. > > " Can't open log: autopsy.log at /usr/share/autopsy/lib/Print.pm line 383" > > When I go into the perl script at line 383 this is what I see: > > " Open AUTOLOG,">>$::LOCKDIR|$lname" " outside quotes are mine > > As you can see I teach IST, mostly Cisco. We are trying to setup some forensics classes using open source software. We are a small community college and can't afford the commercial software. I'm not a programmer so I'm lost. Again, thanks for your reply. I'll keep trying. Jerry > > Gerald C. Miller > IST Assistant Professor > Cisco Regional Academy Director > Germanna Community College > Security+ > (540) 891-3038 |
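A hedged fix along those lines, assuming the Debian/Ubuntu default locker path of /var/lib/autopsy; substitute whatever directory was chosen when autopsy was started:

# either take ownership of the existing evidence locker ...
$ sudo chown -R $USER /var/lib/autopsy
# ... or start autopsy against a directory you already own
# (the -d evidence-locker option should appear in autopsy's usage output)
$ mkdir -p ~/evidence_locker
$ autopsy -d ~/evidence_locker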
From: Greg G. <gre...@ya...> - 2012-03-20 01:30:53
|
Greetings,

I have quite a bit of experience with Windows and *nix and have recently been working with MacOS X. What strikes me as immediately noticeable about the most recent timeline of a MacOS X HFS+ image is that there is absolutely no deleted file or directory information of the kind usually printed when creating timelines of Windows or Linux systems, such as "deleted" or "deleted-realloc" in the case of inode reallocation. I find nothing of the sort in the timeline for the MacOS disk. Seeing as I have retrieved some files from unallocated space, I am concerned that the timeline output is inaccurate, and I am wondering whether lines are normally marked to signify deletions and reallocations of files and directories when working with HFS+, as they are with NTFS or ext.

In general, I am interested in learning whether the timeline will show every path that has existed on the MacOS X HFS+ partition regardless of whether it has been deleted, or whether I should assume I am missing data that would normally appear in a properly rendered timeline for other formats because this is a Mac disk.

Thanks in advance for any reply,
Greg |
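For reference, a typical TSK timeline run looks something like the following; the sector offset and file names are examples rather than Greg's actual commands:

$ fls -o 409640 -r -m / macos_image.dd > body.txt
$ mactime -b body.txt -d > timeline.csv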
From: Judson P. <jp...@at...> - 2012-03-20 03:57:52
|
Greg, The short answer is that SleuthKit does not support obtaining any information about deleted files on HFS+. The reason for this (the long answer) is that HFS+ does not use a mark-as-deleted system for deleting files. In HFS+, both file metadata and the disk's entire directory structure are stored in a b-tree file, the Catalog B-Tree. (Technically, in the leaf nodes of the b-tree.) When files are deleted, their entries are removed from the b-tree and the leaf node is rewritten. The leaf nodes always appear to be compact -- that is, there are no gaps between entries where deleted entries might live. So, you would only possibly see deleted-file information in the unused space at the end of a node or in unused nodes (which could be produced by deleting a directory). That's only the case if Mac OS X doesn't zero those areas when rewriting them. I haven't looked into whether such an approach would actually produce usable data. -- Judson ________________________________ From: Greg Grasmehr [mailto:gre...@ya...] Sent: Monday, March 19, 2012 9:31 PM To: sle...@li... Subject: [sleuthkit-users] (no subject) Greetings, I have quite a bit of experience with Windows and *nix and have been recently working with MacOS X. What strikes me as immediately noticeable about the most recent timeline of a MacOS X hfs+ image is that there is absolutely no file or directory deleted information as would usually be printed when creating timelines of Windows or Linux systems. Such as deleted or deleted-realloc in case of inode reallocation. I find nothing of the sort in the timeline for the MacOS disk. Seeing as I have retrieved some files from unallocated space, I am concerned that the timeline output is inaccurate and am wondering if it is usually the case where lines will be marked to signify deletions and reallocations for files and directories when working with hfs+ similar to NTFS or ext In general I am interested in learning if the timeline will show every path that has existed on the MacOS X hfs+ partition regardless if it has been deleted, or if I can be sure I am missing data generally found in a properly rendered timeline when dealing with other formats because this is a Mac disk. Thanks in advance for any reply, Greg |
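A quick way to see this in practice (the offsets and image names below are illustrative): fls -rd lists only deleted entries, and on an HFS+ image it typically comes back empty for the reason described above, while an NTFS or ext image usually returns rows:

$ fls -o 409640 -rd macos_image.dd
$ fls -o 2048 -rd ntfs_image.dd | head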
From: Stefan K. <sk...@bf...> - 2012-03-20 11:24:00
|
Judson, > The reason for this (the long answer) is that HFS+ does not use a > mark-as-deleted system for deleting files. In HFS+, both file metadata and > the disk's entire directory structure are stored in a b-tree file, the > Catalog B-Tree. (Technically, in the leaf nodes of the b-tree.) When files > [...] Thanks a lot for sharing that information. Do you have a source for that info, or to put it differently, are there any (current) forensics books on HFS+? Cheers, Stefan. -- Stefan Kelm <sk...@bf...> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstrasse 100 Tel: +49-721-96201-1 D-76133 Karlsruhe Fax: +49-721-96201-99 |
From: Judson P. <jp...@at...> - 2012-03-20 13:24:16
|
> Thanks a lot for sharing that information. Do you have a source for > that info, or to put it differently, are there any (current) forensics > books on HFS+? I've used Apple's Tech Note 1150, which is out of date but otherwise good; Amit Singh's "Mac OS X Internals"; and the xnu source code, which contains Apple's HFS+ driver. Documentation on HFSX can also be used. The two filesystems are identical except for their two-byte signature and for the fact that HFSX can optionally be case-sensitive (whereas HFS+ is always case-insensitive). -- Judson |
From: Stefan K. <sk...@bf...> - 2012-03-20 15:51:10
|
Thanks to all who responded so swiftly. Much appreciated! Cheers, Stefan. -- Stefan Kelm <sk...@bf...> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstrasse 100 Tel: +49-721-96201-1 D-76133 Karlsruhe Fax: +49-721-96201-99 |
From: Greg G. <gre...@ya...> - 2012-03-20 16:01:49
|
I second that sentiment! Greg On Mar 20, 2012, at 08:50PDT, Stefan Kelm wrote: > Thanks to all who responded so swiftly. Much appreciated! > > Cheers, > > Stefan. > > -- > Stefan Kelm <sk...@bf...> > BFK edv-consulting GmbH http://www.bfk.de/ > Kriegsstrasse 100 Tel: +49-721-96201-1 > D-76133 Karlsruhe Fax: +49-721-96201-99 |
From: Greg G. <gre...@ya...> - 2012-03-20 15:59:25
|
Hey thanks much for that information, I appreciate it. Greg On Mar 19, 2012, at 20:42PDT, Judson Powers wrote: > Greg, > > The short answer is that SleuthKit does not support obtaining any > information about deleted files on HFS+. |