Re: [sleuthkit-users] Regular Expressions
From: <slo...@gm...> - 2016-11-15 01:39:09
I favor complex regex, option 1, enhanced with Simson's boundary solution, if possible.

From: Simson Garfinkel
Sent: Monday, November 14, 2016 2:23 PM
To: Brian Carrier
Cc: sle...@li... users
Subject: Re: [sleuthkit-users] Regular Expressions

Brian,

With respect to #1 - I solved this problem with bulk_extractor by using an overlapping margin. Extend each block 1K or so into the next block. The extra 1K is called the margin. Only report hits on a string search if the text string begins in the main block, not if it begins in the margin (because then it is included entirely in the next block). You can tune the margin size to the largest text object that you wish to find with search.

Simson

> On Nov 14, 2016, at 5:14 PM, Brian Carrier <ca...@sl...> wrote:
>
> Making this a little more specific, we seem to have two options to solve this problem (which is inherent to Lucene/Solr/Elastic):
>
> 1) We store text in 32KB chunks (instead of our current 1MB chunks) and can have the full power of regular expressions. The downside of the smaller chunks is that there are more boundaries and places where a term could span a boundary, and we could miss a hit if it spans that boundary. If we needed to, we could do some fancy overlapping. 32KB of text is about 12 pages of English text (less for non-English).
>
> 2) We limit the types of regular expressions that people can use and keep our 1MB chunks. We'll add some logic into Autopsy to span tokens, but we won't be able to support all expressions. For example, if you gave us "\d\d\d\s\d\d\d\d" we'd turn that into a search for "\d\d\d \d\d\d\d", but we wouldn't be able to support a search like "\d\d\d[\s-]\d\d\d\d". Well, we could in theory, but we don't want to add crazy complexity here.
>
> So, the question is whether you'd rather have smaller chunks and the full breadth of regular expressions, or a more limited set of expressions and bigger chunks. We are looking at the performance differences now, but wanted to get some initial opinions.
>
>> On Nov 14, 2016, at 1:09 PM, Brian Carrier <ca...@sl...> wrote:
>>
>> Autopsy currently has a limitation when searching for regular expressions: spaces are not supported. It's not a problem for email addresses and URLs, but it becomes an issue for phone numbers, account numbers, etc. This limitation comes from using an indexed search engine (since spaces are used to break text into tokens).
>>
>> We're looking at ways of solving that and need some guidance.
>>
>> If you write your own regular expressions, can you please let me know and share what they look like? We want to know how complex the expressions are that people use in real life.
>>
>> Thanks!
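Below is a minimal sketch (in Java, since Autopsy is Java-based) of the overlapping-margin idea Simson describes above. The class name, block and margin sizes, and the plain-String interface are illustrative assumptions, not Autopsy's or bulk_extractor's actual code; the only point is that a hit is reported when the match starts in the main block, never when it starts in the margin, because such a match will start inside the next block's main region.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only; names and sizes are assumptions, not Autopsy's API.
public class MarginChunkSearch {

    static final int MAIN_BLOCK_SIZE = 32 * 1024; // main block of indexed text
    static final int MARGIN_SIZE = 1024;          // overlap borrowed from the next block

    // Search large text chunk by chunk. Each chunk is the main block plus a
    // margin taken from the following block. A hit counts only if it starts
    // inside the main block; hits that start in the margin are skipped here,
    // because they start inside the next chunk's main block and are found there.
    static List<String> search(String text, Pattern pattern) {
        List<String> hits = new ArrayList<>();
        for (int start = 0; start < text.length(); start += MAIN_BLOCK_SIZE) {
            int mainEnd = Math.min(start + MAIN_BLOCK_SIZE, text.length());
            int chunkEnd = Math.min(mainEnd + MARGIN_SIZE, text.length());
            Matcher m = pattern.matcher(text.substring(start, chunkEnd));
            while (m.find()) {
                // m.start() is relative to the chunk; convert to an absolute offset.
                if (start + m.start() < mainEnd) {
                    hits.add(m.group());
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        Pattern phone = Pattern.compile("\\d{3}[\\s-]\\d{4}");
        System.out.println(search("call 555 1234 or 555-9876", phone));
    }
}

As Simson notes, the margin size bounds the longest match the search can find, so it would be tuned to the largest text object of interest.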
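For comparison, here is a rough sketch of what the limited, token-spanning rewrite in option 2 might look like: a pattern that uses a bare \s between token-shaped pieces is split at the \s and matched against consecutive tokens from the index. The class and method names are hypothetical and this is not how Autopsy would necessarily implement it; a pattern such as \d\d\d[\s-]\d\d\d\d cannot be split this way, which is the limitation Brian mentions.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of option 2, not Autopsy's implementation.
public class TokenSpanningSearch {

    // Split a restricted pattern like "\d\d\d\s\d\d\d\d" on the literal \s
    // into one sub-pattern per token.
    static List<Pattern> splitOnWhitespace(String regex) {
        List<Pattern> parts = new ArrayList<>();
        for (String piece : regex.split("\\\\s")) {
            parts.add(Pattern.compile(piece));
        }
        return parts;
    }

    // Report a hit when consecutive tokens match the sub-patterns in order.
    static boolean matchesAt(List<String> tokens, int i, List<Pattern> parts) {
        if (i + parts.size() > tokens.size()) {
            return false;
        }
        for (int k = 0; k < parts.size(); k++) {
            if (!parts.get(k).matcher(tokens.get(i + k)).matches()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Pattern> parts = splitOnWhitespace("\\d\\d\\d\\s\\d\\d\\d\\d");
        List<String> tokens = Arrays.asList("call", "555", "1234", "today");
        for (int i = 0; i < tokens.size(); i++) {
            if (matchesAt(tokens, i, parts)) {
                System.out.println("hit at token " + i);
            }
        }
        // A pattern like "\d\d\d[\s-]\d\d\d\d" has no single literal \s to
        // split on, which is why option 2 restricts the supported expressions.
    }
}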