From: Javier G. <god...@sp...> - 2003-03-27 15:52:21
Axelle,

Here is our answer to your detailed questions...

> - Regarding legal evidence:
> The document says that logs produced by computers are not admissible
> as evidence unless it can be shown that they have not been modified.

Here we were citing section 69 of the Police and Criminal Evidence Act,
a British law governing the admissibility of computer-generated
information.

> ==> is this a requirement specific to the U.S, or a general
> requirement from C2 specs ? (I think it's the first).

In the US, logs produced by computers are generally submitted as
hearsay evidence; this is not part of a requirement for C2. In talking
with several legal forensics experts, though, up to this point there
have been no major challenges to the validity of the data.

> ==> to my understanding, this is not exactly true. I had the feeling
> that the court retained evidence with different levels of trust
> regarding the evidence. For instance a signed text would have more
> impact than unsigned text, but all texts were admissible in front of
> the court. I am unsure of this. Any feedback ? And as a matter of
> fact, I had also heard of cases where computer data had been retained
> as legal evidence, though that data did not have any digital
> signature for instance. Have you heard this too ? Have laws changed
> since ?

Again, from what we've heard from the "legal experts," log data is
typically entered as evidence, but we're concerned about the situation
that might arise from someone challenging the validity of the data,
since the case could be made that most log data can easily be
modified. We want to provide a way to produce audit logs that are
forensically sound, similar to the way physical evidence is collected.

> ==> more exactly, do you mean somebody has to prove computer data has
> not been modified (-- meaning it is unfeasible without detection), or
> do you mean that for data not to be retained as evidence one should
> prove it has been modified ? can you provide more references about
> those facts?

Besides providing C2-type data, what we're shooting for is similar to
what physical forensics examiners have to do: maintain a chain of
custody. We're developing a way to "maintain a chain of custody" of
audit data from the kernel to storage on disk. We want the data
protected and the process shown to be forensically sound, so that the
data recorded will stand up to any challenge in court.

> **************
> - about the "little files" that are kept on the client before being
> sent to the logging server:
> ===> are they digitally signed ? wouldn't it be possible for an
> intruder to corrupt those files before they get sent to the logging
> server ?

There were lots of trade-offs with the little files. Currently they
are not signed (performance issues); however, we expect to implement a
process that will sign the "little files" before they are sent to the
server. The signature will be used to verify their integrity while on
the client. The signature should not be required on the data during
transmission or once stored, so we do not see a requirement to
actually send the signatures.
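To make the "sign the little files" and "chain of custody" ideas
concrete, here is a minimal illustrative sketch (not our actual
implementation; the function names are invented) of chaining a keyed
MAC (HMAC) through the little files, so that each file's tag depends
on every file before it. Where non-repudiation is needed, a public-key
signature could replace the HMAC:

    # Illustrative only; not the SAL implementation. Chains an HMAC
    # tag through a sequence of "little files" so that altering,
    # dropping, or reordering any file breaks verification of every
    # later tag.

    import hmac
    import hashlib

    def tag_little_file(key, prev_tag, data):
        # Each tag covers the previous tag plus this file's contents,
        # linking the files into a tamper-evident chain.
        return hmac.new(key, prev_tag + data, hashlib.sha256).digest()

    def verify_chain(key, files, tags):
        # Recompute the chain from the start; compare each stored tag.
        prev = b"\x00" * 32      # agreed initial value for the chain
        for data, tag in zip(files, tags):
            expected = tag_little_file(key, prev, data)
            if not hmac.compare_digest(expected, tag):
                return False     # chain broken at or before this file
            prev = tag
        return True

One property worth noting: with chained tags, verifying the most
recent file vouches for the entire sequence before it, which is the
sense in which this resembles a chain of custody.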
> **************
> - performance issues:
> ==> I want to make sure. If the audit daemon is stopped (auditd) on
> the client, then actually, the system calls continue to be audited
> and to fill the kernel buffer space allocated for this. If the
> daemon is stopped too long, then audited data may be lost. On the
> other hand, if the audit daemon is restarted before the buffer is
> filled, then actually nothing is lost. Right ?

That's the way it's designed; however, we do not believe that is
currently the way it's implemented, at least from the default scripts'
point of view. Currently the kernel buffers data for a period of time.
When auditd runs, it sits in a loop and empties the kernel's buffer
space, then stores the data to disk (a rough sketch of this loop is in
the P.S. below). If auditd is killed, the buffer space will fill up
again. When auditd is restarted, we believe that it clears the buffer
space and begins the process again. We are looking at a number of
security issues on the client with boundary conditions.

> ==> there seem to have been some benchmark tests. Would it be
> possible to publish those results ? What's the overhead induced by
> SAL ? how were the benchmarks performed ?

We have no benchmark tests to release at this time...

The SAL Team
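P.S. For reference, a rough illustrative sketch of the drain loop
described above (this is not the real auditd source; the device and
log paths are invented for the example):

    # Illustrative only; not the real auditd source. Shows the loop
    # described above: drain the kernel's buffer space and append the
    # records to disk.

    import time

    KERNEL_BUF = "/dev/audit"            # hypothetical audit device
    LOG_FILE = "/var/log/sal/audit.log"  # hypothetical log file

    def drain_loop():
        with open(KERNEL_BUF, "rb") as buf, \
             open(LOG_FILE, "ab") as log:
            while True:
                chunk = buf.read(4096)   # empty the kernel buffer
                if chunk:
                    log.write(chunk)     # store the data to disk
                    log.flush()
                else:
                    time.sleep(0.1)      # nothing pending; poll again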