openpacket-devel Mailing List for OpenPacket Tools (Page 7)
Brought to you by: crazy_j, taosecurity
This list is closed, nobody may subscribe to it.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2006 | | | | | | | 25 | 29 | 6 | 4 | | |
| 2007 | 4 | | 8 | | | | 1 | 2 | 3 | 27 | 3 | 1 |
| 2008 | 19 | 16 | 4 | 8 | 3 | 15 | 10 | | | | | |
| 2009 | 5 | | 1 | | | | | | | | | |
|
From: David Belle-I. <ml...@im...> - 2006-08-04 15:54:58
|
Hi, I just realised that this will not work if multiple users try to log in with the same moderator user. So, I'll stay close while the demo is going on, and I'll try to manually assign you moderator rights. Thanks, David |
|
From: David Belle-I. <ml...@im...> - 2006-08-04 15:18:08
|
Hi everyone,

You'll have a chance to test the version of openpacket that I started to develop. The server will be open Friday and Monday.

First, it's important that you know that I don't like the design; I use it just to be able to show what I developed. Second, known bugs: when you upload a file and ask to change the IP addresses, the checksums are still incorrect. Third, moderators: when a file is uploaded, a moderator needs to approve it before others can see it. For this test, a user "moderator" will be created with the password "moderator".

The server will be open Friday 4pm to 11pm and Monday 1pm to 5pm, at http://roach4.no-ip.org:8000

I would like to know everyone's opinion of the web site; hopefully everyone who tries it will write a little report on the mailing list (bugs you might find, ideas you think could be useful, and so on).

Thanks everyone, David |
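The stale-checksum bug David mentions comes down to recomputing the RFC 1071 Internet checksum after the addresses are rewritten. A minimal sketch of that recomputation (this is not David's parser; the function names are illustrative):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero octet
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def fix_ipv4_checksum(header: bytes) -> bytes:
    """Zero the checksum field (header bytes 10-11), then recompute it.
    `header` is assumed to be exactly the IPv4 header (IHL * 4 bytes)."""
    zeroed = header[:10] + b"\x00\x00" + header[12:]
    checksum = struct.pack("!H", internet_checksum(zeroed))
    return zeroed[:10] + checksum + zeroed[12:]
```

Any anonymizer that rewrites addresses would call something like `fix_ipv4_checksum` on each packet's IP header before writing the record back out (and recompute the TCP/UDP checksums too, since those cover the addresses via the pseudo-header).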
|
From: Anthony J. <an...@pf...> - 2006-08-04 00:54:36
|
Tim Furlong wrote:
> I think the major questions are whether ZFS will solve all the issues, whether it will solve them better than any of the other possible solutions, and whether it's worth either changing over to OpenSolaris or porting ZFS to FreeBSD ourselves (I suspect the latter will be a rather large job).

There is a thread on freebsd-hackers dealing with ZFS: http://docs.freebsd.org/cgi/getmsg.cgi?fetch=81033+0+archive/2006/freebsd-hackers/20060528.freebsd-hackers It looks like ZFS is not something to count on under FreeBSD anytime soon. ant |
|
From: Tim F. <fu...@cc...> - 2006-08-03 20:54:03
|
On 8/3/06, Jacob Ham <ha...@gm...> wrote:
> Indeed, I had ZFS in mind for a file system. It is extremely expandable, fast, provides data integrity, and has low-CPU-usage compression. You can read more about it here if interested: http://www.opensolaris.org/os/community/zfs/ . I think it really doesn't matter now, but if we grow to hundreds of gigs of data, it will definitely be something to think about.

It looks good; I just have two questions. First, has it already been ported to FreeBSD, or would we have to run Solaris 10? My impression was that Richard is fairly keen on using FreeBSD as a platform, and the official ZFS FAQ says there are no official plans to port it to anything other than Solaris 10. Second, is anyone here knowledgeable about the CDDL (the license for OpenSolaris)? I took a quick look, but I'm not familiar with it, and I'd feel more comfortable if someone (preferably a lawyer, and preferably not one retained by Sun) could tell us what we should be looking out for. In fact, that's probably a good idea regardless of what we use; I'm not familiar enough even with the GPL and LGPL to know what the gotchas are as far as designing a publicly accessible system like this. For instance, the CDDL FAQ suggests that there may be issues with statically linking source files that are under different licenses.

> Another option would be to gather meta data once uploaded, gzip it once, and always serve the file compressed. The only problem with this is if we ever decide to reference captures in line (on the site, instead of having to download and open the capture in Wireshark). Say someone wants to describe a capture in detail: he could reference lines 10-29, describe them, then move to 30-45 (assuming we had a system in place like this).

I think there are ways around that; for instance, the reviewer could just upload Wireshark screenshots, or the analysis submission could allow the user to specify packet numbers, then fill in the blanks by decompressing the file, extracting the info for the desired packets into the DB, and then recompressing the file. It'd be best to do that offline, though, which would just mean that the interface presenting analyses would have to recognize a not-yet-complete operation and display <packet info pending> or something.

> If we cache the most requested ones, it will be faster, but then we are back where we are now... How would we store the cached files? What if there are 1000s of popular files we cache?

I haven't looked, but I suspect that it's possible, with PHP or Ruby or directly through an Apache module or such, to bypass the filesystem entirely and just have the web interface fetch the data straight from the DB. That would still involve the fs, of course, since the DB would be housed there, but it would be optimized by the DB software. It's easy enough to do in Perl at least: you just output the appropriate HTTP header and dump the data down the pipe, regardless of where the data comes from. I don't expect that it would be much harder in the other frameworks.

If you're worried about the sheer number of files on the filesystem: with ext2/ext3 at least, you can set the number of inodes created when you set up the filesystem; if you're going to have lots of small files, you create more than the default number of inodes (4% of the filesystem or something like that, I think). If you're more worried about access times (finding a file gets bloody slow with thousands of files in one directory), a standard trick is to radix sort into subdirectories. In this case, we could do that using the hash; i.e. use the first two or three hexadecimal characters of the hash as the name of a subdirectory in the base dir, the next two or three as the next subdirectory, etc. So if you had files with the following five hashes (I'll use the full hash as the filename for the example):

25A1078996BE4F57DD89ABD8692538A0FB64428D
25C69487E704607EC72D19D9E6E0552A47004F64
E4EABBA07718253835B74ADB8B276B2A45EC3F93
E4EBED96FB5CFF73922D15AA533032EB35A673E7
FCC4DF6660CB0E7C2ABFE439A7C423690B4CD7A6

you could create a tree like:

./25/A1/25A1078996BE4F57DD89ABD8692538A0FB64428D
./25/C6/25C69487E704607EC72D19D9E6E0552A47004F64
./E4/EA/E4EABBA07718253835B74ADB8B276B2A45EC3F93
./E4/EB/E4EBED96FB5CFF73922D15AA533032EB35A673E7
./FC/C4/FCC4DF6660CB0E7C2ABFE439A7C423690B4CD7A6

I suggest two or three because ext2, at least, can't handle more than 32767 subdirectories including ./ and ../, so four hex characters per level would potentially cause problems. If we can go with ZFS, though, such kludges might not be necessary (*knock wood*). We'd probably have to do some testing to see for sure, though.

So perhaps we should try to identify all of the issues we're worried about in the context of storage, and possible solutions?

1) Sheer number of bytes
   1a) background built-in compression by ZFS
   1b) automatic compression by openpacket.org on receipt of a trace (after summarization)
   1c) "offshoring" large traces via BitTorrent
   1d) background compression on an ext3 filesystem (or whatever FS FreeBSD prefers)
2) Number of files on the filesystem
   2a) ZFS (need to confirm that it handles large numbers of files gracefully)
   2b) FS tuning
   2c) some sort of automated archival of less-used files
   2d) files stored in the DB instead of on the fs
3) Number of files in a given directory
   3a) ZFS (need to confirm that directory seek time scales well for large directories)
   3b) radix sorting
   3c) pure DB handling of served files

Have I missed anything, either concerns or possible solutions? I think the major questions are whether ZFS will solve all the issues, whether it will solve them better than any of the other possible solutions, and whether it's worth either changing over to OpenSolaris or porting ZFS to FreeBSD ourselves (I suspect the latter will be a rather large job).

-Tim |
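Tim's two-level radix layout is simple to prototype. A sketch assuming SHA-1 digests (which match the 40-character hashes in his example); the helper names are illustrative, not part of any decided design:

```python
import hashlib
from pathlib import Path

def trace_path(base: Path, digest: str, fanout: int = 2) -> Path:
    """Map a hex digest to base/<first 2>/<next 2>/<digest>, per Tim's example."""
    d = digest.upper()
    return base / d[:fanout] / d[fanout:2 * fanout] / d

def store_trace(base: Path, data: bytes) -> Path:
    digest = hashlib.sha1(data).hexdigest()
    path = trace_path(base, digest)
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():  # duplicate uploads collapse onto one file
        path.write_bytes(data)
    return path
```

With a two-hex-character fanout each directory holds at most 256 subdirectories, comfortably inside the ext2 limit Tim mentions.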
|
From: Jacob H. <ha...@gm...> - 2006-08-03 19:08:45
|
Hi All,

On 8/3/06, Tim Furlong <fu...@cc...> wrote:
> I was wondering if you could clarify what you mean about the file system handling the compression. Did you have something specific in mind? I'm thinking that unless the fs is remote, any compression that it does would incur about the same number of CPU cycles as doing it inline. It might work better on a multiprocessor system, but it wouldn't be too hard to have openpacket.org handle compression in a separate process.

Indeed, I had ZFS in mind for a file system. It is extremely expandable, fast, provides data integrity, and has low-CPU-usage compression. You can read more about it here if interested: http://www.opensolaris.org/os/community/zfs/ . I think it really doesn't matter now, but if we grow to hundreds of gigs of data, it will definitely be something to think about.

Another option would be to gather metadata once a file is uploaded, gzip the file once, and always serve it compressed. The only problem with this is if we ever decide to reference captures inline (on the site, instead of having to download and open the capture in Wireshark). Say someone wants to describe a capture in detail: he could reference lines 10-29, describe them, then move to 30-45 (assuming we had a system like this in place).

I don't know what kind of systems we have here for use, revenue model (advertising, donations, etc.), or hosting issues. I assume Richard is working on this. If we need to save bandwidth and space, we could do so in the design.

> It would mainly have to store the torrent files, so volume of data wouldn't be as much of an issue as number of files. Honestly, we probably wouldn't have to store them as files; we could probably just store the contents of the torrent file in a DB and only dump it to file long enough to send it to a user, and maybe have a cache of frequently-requested torrents.

If we cache the most requested ones, it will be faster, but then we are back where we are now... How would we store the cached files? What if there are 1000s of popular files we cache?

Jake |
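Jake's gzip-once idea, sketched: pay the compression cost a single time at ingest, store only the compressed copy, and decompress on the fly only for clients that can't accept gzip. Function names here are illustrative:

```python
import gzip

def compress_on_upload(raw: bytes) -> bytes:
    """Compress once at ingest; only the gzipped copy is stored."""
    return gzip.compress(raw, compresslevel=6)

def serve(stored_gz: bytes, client_accepts_gzip: bool):
    """Return (extra_headers, body) for an HTTP response."""
    if client_accepts_gzip:
        return {"Content-Encoding": "gzip"}, stored_gz
    return {}, gzip.decompress(stored_gz)  # rare fallback path
```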
|
From: Richard B. <tao...@gm...> - 2006-08-03 18:29:07
|
On 8/3/06, Tim Furlong <fu...@cc...> wrote:
> One of the other ideas we've been kicking around a bit is having the system based around BitTorrent, in which case openpacket.org wouldn't have to directly store many traces, probably just the ones awaiting moderator approval. It would mainly have to store the torrent files, so volume of data wouldn't be as much of an issue as number of files. [...] The other thing is that BitTorrent uses hash values as an integral part of identifying a particular file, so integrity checking would be built in.

Hi all,

I think we should consider BitTorrent for distributing the entire trace collection, or perhaps large portions of it. OpenPacket.org should always be the "seed of last resort" for these files -- we shouldn't hope that others can seed it. I expect the vast majority of traces to be small, so we should serve those without requiring BitTorrent. I personally wouldn't want to launch a BT client every time I want a small exploit trace or what have you.

Thank you, Richard |
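Richard's policy reduces to a size check at download time. The cutoff below is a placeholder; the list never settled on a number:

```python
BT_THRESHOLD = 10 * 1024 * 1024  # placeholder cutoff, not a decided value

def download_method(size_bytes: int) -> str:
    """Small exploit traces go straight over HTTP; big captures go via BitTorrent."""
    return "direct" if size_bytes < BT_THRESHOLD else "torrent"
```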
|
From: Tim F. <fu...@cc...> - 2006-08-03 18:24:53
|
Hi Jake,

I was wondering if you could clarify what you mean about the file system handling the compression. Did you have something specific in mind? I'm thinking that unless the fs is remote, any compression that it does would incur about the same number of CPU cycles as doing it inline. It might work better on a multiprocessor system, but it wouldn't be too hard to have openpacket.org handle compression in a separate process.

One of the other ideas we've been kicking around a bit is having the system based around BitTorrent, in which case openpacket.org wouldn't have to directly store many traces, probably just the ones awaiting moderator approval. It would mainly have to store the torrent files, so volume of data wouldn't be as much of an issue as number of files. Honestly, we probably wouldn't have to store them as files; we could probably just store the contents of the torrent file in a DB and only dump it to file long enough to send it to a user, and maybe have a cache of frequently-requested torrents. Anyway, those are optimizations that can be done transparently when needed. The other thing is that BitTorrent uses hash values as an integral part of identifying a particular file, so integrity checking would be built in.

-Tim

On 8/3/06, Jacob Ham <ha...@gm...> wrote:
> Well, let's start out with some things that we would like to have in a storage structure:
> - easily scalable
> - fast access
> - compression?
> - Do we want an API to directly access the files?
> - Do we want the structure to be humanly accessible?
> [...]

-- Tim Furlong tim...@gm... |
|
From: Jacob H. <ha...@gm...> - 2006-08-03 14:30:55
|
Well, let's start out with some things that we would like to have in a storage structure:

- easily scalable
- fast access
- compression?
- Do we want an API to directly access the files?
- Do we want the structure to be humanly accessible?

The first two are easy. We can let the file system and hardware handle all that. We can develop Openpacket so it doesn't care whether we use NTFS or ZFS or whatever.

We could use the filesystem to compress the data, or we could compress the data ourselves (gzip, etc.). It would probably be best to have the file system handle the compression. If we compress the data ourselves, the CPU cost may be too great for the number of traces we will be storing. Remember, storage is cheaper than CPU time! :-)

We could easily have an API to access the files. No matter what framework we use, it could easily interface with the file system for retrieval.

Do we want the structure to be humanly accessible? I would say no. It would force everyone to use the API or have direct access to the DB.

The structuring could go as follows. Once a capture is uploaded, a hash/checksum is taken of the file and stored in the database. From there the capture is renamed to its checksum and stored on our servers. All metadata and information about the capture goes into the DB on upload. To find a capture, use the hash from the DB to access the file system and grab the file.

This ensures that:
1. capture data can be checked against its original checksum to determine if anything has changed or been damaged.
2. access can be fast.
3. if two identical captures are uploaded, they will reference the same file (unless we find some collision!! haha) and won't take up extra space.

This is just a quick sum of ideas that popped into my head.

Jake

On 8/3/06, Mark Mason <mas...@gm...> wrote:
> How will you organize the traces on the file system?
>
> If you're getting thousands of traces uploaded, will you need a file structure to organize the traces? |
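Jake's checksum-named store plus DB metadata might look like this, with SQLite standing in for whatever database the project ends up on (the table layout and names are assumptions for illustration only):

```python
import hashlib
import sqlite3
from pathlib import Path

db = sqlite3.connect("openpacket.db")
db.execute("""CREATE TABLE IF NOT EXISTS captures (
    sha1 TEXT PRIMARY KEY,   -- the file on disk is named after this digest
    description TEXT,
    uploaded_by TEXT)""")

def ingest(data: bytes, description: str, user: str, store: Path) -> str:
    digest = hashlib.sha1(data).hexdigest()
    (store / digest).write_bytes(data)  # identical uploads rewrite identical bytes
    # INSERT OR IGNORE: a duplicate capture just references the existing row/file
    db.execute("INSERT OR IGNORE INTO captures VALUES (?, ?, ?)",
               (digest, description, user))
    db.commit()
    return digest

def fetch(digest: str, store: Path) -> bytes:
    """Look the hash up in the DB, then grab the file straight off the fs."""
    return (store / digest).read_bytes()
```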
|
From: Mark M. <mas...@gm...> - 2006-08-03 13:51:01
|
How will you organize the traces on the file system? If you're getting thousands of traces uploaded, will you need a file structure to organize the traces? |
|
From: Tim F. <fu...@cc...> - 2006-08-03 04:06:58
|
---------- Forwarded message ----------
From: Tim Furlong <tim...@gm...>
Date: Aug 3, 2006 12:05 AM
Subject: Re: [Openpacket-devel] Pcap file checksum calculation
To: David Belle-Isle <ml...@im...>
Cc: ope...@li...

You may also want to take a look at libnetdude (http://netdude.sourceforge.net), particularly with the traceset and conntrack plugins, before you get too deep into writing your parser. There's also libqcap (http://www.sourceforge.net/projects/qcap), developed by one of my colleagues here at Carleton, which implements IP fragment handling and TCP tracking, and he's building a protocol parsing grammar on top of it (like ABNF, but able to handle non-ABNF things often found in protocols, like length fields). The main drawback is that it's only really set up to handle IP over Ethernet, though I'll be adding support for Cisco HDLC tomorrow or early next week, and it's easy to add any datalink layer with a constant header length; just change the offset of the IP header. :P

As for the question of flexibility, there'd undoubtedly be things these libraries do that you'll look at and say, "I wouldn't have done it that way", but they may be worth a look to see if they'd give you the power you need for your purposes without constraining you too badly.

-Tim

On 8/2/06, Tim Furlong <fu...@cc...> wrote:
> Which checksum are you looking at? I don't think PCAP files have checksums of their own, so I'm assuming you're talking about the IP header checksum and/or the TCP checksum.
> [...]

-- Tim Furlong tim...@gm... |
|
From: Tim F. <fu...@cc...> - 2006-08-03 03:46:48
|
Which checksum are you looking at? I don't think PCAP files have checksums of their own, so I'm assuming you're talking about the IP header checksum and/or the TCP checksum.

IP (RFC 791, section 3.1, http://rfc.net/rfc791.html):

    Header Checksum: 16 bits

    A checksum on the header only. Since some header fields change (e.g., time to live), this is recomputed and verified at each point that the internet header is processed.

    The checksum algorithm is:

    The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero.

    This is a simple to compute checksum and experimental evidence indicates it is adequate, but it is provisional and may be replaced by a CRC procedure, depending on further experience.

So to recap: you take the IP header, zero out the checksum field, start from a zeroed 16-bit register, add each 16-bit chunk of the header using one's-complement (end-around-carry) addition, and finally take the one's complement of the result.

TCP checksum (RFC 793, section 3.1, http://rfc.net/rfc793.html):

    The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16 bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.

    The checksum also covers a 96 bit pseudo header conceptually prefixed to the TCP header. This pseudo header contains the Source Address, the Destination Address, the Protocol, and TCP length. This gives the TCP protection against misrouted segments. This information is carried in the Internet Protocol and is transferred across the TCP/Network interface in the arguments or results of calls by the TCP on the IP.

        +--------+--------+--------+--------+
        |           Source Address          |
        +--------+--------+--------+--------+
        |         Destination Address       |
        +--------+--------+--------+--------+
        |  zero  |  PTCL  |    TCP Length   |
        +--------+--------+--------+--------+

    The TCP Length is the TCP header length plus the data length in octets (this is not an explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the pseudo header.

A bit more complex: you construct and prepend a 12-octet pseudo-header as shown in the diagram, using fields from the IP header, before you start, and you again zero out the checksum field before computing. You also pad the payload out to an even number of octets (so that you only have full 16-bit chunks). Note that the checksum covers the TCP payload as well, so the total number of octets checksummed should be "total length from IP header" - ("header length from IP header" x 4) + 12 for the pseudo-header. The x4 is because the header length in the IP header is given in 32-bit words, not octets. Once you've constructed that, you again iterate over each 16-bit chunk, accumulating the one's-complement sum, and complement the result. Alternately, I think you could leave the old checksum in place and just test that the recomputed checksum comes out to 0 (the one's-complement sum over a segment with a valid checksum works out to 0xFFFF).

Hope this helps,
-Tim

On 8/2/06, David Belle-Isle <ml...@im...> wrote:
> Hi everyone,
>
> I wonder if anyone could help me find how the checksum is calculated in a pcap file?
>
> Let me know if you have any idea,
>
> Thanks,
>
> David
>
> Also note that this Friday I'll open up the server for people to test the version of openpacket I've developed so far! I'll send another email to give the details.

-- Tim Furlong tim...@gm... |
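Putting the two RFC definitions into code: a sketch of the one's-complement sum and the TCP checksum with its 96-bit pseudo-header, following the diagram above. The caller is assumed to have zeroed the segment's checksum field (TCP header bytes 16-17) first; names are illustrative:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # RFC 793: pad an odd octet count on the right
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum a TCP segment (header + payload) with the 96-bit pseudo-header.
    src_ip/dst_ip are the 4-byte addresses pulled from the IP header."""
    # pseudo-header: source, destination, zero byte, PTCL (6 = TCP), TCP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF
```

To verify rather than generate, leave the stored checksum in place: the one's-complement sum over a valid segment plus pseudo-header comes to 0xFFFF, so `tcp_checksum` returns 0.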
|
From: David Belle-I. <ml...@im...> - 2006-08-02 22:26:06
|
Hi everyone, I wonder if anyone could help me find how the checksum is calculated in a pcap file? Let me know if you have any ideas. Thanks, David. Also note that this Friday I'll open up the server for people to test the version of openpacket I've developed so far! I'll send another email to give the details. |
|
From: Richard B. <tao...@gm...> - 2006-08-01 01:28:12
|
On 7/31/06, Jacob Ham <ha...@gm...> wrote:
> Along the lines of meta information, it might be useful to either
> build our web app to get statistics about the capture or use tcpdstat
> (1) and store that as meta information.
>
> Jake
>
Jake and all,
Great idea -- I suggest capinfos instead of Tcpdstat, since the
protocol breakdown from Tcpdstat isn't very reliable. Capinfos gets
to the heart of what you probably like about Tcpdstat anyway.
Capinfos is packaged with Ethereal/Wireshark.
$ capinfos res_inn.lpc
File name: res_inn.lpc
File type: libpcap (tcpdump, Ethereal, etc.)
Number of packets: 28079
File size: 16661237 bytes
Data size: 16211949 bytes
Capture duration: 2867.890522 seconds
Start time: Thu Mar 30 21:57:25 2006
End time: Thu Mar 30 22:45:13 2006
Data rate: 5652.92 bytes/s
Data rate: 45223.34 bits/s
Average packet size: 577.37 bytes
Speaking of summarization, would statistics like this be of any use?
$ tethereal -nq -z io,phs -r res_inn.lpc
===================================================================
Protocol Hierarchy Statistics
Filter: frame
frame                                  frames:28079 bytes:16211949
  eth                                  frames:28079 bytes:16211949
    ip                                 frames:27463 bytes:16179237
      tcp                              frames:22417 bytes:15777169
        http                           frames:1558 bytes:1116140
          image-gif                    frames:81 bytes:56366
          data-text-lines              frames:330 bytes:322423
          image-jfif                   frames:11 bytes:8028
          malformed                    frames:9 bytes:6831
          http                         frames:1 bytes:321
          tcp.segments                 frames:283 bytes:202533
            http                       frames:250 bytes:170943
              data-text-lines          frames:82 bytes:68786
              image-gif                frames:68 bytes:47104
              media                    frames:2 bytes:2811
              image-jfif               frames:21 bytes:19560
              ssl                      frames:33 bytes:31590
        ssl                            frames:426 bytes:252570
          malformed                    frames:19 bytes:11155
        pop                            frames:7915 bytes:11446426
        data                           frames:2 bytes:1755
        smtp                           frames:53 bytes:20255
      icmp                             frames:3984 bytes:221437
      udp                              frames:1053 bytes:180145
        dns                            frames:390 bytes:36121
        snmp                           frames:23 bytes:2737
        data                           frames:46 bytes:2284
        bootp                          frames:109 bytes:37308
        nbdgm                          frames:297 bytes:69841
          smb                          frames:297 bytes:69841
            mailslot                   frames:297 bytes:69841
              browser                  frames:297 bytes:69841
        syslog                         frames:35 bytes:8234
        sip                            frames:20 bytes:9380
        nbns                           frames:124 bytes:12920
        http                           frames:6 bytes:1050
        ntp                            frames:3 bytes:270
      igmp                             frames:9 bytes:486
    arp                                frames:616 bytes:32712
===================================================================
This is less detailed than what the Ethereal GUI provides, but still a
powerful breakdown by protocol.
Richard
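If the site ends up shelling out to capinfos on upload, output like the sample above parses into key/value metadata in a few lines. A sketch, assuming capinfos is on the PATH (note the second "Data rate" line overwrites the first in this naive version):

```python
import subprocess

def capinfos_metadata(path: str) -> dict:
    """Run capinfos on a capture and split its 'Key: value' lines into a dict."""
    out = subprocess.run(["capinfos", path], check=True,
                         capture_output=True, text=True).stdout
    meta = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            meta[key.strip()] = value.strip()
    return meta
```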
|
|
From: Jacob H. <ha...@gm...> - 2006-07-31 16:06:09
|
Along the lines of meta information, it might be useful either to build our web app to get statistics about the capture or to use tcpdstat (1) and store that as meta information.

Jake

1: http://staff.washington.edu/dittrich/talks/core02/tools/tools.html

On 7/30/06, Tim Furlong <fu...@cc...> wrote:
> Good point - I'd suggest the submitter's userid, a timestamp either of the start of the capture or of the submission, and a short description/title submitted by the user (probably a normalized version of a short free-text title). For the timestamp, the start of capture would be better, but that might be information that the user wants to be anonymized, in which case the submission timestamp might be more appropriate.
> [...] |
|
From: Tim F. <fu...@cc...> - 2006-07-30 15:49:11
|
Good point - I'd suggest the submitter's userid, a timestamp either of the start of the capture or of the submission, and a short description/title submitted by the user (probably a normalized version of a short free-text title). For the timestamp, the start of capture would be better, but that might be information that the user wants to be anonymized, in which case the submission timestamp might be more appropriate.

I don't know about putting too much more information in the filename; if we want it to carry along a lot of meta-info, perhaps we should think about including an associated text .readme or xml .meta_info file (then all of the d/l files would have to be archives), or including fake packets in the trace to carry meta-info (not exactly an elegant solution). I'm just thinking that with all of the meta-info we could possibly include, the filename could start getting a bit silly after a few revs. It might be better to just allow users to look up all the meta-info on the website via filename or md5sum or something, though I'd agree it would be nice to have all the necessary info without having to go back to the website, for various reasons (e.g. lack of connectivity of either the user or the site).

-Tim

On 7/30/06, Mark Mason <mas...@gm...> wrote:
> Hello -
>
> I'm still working on a Rails version of the site. No problem if you end up using David's version.
>
> Are you writing any information in the uploaded caps' filenames? Source, application, os, date, version information?
>
> If I download caps from the site, how do I keep them organized?
>
> Take it easy.
>
> Mark

-- Tim Furlong tim...@gm... |
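Tim's userid + timestamp + normalized-title scheme could be assembled like this; the exact format below is illustrative, not a decided convention:

```python
import re
from datetime import datetime, timezone

def trace_filename(userid: str, title: str, submitted: datetime) -> str:
    """e.g. 'bob_20060730T154900Z_odd-dns-traffic.pcap' (hypothetical example)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")[:40]
    stamp = submitted.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{userid}_{stamp}_{slug}.pcap"
```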
|
From: Mark M. <mas...@gm...> - 2006-07-30 15:16:21
|
Hello - I'm still working on a Rails version of the site. No problem if you end up using David's version. Are you writing any information in the uploaded caps filenames? Source, application, os, date, version information? If I download caps from the site, how do I keep them organized? Take it easy. Mark |
|
From: Tim F. <fu...@cc...> - 2006-07-29 04:30:56
|
Hi David,

Ok great, that should help us get a feel for what you've already got in the works. :-)

Thanks,
-Tim

On 7/29/06, David A. Belle-Isle <ml...@im...> wrote:
> Hi,
>
> I also have a DB schema. I'll convert it to PDF and send it as soon as I can.
>
> Thanks,
>
> David

-- Tim Furlong tim...@gm... |
|
From: David A. Belle-I. <ml...@im...> - 2006-07-29 04:19:58
|
Hi,

I also have a DB schema. I'll convert it to PDF and send it as soon as I can.

Thanks,

David

Richard Bejtlich wrote:
> On 7/28/06, Tim Furlong <fu...@cc...> wrote:
>> Hi folks,
>>
>> I realize that this may have been overtaken by events, but thought I'd toss these out for discussion anyway.
>
> Hi Tim,
>
> We are definitely still in the early days. David's work is excellent, but I am sure he would also like the best possible schema as we go forward.
>
> Thank you,
>
> Richard |
|
From: Richard B. <tao...@gm...> - 2006-07-29 02:26:21
|
On 7/28/06, Tim Furlong <fu...@cc...> wrote:
> Hi folks,
>
> I realize that this may have been overtaken by events, but thought I'd toss these out for discussion anyway.

Hi Tim,

We are definitely still in the early days. David's work is excellent, but I am sure he would also like the best possible schema as we go forward.

Thank you,

Richard |
|
From: Tim F. <fu...@cc...> - 2006-07-29 00:53:18
|
- new user registration
  - user clicks 'create account'
  - website obtains user information
  - website enters user information in DB (confirmed='N') and sends confirmation e-mail to address provided
  - user clicks on link in e-mail / replies to e-mail
  - website/procmail-invoked script confirms identity and sets confirmed='Y'

- (single) trace submission
  - user clicks 'submit trace'
  - website presents form for trace meta-info, tags, and upload link
  - user enters meta-info and prepares upload
  - website re-presents meta-info (and possibly trace hash) and issues challenge to user to confirm authorization
  - user confirms authorization (else bail)
  - trace is stored and made accessible to moderators (approved='N')
  - possibly some automated process is run to analyze and summarize the trace
    - e.g. identify file format, run capinfos, run tcptrace/argus and summarize the output, run p0f to profile the dominant OSes, check for globally addressable IPs, etc.

- moderator reviews an unapproved trace
  - website allows a moderator user to view list of traces pending approval (approved='N') and to access their meta-info and reviews (may or may not want to allow viewing of votes cast)
  - moderator selects and downloads a trace
  - moderator reviews the trace
  - website allows moderator to either review (submit review to trace_reviews plus vote to trace_votes) or simply vote on (just trace_vote) the trace
    - could suggest tags to be added/removed in reviews

- executive user approves a trace
  - website allows an executive user to view list of traces awaiting approval (those with approved='N', with more positive votes than a given threshold, and with a simple majority of positive votes, compared to total votes for the trace)
    - also allows them to access any trace's meta-info, reviews, and votes
  - executive makes the decision to approve a trace, and does so (approved='Y', approved_by=user)
  - executive can add or remove tags as appropriate
  - website credits user who submitted the trace

- user searches for traces using a <foo: tag|meta-info value|description keyword>
  - user selects a <foo>
  - website presents traces with that <foo>, sorted by karma

- user reviews/analyzes a trace (also applies to moderator reviews of traces pending approval)
  - website allows user to submit:
    - a rating for the trace (perhaps between -3 and 3, or -5 and 5, or 0 to 10, etc.)
      - based on the utility of the trace overall
    - a text review/analysis
    - a list of tools used in the analysis
    - tags? either for the review or for the trace
  - review is submitted and the trace's karma is updated
    - e.g. modified by ( rating * user.karma ) / max_rating
    - karma will probably have to be recalculated periodically as users' karma changes
  - user who submitted the trace has their karma updated
    - less dramatically than the trace, probably also modified by how recently the trace was submitted and possibly by how many traces they have submitted

- user 'mods' a trace/review
  - *punt* |
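The karma arithmetic in Tim's outline is nearly code already; a direct transcription, assuming the -5 to 5 rating scale he floats as one option:

```python
MAX_RATING = 5  # assumes the -5..5 scale from the outline

def trace_karma_delta(rating: int, reviewer_karma: float) -> float:
    """Weight a review's effect on a trace by the reviewer's own karma,
    per the outline's ( rating * user.karma ) / max_rating formula."""
    return (rating * reviewer_karma) / MAX_RATING
```

As the outline notes, deltas computed this way go stale as reviewers' karma changes, so trace karma would need periodic recalculation from the stored votes.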
|
From: David A. Belle-I. <ml...@im...> - 2006-07-28 20:47:48
|
Hi everyone,

Here are some details about what has been developed so far:

- A new user can sign up.
- A logged-in user can upload a pcap file.
- Each uploaded file needs to be approved by a moderator.
- Once a file is approved, everyone can see it.
- Logged-in users can post comments/analysis on a particular file.
- Everyone can download the pcap file using the link on the view-file page.
- RSS feeds are implemented for uploaded files, posted comments, and posted comments on a particular file. These are three different feeds.
- Uploaded pcap files are parsed to display the session information of the first 5 packets when viewing a file. (*) See note below.

(*) I wrote this parser for this project. This makes it a lot easier for us to work with if we need to change anything about what we parse or how we parse it.

An interesting idea would be to allow the user to select an option to change the IP addresses in the file he uploads. This way, the system could generate random IP addresses and replace them in the file.

We are currently working on setting up a test server to have a better idea of how it all works together.

Thanks,

David |
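David's parser itself isn't posted here, but the classic libpcap layout he's parsing is simple: a 24-byte global header followed by a 16-byte header per packet record. A minimal reader sketch (classic pcap only; the nanosecond-timestamp variant and pcapng are ignored):

```python
import struct

def read_pcap_headers(path: str, limit: int = 5):
    """Yield (timestamp, captured_len, original_len) for the first few packets."""
    with open(path, "rb") as f:
        magic = f.read(4)
        # magic 0xa1b2c3d4 appears byte-swapped on little-endian writers
        endian = "<" if magic == b"\xd4\xc3\xb2\xa1" else ">"
        f.read(20)  # skip the rest of the 24-byte global header
        for _ in range(limit):
            rec = f.read(16)
            if len(rec) < 16:
                break
            ts_sec, ts_usec, incl_len, orig_len = struct.unpack(endian + "IIII", rec)
            yield ts_sec + ts_usec / 1e6, incl_len, orig_len
            f.seek(incl_len, 1)  # skip over the packet bytes themselves
```

Usage: `list(read_pcap_headers("capture.pcap"))` gives the first five record summaries, roughly what the view-file page displays.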
|
From: Richard B. <tao...@gm...> - 2006-07-28 20:31:22
|
On 7/28/06, Jacob Ham <ha...@gm...> wrote:
> Hello. I just joined the list, and I'm interested in helping with the development of OpenPacket.org. I have to admit I was pretty excited about such a site once I found Richard's blog. :-)
>
> My recommendation would be Django (http://www.djangoproject.com). It has many things built in, and we wouldn't have to spend the time re-creating them.
> [...]

Hi Jake,

David is building a prototype in Django. I expect he will share some details soon, so I will wait before saying more.

Thank you,

Richard |
|
From: Bamm V. <bam...@gm...> - 2006-07-28 19:56:27
|
Another option may be to have an anonymous upload (write, no read) FTP server, similar to what SourceForge supplies for file releases. One could upload all their files to that server, then tag/add them via a web interface. The server could be set up to automatically delete any files that have been "moved" in x hours.

Bammkkkk

On 7/28/06, Tim Furlong <fu...@cc...> wrote:
> If it's going to be a regular thing, it might be worth building an alternate, non-web-based backchannel, especially if the meta-info can be automatically generated. If openpacket.org is run off a database, then it wouldn't be too hard to set up a small application with a simple protocol to listen on a port, accept submissions from a remote cron job or script, and populate the database appropriately.
> [...]

-- sguil - The Analyst Console for NSM http://sguil.sf.net |
|
From: Jacob H. <ha...@gm...> - 2006-07-28 19:47:37
|
Hello. I just joined the list, and I'm interested in helping with the development of OpenPacket.org. I have to admit I was pretty excited about such a site once I found Richard's blog. :-)

I read the previous post that Mark Mason made about creating the site on the Ruby on Rails framework. Although I really like the framework, I don't think it fits the site requirements.

My recommendation would be Django (http://www.djangoproject.com). It has many things built in, so we wouldn't have to spend time re-creating them. This includes comments (very extensible), administration, user management, and caching. I am not going to give you a rundown of everything it can do; the web site does a very good job of that.

Because Django has very good quality web applications built in, it already gives us a great head start on the site. I have been working on a prototype application, and if I have time, I might have something to show this weekend.

I will let you know when I have more done.

Jake |
|
From: Tim F. <fu...@cc...> - 2006-07-28 19:41:11
|
If it's going to be a regular thing, it might be worth building an alternate, non-web-based backchannel, especially if the meta-info can be automatically generated. If openpacket.org is run off a database, then it wouldn't be too hard to set up a small application with a simple protocol to listen on a port, accept submissions from a remote cron job or script, and populate the database appropriately.

On 7/28/06, Richard Bejtlich <tao...@gm...> wrote:
> On 7/28/06, David A. Belle-Isle <ml...@im...> wrote:
> > Hi Jacob,
> >
> > Thanks for the input.
> >
> > I agree with you if we needed to upload a couple of files at the same time, but we are talking about thousands of files here. I doubt the user would be interested in clicking "Browse..." a thousand times! :)
>
> David is right -- I have a lead on a source that might supply somewhere around 1500 traces per month.
>
> Perhaps once David or I can make David's demo accessible to others, you will see the problem we are trying to solve.
>
> Sincerely,
>
> Richard

-- Tim Furlong tim...@gm... |
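A backchannel like the one Tim describes could be as small as a standard-library HTTP endpoint that a remote cron job POSTs traces to. A sketch only: authentication and the database/store wiring are deliberately omitted, and the port is arbitrary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SubmissionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        trace = self.rfile.read(length)
        # here one would checksum the bytes, write them to the store,
        # and insert the metadata row into the database
        self.send_response(202)
        self.end_headers()
        self.wfile.write(b"accepted\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8001), SubmissionHandler).serve_forever()
```

A submitting script would then be a one-liner along the lines of `curl --data-binary @trace.pcap http://openpacket.example:8001/`, run from cron.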