Re: [Nfsen-discuss] Can I aggregate all files captured on one day into one?
From: Adrian P. <adr...@ro...> - 2006-08-03 05:44:11
Thank you kindly for your reply. Now it's much clearer. I thought I could get away with some tweaks here and there... After reading your mail, I don't think I'll try to change things, but I'll test a thing or two (because "unexpected results and trouble" is my middle name) :)

Thank you again for your patience,
Adrian Popa

Peter Haag wrote:
> In general: it is not recommended to manipulate any files of a living
> NfSen environment. If you do not know exactly how it works, things may
> break. But it's free software, so do whatever you like.
>
> -------- Original Message --------
> From: Adrian Popa <adr...@ro...>
> To: ha...@sw...
> Subject: Re: [Nfsen-discuss] Can I aggregate all files captured on one day into one?
> Date: Wed Aug 02 2006 13:51:30 GMT+0200 (CEST)
>
>> Are you saying that I can delete the files and the graphs will still
>> show up? This is what I wanted...
>
> Yes.
>
>> By the way, where is this data stored? (the data used to build the
>> graphs?)
>
> Graph data is stored in the RRD data files located in $PROFILESTATDIR.
>
>> But what happens when I try to run a TOP 10 on a time period where I
>> have no files? Probably I'll get no output.
>
> Any flow processing will obviously fail for the timeslices you delete.
>
>> What would happen when I try to run a TOP 10 on a time period where I
>> have only one file (instead of 100)? Will data be read from the
>> existing file (assuming that the path and filename are recognizable
>> by nfsen)?
>
> As of current nfdump versions, the first and last file in a file
> sequence must exist. Files in between are skipped if they do not exist.
>
>> Here's an example:
>>
>> file1:   nfcapd.200608010850
>> file2:   nfcapd.200608010855
>> ...
>> file100: nfcapd.200608011715
>>
>> file1 to file100 have been captured by nfcapd.
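[Editor's note: Peter's point about file sequences (the first and last file must exist, while gaps in between are skipped) means a query over the 08:50 - 17:15 window can still run against an incomplete sequence. A minimal sketch, assuming a hypothetical profile directory; the nfdump invocation itself is left commented out since it needs a live data set:]

```shell
#!/bin/sh
# Build the nfdump file-range argument for the example window above.
# The directory is an ASSUMED example path; substitute your own NfSen
# profile directory.
dir=/data/nfsen/profiles/live/router1
first=nfcapd.200608010850
last=nfcapd.200608011715

# nfdump accepts a first:last sequence with -R; only these two files
# must exist, and files missing in between are skipped.
range="$dir/$first:$last"
echo "$range"

# A TOP 10 (flow records ordered by bytes) over that window would be:
# nfdump -R "$range" -s record/bytes -n 10
```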
>> After a while I run a script that takes these 100 files and generates
>> an aggregated file called (let's say) nfcapd.200608010850 (overwriting
>> the first file). I then delete the rest of the files.
>
> What for? file1 .. file100 will take approximately the same amount of
> disk space, so you will not gain any disk space. Think of
> 'cat file1 ... file100 > file'; it's similar. As of now, aggregated
> flows cannot be written into binary files.
>
>> What I want is this: when I use the web interface, I want to see the
>> graphs for the period 08:50 - 17:15, and if I run any nfdump commands
>> on this period (selecting 08:50 as the starting time), I want to get
>> some statistics from the 08:50 aggregated file.
>
> It does not really bring you anything, unless you are really asking
> for unexpected results and trouble. I just wrote all this to explain
> how things work.
>
> - Peter
>
>> Can this work?
>>
>> Thank you for your time,
>> Adrian Popa
>>
>> Peter Haag wrote:
>>> Hi Adrian,
>>>
>>> I don't know if I understood your question correctly. Anyway, some
>>> thoughts:
>>>
>>> If you create a file from several input files for archiving
>>> purposes, and store that file outside of the NfSen data space, you
>>> can use any filename of your choice.
>>>
>>> If you change files not yet expired within the NfSen data space
>>> (with a cron job or otherwise), the naming must conform to the way
>>> nfcapd produces the files; otherwise you can no longer access the
>>> files for processing. In theory there may be holes (missing files)
>>> between files; however, changing files is in general a bad idea.
>>> Your changes will not be reflected in the graphs, as the data for
>>> the graphs is evaluated only once, right after the file is created.
>>>
>>> Does that help?
>>>
>>> - Peter
>>>
>>> -------- Original Message --------
>>> From: Adrian Popa <adr...@ro...>
>>> To: nfs...@li...
>>> Subject: [Nfsen-discuss] Can I aggregate all files captured on one
>>> day into one?
>>> Date: Wed Aug 02 2006 10:13:18 GMT+0200 (CEST)
>>>
>>>> Hello,
>>>>
>>>> Because of limited space on the server I'm using for nfdump and
>>>> nfsen, I can only keep about 33 hours of flow data. I would like to
>>>> create a script that runs automatically once a day (using a cron
>>>> job) to aggregate data from the files in a specified period of
>>>> time. This can be done with a "find" that searches for a certain
>>>> pattern, or for files created between time1 and time2.
>>>>
>>>> I know nfdump can create binary files containing the flows that
>>>> result from applying a filter, and I have the syntax for that, but
>>>> my question is this:
>>>>
>>>> If I aggregate 100 files that hold 5 minutes of flows each into a
>>>> single file that holds the information of the 100 most
>>>> bandwidth-consuming flows in the specified time period, what should
>>>> this file be called? I want to delete the first 100 files. I want
>>>> the aggregated file to be used by nfsen to display usage graphs for
>>>> that period of time, although I don't expect to get much
>>>> information out of it if I run searches on that time period.
>>>>
>>>> Will nfsen complain if it has to process a file that has flow
>>>> information for more than 5 minutes of flows? Or will it work
>>>> without special intervention?
>>>>
>>>> Thank you for your help,
>>>> Adrian Popa
>>>>
>>>> _______________________________________________
>>>> Nfsen-discuss mailing list
>>>> Nfs...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/nfsen-discuss
>>
>> --
>> _______ SWITCH - The Swiss Education and Research Network ______
>> Peter Haag, Security Engineer, Member of SWITCH CERT
>> PGP fingerprint: D9 31 D5 83 03 95 68 BA FB 84 CA 94 AB FC 5D D7
>> SWITCH, Limmatquai 138, CH-8001 Zurich, Switzerland
>> E-mail: pet...@sw... Web: http://www.switch.ch/security
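[Editor's note: the cron-driven selection Adrian describes in the original question can be sketched in plain shell. Because nfcapd file names embed a YYYYMMDDhhmm timestamp, picking the files between time1 and time2 is a simple comparison on the name; no "find -newer" is needed. The function name and all paths below are hypothetical, and, per Peter's advice, the resulting archive should live outside the NfSen data space rather than overwrite files in place:]

```shell
#!/bin/sh
# select_window DIR T1 T2
# Print every nfcapd file in DIR whose embedded timestamp (YYYYMMDDhhmm)
# lies in [T1, T2]. The helper name is made up for this sketch.
select_window() {
    dir=$1; t1=$2; t2=$3
    for f in "$dir"/nfcapd.*; do
        [ -e "$f" ] || continue   # guard against an empty glob
        ts=${f##*.}               # timestamp part of the file name
        [ "$ts" -ge "$t1" ] && [ "$ts" -le "$t2" ] && echo "$f"
    done
}

# Example (hypothetical paths): archive the 08:50 - 17:15 window
# outside the NfSen data space instead of rewriting files in place:
#   select_window /data/nfsen/profiles/live/router1 \
#           200608010850 200608011715 \
#       | xargs tar -czf /archive/flows-20060801.tar.gz
```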