You can subscribe to this list here.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | 2 | 15 | 1 | 11 | 9 | 22 | 23 | 21 | 21 | 7 | 13 | 58 |
| 2001 | 20 | 33 | 24 | 27 | 48 | 12 | 35 | 37 | 41 | 37 | 29 | 4 |
| 2002 | 35 | 17 | 33 | 65 | 53 | 43 | 38 | 37 | 11 | 25 | 26 | 38 |
| 2003 | 44 | 58 | 16 | 15 | 11 | 5 | 70 | 3 | 25 | 8 | 16 | 15 |
| 2004 | 16 | 27 | 21 | 23 | 14 | 16 | 5 | 5 | 7 | 17 | 15 | 44 |
| 2005 | 37 | 3 | 7 | 13 | 14 | 23 | 7 | 7 | 12 | 11 | 11 | 9 |
| 2006 | 17 | 8 | 6 | 14 | 18 | 16 | 6 | 1 | 5 | 12 | 1 | 1 |
| 2007 | 3 | 6 | 6 |  |  | 7 | 8 | 5 | 4 |  | 8 | 14 |
| 2008 | 31 | 3 | 9 |  | 15 | 9 |  | 13 | 10 |  |  |  |
| 2009 | 11 |  |  |  |  | 9 | 23 |  |  |  |  |  |
| 2010 |  |  |  |  |  | 3 |  |  |  |  | 1 | 10 |
| 2011 | 2 |  |  |  |  |  |  | 3 |  |  |  |  |
| 2012 | 5 | 3 | 2 |  |  | 1 | 1 |  |  |  | 1 |  |
| 2013 |  |  | 2 |  |  |  | 5 | 1 |  |  |  |  |
| 2014 |  |  |  |  |  |  | 2 |  |  |  |  |  |
| 2015 |  |  | 1 |  |  |  | 1 |  |  |  |  |  |
| 2016 |  |  |  | 1 | 1 |  |  |  |  |  |  |  |
| 2020 |  |  |  |  |  | 1 |  |  |  |  |  |  |
| 2022 |  |  |  |  |  |  |  |  |  |  |  | 2 |
From: Vincenzo V. <vin...@gm...> - 2006-10-11 14:42:55
|
hi all, sorry for I'm late. I tried to install the 0.4b41 but I've received these errors in the make output, tx in advance. ./configure checking whether make sets $(MAKE)... yes checking whether ln -s works... yes checking for cp... /bin/cp checking for mv... /bin/mv checking for rm... /bin/rm checking for ar... ar checking for ranlib... ranlib checking for patch... patch checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for sys/types.h... (cached) yes Not including debugging code by default Linking dump and restore dynamically by default Linking libz and libbz2 dynamically by default Compiling rmt by default Not compiling ermt by default Not compiling kerberos extensions by default Including readline support by default Using new style F script by default Enabling Large File System support by default Enabling Quick File Access support by default Not including Quick File Access debugging code by default Not including Mac OSX restore compatibility code by default CC defaults to gcc LD defaults to gcc BINOWNER defaults to root BINGRP defaults to tty BINMODE defaults to 0755 MANOWNER defaults to man MANGRP defaults to tty MANMODE defaults to 0644 DUMPDATESPATH defaults to ${prefix}/etc/dumpdates checking for ext2fs/ext2fs.h... yes checking for ext2fs_open in -lext2fs... yes checking for ext2fs_read_inode_full in -lext2fs... no checking for ext2fs/ext2_fs.h... yes checking for ext2_ino_t type in libext2fs headers... yes checking for s_journal_inum field in ext2_super_block struct... yes checking for blkid/blkid.h... no checking for blkid_get_devname in -lblkid... no checking for tgetent in -lncurses... yes checking for tgetent in -ltermcap... yes checking for readline/readline.h... yes checking for readline in -lreadline... yes checking for rl_completion_matches in -lreadline... yes checking for rl_completion_append_character in -lreadline... yes checking for zlib.h... yes checking for compress2 in -lz... yes checking for bzlib.h... yes checking for BZ2_bzBuffToBuffCompress in -lbz2... yes checking for err... yes checking for errx... yes checking for verr... yes checking for verrx... yes checking for vwarn... yes checking for vwarnx... yes checking for warn... yes checking for warnx... yes checking for realpath... yes checking for lchown... yes checking for glob... yes checking for extended glob routines... yes checking for quad_t... yes checking for u_quad_t... 
yes configure: creating ./config.status config.status: creating MCONFIG config.status: creating Makefile config.status: creating common/Makefile config.status: creating compat/include/Makefile config.status: creating compat/lib/Makefile config.status: creating dump/Makefile config.status: creating restore/Makefile config.status: creating rmt/Makefile config.status: creating config.h config.status: config.h is unchanged [root@xyz dump-0.4b41]# make for i in compat/lib compat/include common dump restore rmt; do \ (cd $i && make all) || exit 1; \ done make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/lib' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/lib' make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/compat/include' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/compat/include' make[1]: Entering directory `/home/vincenzo/dump/dump-0.4b41/common' gcc -c -D_BSD_SOURCE -D_USE_BSD_SIGNAL -g -O2 -pipe -I.. -I../compat/include -I../dump -DRDUMP -DRRESTORE -DLINUX_FORK_BUG -DHAVE_LZO -D_PATH_DUMPDATES=\"/us r/local/etc/dumpdates\" -D_DUMP_VERSION=\"0.4b41\" dumprmt.c -o dumprmt.o In file included from ../compat/include/bsdcompat.h:14, from dumprmt.c:56: /usr/include/ext2fs/ext2fs.h:700: parse error before `FILE' /usr/include/ext2fs/ext2fs.h:757: parse error before `FILE' /usr/include/ext2fs/ext2fs.h:763: parse error before `)' /usr/include/ext2fs/ext2fs.h:764: parse error before `FILE' /usr/include/ext2fs/ext2fs.h:767: parse error before `)' /usr/include/ext2fs/ext2fs.h:797: parse error before `FILE' make[1]: *** [dumprmt.o] Error 1 make[1]: Leaving directory `/home/vincenzo/dump/dump-0.4b41/common' make: *** [all] Error 1 |
From: Stelian P. <st...@po...> - 2006-10-05 15:13:06
|
[It seems that Sourceforge's mailing lists just woke up...]

On Thursday, 5 October 2006 at 16:02 +0200, Vincenzo Versi wrote:
> Hi all,
> I installed the 0.4b22 version of dump and ran the command:
> >nohup dump -0Mf dumpL0.etc -z2 /space &
[...]
> DUMP: write error 2591430 blocks into volume 1: File too large

"File too large" means that your kernel does not support files bigger than 2 GB. If you look at the file you'll find out it is just a little smaller than 2 GB. The solution is to use -B to limit the output to files smaller than 2 GB, for example:

dump -0 -M -B 2000000 -f dumpL0.etc -z2 /space

Stelian.
--
Stelian Pop <st...@po...>
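In practice, the multi-volume workaround Stelian describes looks roughly like this (a sketch only; the prefix and sizes are illustrative, and the exact volume names depend on the prefix given to -f):

    # -B caps each volume at about 2 000 000 one-kilobyte records, i.e. just under 2 GiB,
    # and -M makes dump write numbered volumes (<prefix>001, <prefix>002, ...)
    nohup dump -0 -M -B 2000000 -f dumpL0.etc -z2 /space &

    # restore reads the volumes back in sequence using the same prefix
    restore -i -M -f dumpL0.etc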
From: Vincenzo V. <vin...@gm...> - 2006-10-05 14:02:35
|
Hi all, I installed the 0.4b22 version of dump and ran the command:

>nohup dump -0Mf dumpL0.etc -z2 /space &

and the output was:

DUMP: Date of this level 0 dump: Thu Oct 5 15:00:56 2006
DUMP: Dumping /dev/hda3 (/space) to dumpL0.20061005001
DUMP: Added inode 8 to exclude list (journal inode)
DUMP: Added inode 7 to exclude list (resize inode)
DUMP: Label: /space
DUMP: Compressing output at compression level 2 (zlib)
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 6406541 tape blocks.
DUMP: Dumping volume 1 on dumpL0.20061005001
DUMP: Volume 1 started with block 1 at: Thu Oct 5 15:01:30 2006
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: 8.29% done at 1769 kB/s, finished in 0:55
DUMP: 15.86% done at 1693 kB/s, finished in 0:53
DUMP: 23.81% done at 1694 kB/s, finished in 0:47
DUMP: 31.52% done at 1682 kB/s, finished in 0:43
DUMP: 38.96% done at 1663 kB/s, finished in 0:39
DUMP: write error 2591430 blocks into volume 1: File too large
DUMP: fopen on /dev/tty fails: No such device or address
DUMP: The ENTIRE dump is aborted.

Maybe it's another ancient version, or maybe I installed it wrong. Anyway, thanks in advance.
From: Vincenzo V. <vin...@gm...> - 2006-10-04 03:04:31
|
Ok Stelian thanks, here is the message: I can't send messages to the dump-user mailing list and I do not know why, anyway I hope you'll receive this message with all the information you aske= d me. Here they are: my kernel version is :linux-2.4.7-10 redhat and the dump version is 0.4b22(using libext2fs 1.23) the output of the of the command: > dump -0Mu -f <filename> -z2 /dev/hdb1 DUMP: Date of this level 0 dump: Tue Sep 26 09:20:06 2006 DUMP: Dumping /dev/hdb1 (/mnt/Zeta) to /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 DUMP: Label: none DUMP: Compressing output at compression level 2 DUMP: mapping (Pass I) [regular files] DUMP: mapping (Pass II) [directories] DUMP: estimated 35933398 tape blocks. DUMP: Dumping volume 1 on /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 DUMP: Volume 1 started with block 1 at: Tue Sep 26 09:22:31 2006 DUMP: dumping (Pass III) [directories] DUMP: dumping (Pass IV) [regular files] DUMP: 1.56% done at 1873 kB/s, finished in 5:14 DUMP: 3.08% done at 1841 kB/s, finished in 5:15 DUMP: 4.58% done at 1827 kB/s, finished in 5:12 DUMP: 6.08% done at 1820 kB/s, finished in 5:09 DUMP: 7.59% done at 1818 kB/s, finished in 5:04 DUMP: 9.15% done at 1826 kB/s, finished in 4:57 DUMP: 10.77% done at 1842 kB/s, finished in 4:50 DUMP: 12.28% done at 1838 kB/s, finished in 4:45 DUMP: 13.85% done at 1843 kB/s, finished in 4:39 DUMP: 15.37% done at 1841 kB/s, finished in 4:35 DUMP: 16.88% done at 1838 kB/s, finished in 4:30 DUMP: 18.40% done at 1836 kB/s, finished in 4:26 DUMP: 20.00% done at 1842 kB/s, finished in 4:20 DUMP: 21.56% done at 1844 kB/s, finished in 4:14 DUMP: 23.14% done at 1848 kB/s, finished in 4:09 DUMP: 24.68% done at 1847 kB/s, finished in 4:04 DUMP: 26.19% done at 1844 kB/s, finished in 3:59 DUMP: 27.77% done at 1847 kB/s, finished in 3:54 DUMP: 29.27% done at 1845 kB/s, finished in 3:49 DUMP: 30.80% done at 1844 kB/s, finished in 3:44 DUMP: 32.24% done at 1838 kB/s, finished in 3:40 it gives me not any error message or anything else, it just stops and, afte= r exiting with Ctrl+C, no process are viewed with >top but if I use somekind of >fuser it displays me some process regarding dump and the <filename>. The fact is that I've tried with both the devices /dev/hdb and /dev/hdb1 (filesystem to dump) and sometimes it arrives to 70% and it gives is always the same thing: it stops. thank you in advance Vincent 2006/10/3, Stelian Pop <st...@po...>: > > > Vincenzo Versi a =E9crit : > > > Hi everyone, > > > I'm new with linux and want to learn more about this wonderful os > > > <http://www.linuxforums.org/forum/#>. I hope I will not make too many > > > mistakes on the first time on the forum. > > > Trying to make a backup <http://www.linuxforums.org/forum/#> with dum= p > I > > > ran into a serious troble that is: > > > I want to make a dump of a filesystem mounted on a partition of an hd > > > lets say the name of the device is /dev/hdb1. this is the only > partition > > > of hdb. I tried with the command: > > > > > > dump -0Mu -f <filename> -z2 /dev/hdb1 > > > > > > <filename> is in an ext2 filesystem, hdb1 is an ext2 filsystem. > > > dump begins to work and begins to write the 1st volume, it arrives at > > > about 6% and then dump hangs up. > > > the filesystem hdb1 is unmounted and has 50G of data > > > <http://www.linuxforums.org/forum/#>, the filesystem on wich I attemp= t > > > to save the file is a 90G hd. > > > > > > > What do you mean exactly when you say that dump "hangs up" ? 
Is it > > crashing, do you get write errors or something, or the process just > > doesn't advance anymore ? Please post the full output of the dump > > command, it may contain useful information for debugging. > > Hi, > > Due to a bad configuration setting in my mailer (I was on road/dialup > connection and I wasn't using my regular mailer), the Reply-To: in my > original answer was badly positionned (and the CC: to the list was also > missing). > > As a result, I have not received any of your answers since the original > post (but I have seen in the mail server logs that you send a few > replies), so please resend the messages once again. > > Sorry about this. > > Stelian. > -- > Stelian Pop <st...@po...> > > |
From: Vincenzo V. <vin...@gm...> - 2006-10-04 02:23:49
|
Hi all, I don't know what's happening, is it my fault? Hope you'll receive this mail. Here is the message I've tried to send for a week I don't know you've just received: I can't send messages to the dump-user mailing list and I do not know why, anyway I hope you'll receive this message with all the information you asked me. Here they are: my kernel version is :linux-2.4.7-10 redhat and the dump version is 0.4b22(using libext2fs 1.23) the output of the of the command: > dump -0Mu -f <filename> -z2 /dev/hdb1 - Mostra testo tra virgolette - DUMP: Date of this level 0 dump: Tue Sep 26 09:20:06 2006 DUMP: Dumping /dev/hdb1 (/mnt/Zeta) to /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 DUMP: Label: none DUMP: Compressing output at compression level 2 DUMP: mapping (Pass I) [regular files] DUMP: mapping (Pass II) [directories] DUMP: estimated 35933398 tape blocks. DUMP: Dumping volume 1 on /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 DUMP: Volume 1 started with block 1 at: Tue Sep 26 09:22:31 2006 DUMP: dumping (Pass III) [directories] DUMP: dumping (Pass IV) [regular files] DUMP: 1.56% done at 1873 kB/s, finished in 5:14 DUMP: 3.08% done at 1841 kB/s, finished in 5:15 DUMP: 4.58% done at 1827 kB/s, finished in 5:12 DUMP: 6.08% done at 1820 kB/s, finished in 5:09 DUMP: 7.59% done at 1818 kB/s, finished in 5:04 DUMP: 9.15% done at 1826 kB/s, finished in 4:57 DUMP: 10.77% done at 1842 kB/s, finished in 4:50 DUMP: 12.28% done at 1838 kB/s, finished in 4:45 DUMP: 13.85% done at 1843 kB/s, finished in 4:39 DUMP: 15.37% done at 1841 kB/s, finished in 4:35 DUMP: 16.88% done at 1838 kB/s, finished in 4:30 DUMP: 18.40% done at 1836 kB/s, finished in 4:26 DUMP: 20.00% done at 1842 kB/s, finished in 4:20 DUMP: 21.56% done at 1844 kB/s, finished in 4:14 DUMP: 23.14% done at 1848 kB/s, finished in 4:09 DUMP: 24.68% done at 1847 kB/s, finished in 4:04 DUMP: 26.19% done at 1844 kB/s, finished in 3:59 DUMP: 27.77% done at 1847 kB/s, finished in 3:54 DUMP: 29.27% done at 1845 kB/s, finished in 3:49 DUMP: 30.80% done at 1844 kB/s, finished in 3:44 DUMP: 32.24% done at 1838 kB/s, finished in 3:40 it gives me not any error message or anything else, it just stops and, after exiting with Ctrl+C, no process are viewed with >top but if I use somekind of >fuser it displays me some process regarding dump and the <filename>. The fact is that I've tried with both the devices /dev/hdb and /dev/hdb1 (filesystem to dump) and sometimes it arrives to 70% and it gives is always the same thing: it stops. thank you in advance Vincent |
From: Stelian P. <st...@po...> - 2006-10-03 19:53:31
|
Le mardi 03 octobre 2006 =E0 11:01 +0200, Vincenzo Versi a =E9crit : > Ok Stelian thanks, here is the message: > I can't send messages to the dump-user mailing list and I do not know > why, anyway I hope you'll receive this message with all the > information you asked me. Here they are: You need to be subscribed to the list to post (we were getting too much spam when this was not in place). > =20 > my kernel version is :linux-2.4.7-10 redhat and the dump version is > 0.4b22 (using libext2fs 1.23) Ok. Your version of dump is prehistoric, this version was released more than 5 years ago. Your linux kernel and probably your entire distribution is very old too, but you probably have your reasons to keep it that way. However, the latest version of dump should hopefully build and work just fine on your system, and before we look deeper into your problem you should just check if this bug was not fixed in more recent versions of dump. Stelian. [I'm leaving the rest of the mail quoted below for reference on the mailing list...] > the output of the of the command: > =20 > > dump -0Mu -f <filename> -z2 /dev/hdb1=20 > =20 > DUMP: Date of this level 0 dump: Tue Sep 26 09:20:06 2006 > DUMP: Dumping /dev/hdb1 (/mnt/Zeta) > to /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 > DUMP: Label: none > DUMP: Compressing output at compression level 2=20 > DUMP: mapping (Pass I) [regular files] > DUMP: mapping (Pass II) [directories] > DUMP: estimated 35933398 tape blocks. > DUMP: Dumping volume 1 > on /mnt/Zeta.bak/backups/zeta_bk/L0/dump0.20060925.001 > DUMP: Volume 1 started with block 1 at: Tue Sep 26 09:22:31 2006=20 > DUMP: dumping (Pass III) [directories] > DUMP: dumping (Pass IV) [regular files] > DUMP: 1.56% done at 1873 kB/s, finished in 5:14 > DUMP: 3.08% done at 1841 kB/s, finished in 5:15 > DUMP: 4.58% done at 1827 kB/s, finished in 5:12=20 > DUMP: 6.08% done at 1820 kB/s, finished in 5:09 > DUMP: 7.59% done at 1818 kB/s, finished in 5:04 > DUMP: 9.15% done at 1826 kB/s, finished in 4:57 > DUMP: 10.77% done at 1842 kB/s, finished in 4:50 > DUMP: 12.28% done at 1838 kB/s, finished in 4:45 > DUMP: 13.85% done at 1843 kB/s, finished in 4:39 > DUMP: 15.37% done at 1841 kB/s, finished in 4:35 > DUMP: 16.88% done at 1838 kB/s, finished in 4:30 > DUMP: 18.40% done at 1836 kB/s, finished in 4:26 > DUMP: 20.00% done at 1842 kB/s, finished in 4:20 > DUMP: 21.56% done at 1844 kB/s, finished in 4:14 > DUMP: 23.14% done at 1848 kB/s, finished in 4:09 > DUMP: 24.68% done at 1847 kB/s, finished in 4:04=20 > DUMP: 26.19% done at 1844 kB/s, finished in 3:59 > DUMP: 27.77% done at 1847 kB/s, finished in 3:54 > DUMP: 29.27% done at 1845 kB/s, finished in 3:49 > DUMP: 30.80% done at 1844 kB/s, finished in 3:44=20 > DUMP: 32.24% done at 1838 kB/s, finished in 3:40 > =20 > it gives me not any error message or anything else, it just stops and, > after exiting with Ctrl+C, no process are viewed with >top but if I > use somekind of >fuser it displays me some process regarding dump and > the <filename>.=20 > The fact is that I've tried with both the devices /dev/hdb > and /dev/hdb1 (filesystem to dump) and sometimes it arrives to 70% and > it gives is always the same thing: it stops. > =20 > thank you in advance > Vincent > =20 >=20 >=20 > =20 > 2006/10/3, Stelian Pop <st...@po...>:=20 > > Vincenzo Versi a =E9crit : > > > Hi everyone, > > > I'm new with linux and want to learn more about this > wonderful os=20 > > > <http://www.linuxforums.org/forum/#>. 
I hope I will not > make too many > > > mistakes on the first time on the forum. > > > Trying to make a backup < > http://www.linuxforums.org/forum/#> with dump I > > > ran into a serious troble that is: > > > I want to make a dump of a filesystem mounted on a > partition of an hd=20 > > > lets say the name of the device is /dev/hdb1. this is the > only partition > > > of hdb. I tried with the command: > > > > > > dump -0Mu -f <filename> -z2 /dev/hdb1 > > > > > > <filename> is in an ext2 filesystem, hdb1 is an ext2 > filsystem.=20 > > > dump begins to work and begins to write the 1st volume, it > arrives at > > > about 6% and then dump hangs up. > > > the filesystem hdb1 is unmounted and has 50G of data > > > <http://www.linuxforums.org/forum/#>, the filesystem on > wich I attempt > > > to save the file is a 90G hd. > > > > > > > What do you mean exactly when you say that dump "hangs up" ? > Is it=20 > > crashing, do you get write errors or something, or the > process just > > doesn't advance anymore ? Please post the full output of the > dump > > command, it may contain useful information for debugging. > =20 > Hi, > =20 > Due to a bad configuration setting in my mailer (I was on > road/dialup > connection and I wasn't using my regular mailer), the > Reply-To: in my > original answer was badly positionned (and the CC: to the list > was also=20 > missing). > =20 > As a result, I have not received any of your answers since the > original > post (but I have seen in the mail server logs that you send a > few > replies), so please resend the messages once again. > =20 > Sorry about this. > =20 > Stelian. > -- > Stelian Pop <st...@po...> > =20 >=20 --=20 Stelian Pop <st...@po...> |
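For anyone in the same situation, moving to the then-current release is the usual source build (a sketch, assuming the 0.4b41 tarball discussed elsewhere in this thread; the install prefix defaults to /usr/local):

    tar xzf dump-0.4b41.tar.gz
    cd dump-0.4b41
    ./configure        # see ./configure --help for optional features
    make
    make install       # as root; installs dump, restore and the man pages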
From: Stelian P. <st...@po...> - 2006-10-03 08:28:02
|
> Vincenzo Versi wrote:
> > Hi everyone,
> > I'm new to Linux and want to learn more about this wonderful OS. I hope I will not make too many mistakes on my first time on the forum.
> > Trying to make a backup with dump I ran into a serious trouble, that is:
> > I want to make a dump of a filesystem mounted on a partition of a hard disk; let's say the name of the device is /dev/hdb1. This is the only partition of hdb. I tried with the command:
> >
> > dump -0Mu -f <filename> -z2 /dev/hdb1
> >
> > <filename> is on an ext2 filesystem, and hdb1 is an ext2 filesystem.
> > dump begins to work and begins to write the 1st volume; it arrives at about 6% and then dump hangs up.
> > The filesystem hdb1 is unmounted and has 50G of data; the filesystem on which I attempt to save the file is a 90G hd.
>
> What do you mean exactly when you say that dump "hangs up"? Is it crashing, do you get write errors or something, or does the process just not advance anymore? Please post the full output of the dump command, it may contain useful information for debugging.

Hi,

Due to a bad configuration setting in my mailer (I was on a road/dialup connection and I wasn't using my regular mailer), the Reply-To: in my original answer was badly positioned (and the CC: to the list was also missing).

As a result, I have not received any of your answers since the original post (but I have seen in the mail server logs that you sent a few replies), so please resend the messages once again.

Sorry about this.

Stelian.
--
Stelian Pop <st...@po...>
From: Vincenzo V. <vin...@gm...> - 2006-09-26 19:52:03
|
Hi everyone, I'm new to Linux and want to learn more about this wonderful OS. I hope I will not make too many mistakes on my first time on the forum. Trying to make a backup with dump I ran into a serious trouble, that is: I want to make a dump of a filesystem mounted on a partition of a hard disk; let's say the name of the device is /dev/hdb1. This is the only partition of hdb. I tried with the command:

dump -0Mu -f <filename> -z2 /dev/hdb1

<filename> is on an ext2 filesystem, and hdb1 is an ext2 filesystem. dump begins to work and begins to write the 1st volume; it arrives at about 6% and then dump hangs up. The filesystem hdb1 is unmounted and has 50G of data; the filesystem on which I attempt to save the file is a 90G hd. I'd like to know what the trouble with it is. I have been trying to make this dump for a long time, and when I tried with hdb instead of hdb1 it seemed to work better, I mean it hangs up at 30%. Please help me, I'm going crazy! The very best to everyone. cecio
From: Kenneth P. <sh...@se...> - 2006-09-15 20:16:17
|
On Friday, September 15, 2006 1:02 PM -0700 Anthony Ewell <ae...@gb...> wrote: > CentOS 4.3 & 4.4 (same as Red Hat Enterprise Linux, only > cheaper). http://centos.org I see that CentOS has a bug tracker (Mantis). My practice with Red Hat and later Fedora has been to open an issue with severity "enhancement" requesting an update to new upstream releases (and not just for dump). This ensures that manpower is allocated to the job using the distro's own resource allocation facility. I'm not sure how Mantis organizes things. Red Hat uses Bugzilla and I file the bug under the component "dump". |
From: Anthony E. <ae...@gb...> - 2006-09-15 20:01:55
|
Kenneth Porter wrote: > --On Thursday, September 14, 2006 3:46 PM -0700 Tony Ewell > <ae...@gb...> wrote: > >> Is there a YUM repository out there somewhere >> for dump/restore? > > For what distro? > > Fedora tracks the upstream package quite well in its repo. CentOS 4.3 & 4.4 (same as Red Hat Enterprise Linux, only cheaper). http://centos.org Also, I noticed that I actually got away with an "rpmbuild -tb xxxx.tar.gz". Cool! -T |
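For reference, the rpmbuild route Anthony mentions goes roughly like this (a sketch; the tarball name and output directory are examples and vary with the rpmbuild setup):

    # build binary RPMs straight from the release tarball (it carries its own spec file)
    rpmbuild -tb dump-0.4b41.tar.gz

    # the packages land under rpmbuild's RPMS directory, e.g. /usr/src/redhat/RPMS/i386/
    # on older Red Hat-style systems; install them with rpm
    rpm -Uvh /usr/src/redhat/RPMS/i386/dump-*.rpm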
From: Kenneth P. <sh...@se...> - 2006-09-15 14:35:29
|
--On Thursday, September 14, 2006 3:46 PM -0700 Tony Ewell <ae...@gb...> wrote: > Is there a YUM repository out there somewhere > for dump/restore? For what distro? Fedora tracks the upstream package quite well in its repo. |
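Where the distro repository already carries dump (as Fedora's does), pulling the packaged version is a one-liner (a sketch):

    yum install dump      # dump and restore ship in the same package
    rpm -q dump           # shows which upstream release the distro currently packages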
From: Tony E. <ae...@gb...> - 2006-09-14 22:46:57
|
Hi, Is there a YUM repository out there somewhere for dump/restore? Many thanks, -T |
From: Tony N. <ton...@ge...> - 2006-08-01 14:40:44
|
SELinux (used on Fedora) can cause miscompares when doing "restore -C" if the SELinux policy used in the dump differs from the current SELinux policy (specifically, if the dump did not use MLS and the current policy does). This patch provides a couple of ways to cope with that issue, but it has been languishing for lack of testing. If you use SELinux, and are now using MLS, it would be helpful to test this patch, and prudent to check that "restore -C" still works for your older dumps. It is available at dump's sourceforge bugzilla https://sourceforge.net/tracker/?func=detail&atid=101306&aid=1475895&group_id=1306 and Fedora's bugzilla https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=189845 ____________________________________________________________________ TonyN.:' <mailto:ton...@ge...> ' <http://www.georgeanelson.com/> |
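For anyone willing to run that check, comparing an existing archive against the live filesystem is a single command (a sketch; the dump file path is a placeholder):

    # restore changes into the root of the dumped filesystem and compares the
    # archive against what is on disk; mismatches, including the SELinux-context
    # miscompares described above, are reported
    restore -C -v -f /backups/opt.0.dump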
From: Stelian P. <st...@po...> - 2006-07-10 08:14:03
|
On Monday, 10 July 2006 at 08:53 +0300, Ivan Lezhnev, Jr. wrote:
> Hello Stelian.
>
> Is there anything to be done to reduce spam messages on this mailing list?
> I just felt like saying, there's quite a lot of it in here :\.

Indeed. I do have personal spam filters on the incoming mail, so I don't see much of this spam originating from dump's mailing lists. But I've seen some notices from Sourceforge's mailman telling me that several users were automatically unsubscribed because they were rejecting mails (spam, of course). So for now I've put both mailing lists in member-only posting mode; this should hopefully eliminate all the spam. Thanks for the heads-up.

Stelian.
--
Stelian Pop <st...@po...>
From: Ivan L. Jr. <le...@gm...> - 2006-07-10 05:46:16
|
Hello Stelian. Is there anything to be done to reduce spam messages on this mailing list? I just felt like saying, there's quite a lot of it in here :\. Take care -- Ivan Lezhnev, Jr. Europe, Ukraine, Simferopol. Local Time: Mon Jul 10 08:50:13 EEST 2006 Using: Slackware GNU\Linux | Kernel 2.6.10 #8 Disassemble to assemble something pure |
From: Stelian P. <st...@po...> - 2006-07-05 10:22:11
|
On Wednesday, 5 July 2006 at 11:42 +0200, Nick Garfield wrote:
> Greetings fellow Dump users,
>
> I recently received a request to allow user extended attributes. As I try not to be the BOFH I said, "sounds reasonable, why not" :-)
>
> File systems on the machine are now mounted like
>
> /dev/sda2 on /opt type ext3 (rw,user_xattr)
>
> And fstab entries are like:
>
> /dev/sda1  /     ext3  defaults            1 1
> /dev/sda2  /opt  ext3  defaults,user_xattr 1 2
> Etc
> Etc
>
> It later occurred to me that I don't know how dump behaves with these extended attributes.
>
> Dump looks like this:
>
> dump 0.4b37 (using libext2fs 1.32 of 09-Nov-2002)
>
> And the Linux kernel looks like this:
>
> Linux dufus 2.4.21-40.EL.cernsmp #1 SMP Fri Mar 17 00:53:42 CET 2006 i686 i686 i386 GNU/Linux
>
> The kernel is a slightly modified RHEL3 (see https://www.scientificlinux.org/)
>
> My question is: Will extended attributes be backed up by dump on the /opt volume?

Dump fully supports extended attributes. However, there were some issues with this support in some versions of dump and/or restore, so I'd suggest you upgrade to the latest version (0.4b41).

Stelian.
--
Stelian Pop <st...@po...>
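A quick way to check this end to end on a scratch machine (a sketch; the file name, attribute and paths are made up for illustration, and it assumes the attr tools plus a recent dump/restore are installed):

    # tag a file with a user extended attribute and confirm it is there
    setfattr -n user.comment -v "keep me" /opt/testfile
    getfattr -d /opt/testfile

    # level-0 dump of the filesystem, then rebuild it in a scratch directory
    dump -0 -f /tmp/opt.0.dump /opt
    mkdir /tmp/opt.restored && cd /tmp/opt.restored
    restore -rf /tmp/opt.0.dump

    # the attribute should survive the round trip
    getfattr -d ./testfile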
From: Nick G. <Nic...@ce...> - 2006-07-05 09:42:20
|
Greetings fellow Dump users,

I recently received a request to allow user extended attributes. As I try not to be the BOFH I said, "sounds reasonable, why not" :-)

File systems on the machine are now mounted like

/dev/sda2 on /opt type ext3 (rw,user_xattr)

And fstab entries are like:

/dev/sda1  /     ext3  defaults            1 1
/dev/sda2  /opt  ext3  defaults,user_xattr 1 2
Etc
Etc

It later occurred to me that I don't know how dump behaves with these extended attributes.

Dump looks like this:

dump 0.4b37 (using libext2fs 1.32 of 09-Nov-2002)

And the Linux kernel looks like this:

Linux dufus 2.4.21-40.EL.cernsmp #1 SMP Fri Mar 17 00:53:42 CET 2006 i686 i686 i386 GNU/Linux

The kernel is a slightly modified RHEL3 (see https://www.scientificlinux.org/)

My question is: Will extended attributes be backed up by dump on the /opt volume?

Any replies are, of course, appreciated.

Regards,

Nick Garfield
IT/CS Group
CERN
Geneva
Switzerland