kosmosfs-users Mailing List for Kosmos File System (KFS)
Status: Alpha
Brought to you by:
sriramsrao
Archive activity (messages per month):
2007: Sep (1) | Oct (4) | Nov (1) | Dec (12)
2008: Jan (9) | Feb (12) | Mar (11) | Apr (13) | May (11) | Jun (4) | Jul (7) | Aug (21) | Sep (58) | Oct (11) | Nov (39) | Dec (36)
2009: Jan (27) | Feb (19) | Mar (7) | Apr (116) | May (36) | Jun (16) | Jul (8) | Aug (19) | Sep (5) | Oct (1) | Nov (2)
2010: Feb (2) | Mar (2) | Jun (2) | Sep (1) | Dec (2)
2011: Jan (2) | Mar (2) | Apr (2) | May (15) | Aug (1)
From: jidalyg_8711 <jid...@16...> - 2011-08-26 08:46:06
|
hi, I have used kfs for a long time, and it is very stable. But recently all the KFS chunkservers crashed at the same time, and the core dump's backtrace is:
#0  0x0000003160e30215 in raise () from /lib64/libc.so.6
#1  0x0000003160e31cc0 in abort () from /lib64/libc.so.6
#2  0x0000003160e29696 in __assert_fail () from /lib64/libc.so.6
#3  0x00000000004caf27 in KFS::ChunkManager::ReadChunkDone (this=0x7d0f40, op=0x1d96bcb0) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/ChunkManager.cc:1593
#4  0x0000000000503b6e in KFS::ReadOp::HandleDone (this=0x1d96bcb0, code=4, data=0x43263d20) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/KfsOps.cc:891
#5  0x00000000004be94e in KFS::ObjectMethod<KFS::ReadOp>::execute (this=0x1d96bcb8, code=4, data=0x43263d20) at /home/yangguang/kfs-0.5.lyg/src/cc/libkfsIO/KfsCallbackObj.h:88
#6  0x00000000004bab84 in KFS::KfsCallbackObj::HandleEvent (this=0x1d96bcb0, code=4, data=0x43263d20) at /home/yangguang/kfs-0.5.lyg/src/cc/libkfsIO/KfsCallbackObj.h:129
#7  0x00000000004ed32e in KFS::DiskIo::IoCompletion (this=0x1d90c5b0, inBufferPtr=0x43263d20, inRetCode=0, inSyncFlag=false) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/DiskIo.cc:1185
#8  0x00000000004eda07 in KFS::DiskIo::RunCompletion (this=0x1d90c5b0) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/DiskIo.cc:1170
#9  0x00000000004f3273 in KFS::DiskIoQueues::RunCompletion (this=0x1d8322f0) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/DiskIo.cc:271
#10 0x00000000004f32ab in KFS::DiskIoQueues::Timeout (this=0x1d8322f0) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/DiskIo.cc:539
#11 0x0000000000547eb2 in KFS::ITimeout::TimerExpired (this=0x1d8322f0, nowMs=1313941877387) at /home/yangguang/kfs-0.5.lyg/src/cc/libkfsIO/ITimeout.h:92
#12 0x000000000054618e in KFS::NetManager::MainLoop (this=0x7d1e00) at /home/yangguang/kfs-0.5.lyg/src/cc/libkfsIO/NetManager.cc:409
#13 0x00000000004e0f3d in netWorker (dummy=0x0) at /home/yangguang/kfs-0.5.lyg/src/cc/chunk/ChunkServer.cc:51
#14 0x0000003161a06367 in start_thread () from /lib64/libpthread.so.0
#15 0x0000003160ed2f7d in clone () from /lib64/libc.so.6
I have no idea about this bug; can anybody help me? Thanks yaronli 2011-08-26 jidalyg_8711 |
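Not part of the original message, but for readers hitting a similar assertion failure: a backtrace like the one above can be re-examined from the saved core file with gdb in batch mode. This is only a sketch; the chunkserver binary and core-file paths are placeholders.

```shell
# Sketch: inspect the failing frame (#3, ChunkManager::ReadChunkDone) from
# a chunkserver core dump. Binary and core paths below are placeholders.
cat > /tmp/kfs-core.gdb <<'EOF'
bt
frame 3
info locals
EOF
# Then run gdb in batch mode (shown with echo, since the paths are placeholders):
echo gdb -batch -x /tmp/kfs-core.gdb /path/to/chunkserver /path/to/core
```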
From: Sriram R. <sri...@gm...> - 2011-05-16 03:30:48
|
Awesome. Thanks. Can you please use the Google Groups list for this project? Thanks. Sriram
On Sun, May 15, 2011 at 3:34 PM, Tung Nguyen <bet...@ya...> wrote:
> [...]
> _______________________________________________
> Kosmosfs-users mailing list
> Kos...@li...
> https://lists.sourceforge.net/lists/listinfo/kosmosfs-users |
From: Tung N. <bet...@ya...> - 2011-05-15 22:34:25
|
Hi all, I solved the problem. I forgot to replace the old version of the kfs jar in the lib directory of hadoop. Basically, the kfs jar bundled with hadoop is the old version (0.2.2); I used the new version of KFS. I hope this is helpful for someone who experiences the same problem. Thanks TT
--- On Mon, 5/9/11, Tung Nguyen <bet...@ya...> wrote:
> [...] |
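The fix described above, swapping Hadoop's bundled kfs-0.2.2 jar for the jar from the KFS release actually deployed, amounts to a file swap. A sketch with assumed paths and jar names; both are illustrative and must be adjusted to the local install:

```shell
# Assumed locations and jar names -- adjust to your install.
HADOOP_HOME=${HADOOP_HOME:-/home/mine/hadoop-0.20.2}
KFS_BUILD=${KFS_BUILD:-/home/mine/kfs-0.5/build}
# Drop the stale kfs jar bundled with Hadoop, if present...
rm -f "$HADOOP_HOME"/lib/kfs-0.2.2.jar
# ...and copy in the jar from the deployed KFS build (name is illustrative).
if [ -f "$KFS_BUILD/kfs-0.5.jar" ]; then
    cp "$KFS_BUILD/kfs-0.5.jar" "$HADOOP_HOME/lib/"
fi
```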
From: Tung N. <bet...@ya...> - 2011-05-09 14:26:46
|
Hi Sriram, I copied all files in the kfs/build/lib dir (except the static folder) and issued
#bin/hadoop fs -fs kfs://localhost:20000 -ls /
I got:
ls: Can not create a Path from an empty string
Usage: java FsShell [-ls <path>]
But I did have kfs running:
./kfsshell -s localhost -p 20000
KfsShell> ls
dumpster
test
KfsShell>
TT
--- On Mon, 5/9/11, Sriram Rao <sri...@gm...> wrote:
> [...] |
From: Sriram R. <sri...@gm...> - 2011-05-09 04:42:36
|
You are missing libkfs_access.so. Can you copy this file over? Sriram
On Sunday, May 8, 2011, Tung Nguyen <bet...@ya...> wrote:
> [...] |
From: Tung N. <bet...@ya...> - 2011-05-09 02:26:19
|
Hi Sriram, Thank you for your reply. Actually, http://sourceforge.net/apps/trac/kosmosfs/wiki/UsingWithHadoop does have an instruction about LD_LIBRARY_PATH. I followed that, so LD_LIBRARY_PATH points to kfs/lib. I also exported it:
$ echo $LD_LIBRARY_PATH
/home/mine/kfs-0.5/build/lib
Anyway, I tried to copy that lib to the Hadoop native lib directory:
ls lib/native/Linux-amd64-64/
libhadoop.a  libhadoop.so  libhadoop.so.1.0.0  libhadoop.la  libhadoop.so.1  libkfsClient.so
I still got the same error:
java.lang.UnsatisfiedLinkError: no kfs_access in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1709)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at org.kosmix.kosmosfs.access.KfsAccess.<clinit>(KfsAccess.java:103)
...
Thanks TT
--- On Sun, 5/8/11, Sriram Rao <sri...@gm...> wrote:
> [...] |
From: Sriram R. <sri...@gm...> - 2011-05-08 18:46:36
|
The bit you missed:
Update the LD_LIBRARY_PATH environment variable so that libkfsClient.so can be loaded
The easiest way to fix this is to copy libkfsaccess.so from build/lib to: /home/mine/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64
Sriram
On Sun, May 8, 2011 at 8:22 AM, Tung Nguyen <bet...@ya...> wrote:
> [...] |
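Sriram's two suggestions can be sketched as shell steps. The paths are the ones quoted in this thread and are assumptions about the local layout:

```shell
# Option 1: let the dynamic loader find the KFS client/JNI libraries
# (path taken from this thread; adjust to your build tree).
KFS_BUILD=${KFS_BUILD:-/home/mine/kfs-0.5/build}
export LD_LIBRARY_PATH="$KFS_BUILD/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Option 2: copy libkfsaccess.so next to Hadoop's own native libraries,
# which are already on java.library.path.
NATIVE_DIR=${NATIVE_DIR:-/home/mine/hadoop-0.20.2/lib/native/Linux-amd64-64}
if [ -f "$KFS_BUILD/lib/libkfsaccess.so" ]; then
    cp "$KFS_BUILD/lib/libkfsaccess.so" "$NATIVE_DIR/"
fi
```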
From: Tung N. <bet...@ya...> - 2011-05-08 15:22:31
|
Hi, I tried to use Hadoop with KFS. I followed the instructions at http://sourceforge.net/apps/trac/kosmosfs/wiki/UsingWithHadoop but when I issued
#bin/hadoop fs -fs kfs://localhost:20000 -ls /
I got the following error. I used hadoop-0.20.0 and KFS-0.5:
java.lang.UnsatisfiedLinkError: no kfs_access in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1709)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at org.kosmix.kosmosfs.access.KfsAccess.<clinit>(KfsAccess.java:103)
        at org.apache.hadoop.fs.kfs.KFSImpl.<init>(KFSImpl.java:45)
        at org.apache.hadoop.fs.kfs.KosmosFileSystem.initialize(KosmosFileSystem.java:70)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.FsShell.init(FsShell.java:82)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1731)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)
Unable to load kfs_access native library: /home/mine/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64
I am sure that KFS and Hadoop are working fine. I used kfsshell to create dirs and copy files to kfs and hdfs without any problem.
In the /home/mine/hadoop-0.20.2/lib/native/Linux-amd64-64 dir I have:
-rw-r--r-- 1 mine mine 115416 2010-02-19 03:07 libhadoop.a
-rw-r--r-- 1 mine mine    904 2010-02-19 03:07 libhadoop.la
-rw-r--r-- 1 mine mine  75669 2010-02-19 03:07 libhadoop.so
-rw-r--r-- 1 mine mine  75669 2010-02-19 03:07 libhadoop.so.1
-rw-r--r-- 1 mine mine  75669 2010-02-19 03:07 libhadoop.so.1.0.0
Please help me out. Thank you TT |
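The UnsatisfiedLinkError means the JVM could not find a kfs_access native library in any directory on java.library.path, which is consistent with the listing above showing only libhadoop.* files. A quick check (a sketch; the directory is the one named in this message):

```shell
# Check whether the JNI library the error complains about is present.
NATIVE_DIR=${NATIVE_DIR:-/home/mine/hadoop-0.20.2/lib/native/Linux-amd64-64}
if ls "$NATIVE_DIR"/libkfsaccess.so >/dev/null 2>&1; then
    echo "kfs_access native library found in $NATIVE_DIR"
else
    echo "libkfsaccess.so missing from $NATIVE_DIR"
fi
```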
From: Sriram R. <sri...@gm...> - 2011-05-06 23:01:46
|
Hi,

We have moved the KFS project hosting to Google Code (it offers easier/better tools). I have created a Google Group for this project (kfs-user); I'll send an invite from it so that you can add yourselves to the mailing list.

Sriram
|
From: Tung N. <bet...@ya...> - 2011-05-06 15:07:11
|
Hi Sriram,
I found it. It was my fault. When I restarted everything, I checked with the ps command to make sure the metaserver was running and discovered that two metaserver processes were running :D. I killed them all and restarted. It works now (with the build compiled from source, not the copied binaries).
Thank you very much again for your patience.
TT

--- On Fri, 5/6/11, Sriram Rao <sri...@gm...> wrote:
From: Sriram Rao <sri...@gm...>
Subject: Re: [Kosmosfs-users] KFS on Centos and Ubuntu
To: "Tung Nguyen" <bet...@ya...>
Date: Friday, May 6, 2011, 10:37 AM

Hi Tung,
Can you put the cluster key in the chunkserver.prp file and restart everything? Also, please include the .prp file from the chunkserver as well (in the next mail). This is very odd...
Sriram
|
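The root cause reported above (two metaserver processes alive at once, so the second instance could not behave sanely) is easy to guard against before a restart. A small sketch, assuming the binary is named `metaserver` as in the KFS scripts; `count_named` is a made-up helper, not part of KFS:

```shell
# count_named: print how many running processes have exactly this command
# name (exact match on the comm field, so this pipeline itself is not
# counted).
count_named() {
  ps -eo comm= | awk -v name="$1" '$1 == name' | wc -l
}

# Before restarting, make sure no stale instance is still holding the
# ports; if the count is above zero, kill the old processes first.
count_named metaserver
```

The same check applies to the chunkserver binaries before redeploying them.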
From: Tung N. <bet...@ya...> - 2011-05-06 14:19:22
|
Hi Sriram,
I tried to run without the cluster key and the log file seems to be the same:

05-06-2011 10:29:33.746 INFO - (ChunkServer_main.cc:313) Starting chunkserver...
05-06-2011 10:29:33.794 DEBUG - (ChunkServer_main.cc:374) md5sum calculated from binary: 0c0eb3e3e5aeb78799e66fd74eded0cb
05-06-2011 10:29:33.794 INFO - (ChunkServer_main.cc:321) md5sum to send to metaserver: 0c0eb3e3e5aeb78799e66fd74eded0cb
05-06-2011 10:29:33.804 DEBUG - (ChunkManager.cc:346) setting # of open files to: 1024
05-06-2011 10:29:33.806 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23291871232
05-06-2011 10:29:33.806 INFO - (ChunkServer.cc:105) gethostname returned: lab314pc13
05-06-2011 10:29:34.797 INFO - (ChunkManager.cc:2568) Checking chunkdirs...
05-06-2011 10:29:34.797 INFO - (MetaServerSM.cc:169) connecting to metaserver mistserv 20100
05-06-2011 10:29:34.798 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23291871232
05-06-2011 10:29:34.798 INFO - (MetaServerSM.cc:250) Sent hello to meta server: meta-hello: mylocation = 172.31.18.15 40000 cluster key:
05-06-2011 10:29:34.799 DEBUG - (MetaServerSM.cc:479) recv meta cmd: seq: 928410945073645376 meta-heartbeat:
05-06-2011 10:29:34.800 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23291871232
05-06-2011 10:29:34.800 FATAL - (MetaServerSM.cc:372) Aborting due to cluster key mismatch; our key:

The cluster key in the metaserver is the same as in the chunkserver:

# vi MetaServer.prp
metaServer.clientPort = 20000
metaServer.chunkServerPort = 20100
metaServer.clusterKey = lagoon-cluster
metaServer.cpDir = /home/mine/kfsRun/meta/bin/kfscp
metaServer.logDir = /home/mine/kfsRun/meta/bin/kfslog

TT

--- On Fri, 5/6/11, Sriram Rao <sri...@gm...> wrote:
From: Sriram Rao <sri...@gm...>
Subject: Re: [Kosmosfs-users] KFS on Centos and Ubuntu
To: "Tung Nguyen" <bet...@ya...>
Date: Friday, May 6, 2011, 12:24 AM

Can you run without the cluster key in the chunkserver config? Also, what is the cluster key in the metaserver's config?
Sriram
|
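Both daemons read their cluster key from their .prp files, and the strings must match byte-for-byte; the FATAL line in the log above ends with an empty `our key:`, which suggests the chunkserver was effectively running with no key at all. A sketch of the matching pair; the chunkserver property name follows the KFS-0.5 sample configs and should be checked against your own ChunkServer.prp:

```
# MetaServer.prp (values from the thread)
metaServer.clusterKey = lagoon-cluster

# ChunkServer.prp (property name per the KFS-0.5 sample configs)
chunkServer.clusterKey = lagoon-cluster
```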
From: Tung N. <bet...@ya...> - 2011-05-06 04:15:36
|
Hi Sriram,
Sorry for spamming again. I was sleepy and forgot to give the backtrace:

(gdb) bt 10
#0  0x0000003c4f030265 in raise () from /lib64/libc.so.6
#1  0x0000003c4f031d10 in abort () from /lib64/libc.so.6
#2  0x0000003c4f0296e6 in __assert_fail () from /lib64/libc.so.6
#3  0x00000000004cf66b in KFS::ChunkManager::~ChunkManager (this=0x7d90c0, __in_chrg=<value optimized out>) at /home/mine/kfs-0.5/src/cc/chunk/ChunkManager.cc:388
#4  0x00000000004cf6e6 in __tcf_7 () at /home/mine/kfs-0.5/src/cc/chunk/ChunkManager.cc:79
#5  0x0000003c4f0333a5 in exit () from /lib64/libc.so.6
#6  0x0000000000524578 in KFS::MetaServerSM::HandleReply (this=0x7d9c60, iobuf=0xa62fab0, msgLen=48) at /home/mine/kfs-0.5/src/cc/chunk/MetaServerSM.cc:375
#7  0x0000000000524e93 in KFS::MetaServerSM::HandleMsg (this=0x7d9c60, iobuf=0xa62fab0, msgLen=48) at /home/mine/kfs-0.5/src/cc/chunk/MetaServerSM.cc:351
#8  0x0000000000523140 in KFS::MetaServerSM::HandleRequest (this=0x7d9c60, code=1, data=0xa62fab0) at /home/mine/kfs-0.5/src/cc/chunk/MetaServerSM.cc:280
#9  0x0000000000526c02 in KFS::ObjectMethod<KFS::MetaServerSM>::execute (this=0x7d9c68, code=1, data=0xa62fab0) at /home/mine/kfs-0.5/src/cc/libkfsIO/KfsCallbackObj.h:88

TT

--- On Thu, 5/5/11, Sriram Rao <sri...@gm...> wrote:
From: Sriram Rao <sri...@gm...>
Subject: Re: [Kosmosfs-users] KFS on Centos and Ubuntu
To: "Tung Nguyen" <bet...@ya...>
Cc: kos...@li...
Date: Thursday, May 5, 2011, 5:27 PM

Hi Tung,
Can you load the core into gdb and get me a backtrace? Thanks.
Sriram
|
From: Tung N. <bet...@ya...> - 2011-05-06 04:07:11
|
Sriram,
I got the following in the logs dir. I hope this helps. However, the cluster keys of the chunkserver and metaserver are the same.

05-06-2011 00:00:55.822 INFO - (ChunkServer_main.cc:313) Starting chunkserver...
05-06-2011 00:00:55.870 DEBUG - (ChunkServer_main.cc:374) md5sum calculated from binary: 0c0eb3e3e5aeb78799e66fd74eded0cb
05-06-2011 00:00:55.870 INFO - (ChunkServer_main.cc:321) md5sum to send to metaserver: 0c0eb3e3e5aeb78799e66fd74eded0cb
05-06-2011 00:00:55.880 DEBUG - (ChunkManager.cc:346) setting # of open files to: 1024
05-06-2011 00:00:55.882 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23321182208
05-06-2011 00:00:55.882 INFO - (ChunkServer.cc:105) gethostname returned: lab314pc13
05-06-2011 00:00:56.872 INFO - (ChunkManager.cc:2568) Checking chunkdirs...
05-06-2011 00:00:56.872 INFO - (MetaServerSM.cc:169) connecting to metaserver mistserv 20100
05-06-2011 00:00:56.873 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23321182208
05-06-2011 00:00:56.873 INFO - (MetaServerSM.cc:250) Sent hello to meta server: meta-hello: mylocation = 172.31.18.15 40000 cluster key: lagoon-cluster
05-06-2011 00:00:56.874 DEBUG - (MetaServerSM.cc:479) recv meta cmd: seq: 3342547832013168917 meta-heartbeat:
05-06-2011 00:00:56.875 DEBUG - (ChunkManager.cc:2533) Dir: /home/mine/kfsRun/chunk/bin/kfschunk has space 23321182208
05-06-2011 00:00:56.875 FATAL - (MetaServerSM.cc:372) Aborting due to cluster key mismatch; our key: lagoon-cluster

TT

--- On Thu, 5/5/11, Sriram Rao <sri...@gm...> wrote:
From: Sriram Rao <sri...@gm...>
Subject: Re: [Kosmosfs-users] KFS on Centos and Ubuntu
To: "Tung Nguyen" <bet...@ya...>
Cc: kos...@li...
Date: Thursday, May 5, 2011, 5:27 PM

Hi Tung,
Can you load the core into gdb and get me a backtrace? Thanks.
Sriram
|
From: Tung N. <bet...@ya...> - 2011-05-06 03:58:42
|
Hi Sriram,
Thank you for your quick reply. I am just a newbie so I am not sure I got what you want correctly. I tried to issue

# gdb build/src/cc/chunk/chunkserver core.16216

and got:
...
Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libnss_files.so.2
Core was generated by `build/bin/chunkserver build/bin/ChunkServer.prp logs/chunkserver.log'.
Program terminated with signal 6, Aborted.
#0  0x0000003c4f030265 in raise () from /lib64/libc.so.6

Thanks
TT

--- On Thu, 5/5/11, Sriram Rao <sri...@gm...> wrote:
From: Sriram Rao <sri...@gm...>
Subject: Re: [Kosmosfs-users] KFS on Centos and Ubuntu
To: "Tung Nguyen" <bet...@ya...>
Cc: kos...@li...
Date: Thursday, May 5, 2011, 5:27 PM

Hi Tung,
Can you load the core into gdb and get me a backtrace? Thanks.
Sriram
|
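The session above stops at the point where gdb has loaded the core; the backtrace itself comes from typing `bt` at the `(gdb)` prompt. A non-interactive equivalent can be sketched with gdb's batch mode, using the binary and core names from the session above:

```shell
# Build the non-interactive gdb command line; --batch runs the -ex
# commands ("bt" = backtrace) and exits instead of prompting.
BIN=build/src/cc/chunk/chunkserver
CORE=core.16216
GDB_CMD="gdb --batch -ex bt $BIN $CORE"
echo "$GDB_CMD"
```

Running that command on the chunkserver host should print the frames in one shot, with no interactive session needed.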
From: Sriram R. <sri...@gm...> - 2011-05-05 21:27:44
|
Hi Tung,

Can you load the core into gdb and get me a backtrace? Thanks.

Sriram

On Thu, May 5, 2011 at 9:35 AM, Tung Nguyen <bet...@ya...> wrote:
> Hi,
> I would like to install KFS on a cluster that has chunkservers running in
> Centos and metaserver running in Ubuntu (our gateway).
>
> I ran the kfssetup script to distribute binary to the chunkservers but I
> failed to start the system. I checked the kfsping and there's no chunkserver
> connecting to the metaserver.
>
> I dug into the chunkserver (CentOS) and modified the kfsrun script to
> print out the errors. I got:
> #scripts/kfsrun.sh -s -c -f bin/ChunkServer.prp
> Usage: grep [OPTION]... PATTERN [FILE]...
> Try `grep --help' for more information.
> Starting chunkserver...
> bin/chunkserver: error while loading shared libraries: libcrypto.so.0.9.8:
> cannot open shared object file: No such file or directory
>
> In Centos /lib or /lib64, I only found libcrypto.so.0.9.8e
> so I created a symbolic link from libcrypto.so.0.9.8
> to libcrypto.so.0.9.8e and ran it again. I got:
> scripts/kfsrun.sh -s -c -f bin/ChunkServer.prp
> Usage: grep [OPTION]... PATTERN [FILE]...
> Try `grep --help' for more information.
> Starting chunkserver...
> bin/chunkserver: /lib64/libcrypto.so.0.9.8: no version information
> available (required by bin/chunkserver)
> bin/chunkserver: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not
> found (required by bin/chunkserver)
>
> Issuing "#ldd -v /bin/sh" I had:
>         libtermcap.so.2 => /lib64/libtermcap.so.2 (0x0000003c4fc00000)
>         libdl.so.2 => /lib64/libdl.so.2 (0x0000003c4f800000)
>         libc.so.6 => /lib64/libc.so.6 (0x0000003c4f000000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000003c4ec00000)
>
>         Version information:
>         /bin/sh:
>                 libdl.so.2 (GLIBC_2.2.5) => /lib64/libdl.so.2
>                 libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>         /lib64/libtermcap.so.2:
>                 libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>         /lib64/libdl.so.2:
>                 ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2
>                 libc.so.6 (GLIBC_PRIVATE) => /lib64/libc.so.6
>                 libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
>         /lib64/libc.so.6:
>                 ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
>                 ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2
>
> Please let me know what I can do. I also tried to download, compile from
> source and ran the kfsrun script but I got a core dump
> ...
> Total space = 31457280
> cleanup on start = 0
> Chunk server rack: 18
> using cluster key = lagoon-cluster
> chunkserver: /home/mine/kfs-0.5/src/cc/chunk/ChunkManager.cc:388:
> KFS::ChunkManager::~ChunkManager(): Assertion `mChunkTable.empty() && !
> mChunkManagerTimeoutImpl' failed.
> scripts/kfsrun.sh: line 25: 12193 Aborted (core dumped)
> build/bin/$server $config $SERVER_LOG_FILE
>
> Thank you
> TT
|
From: Tung N. <bet...@ya...> - 2011-05-05 16:35:14
|
Hi,
I would like to install KFS on a cluster that has the chunkservers running on CentOS and the metaserver running on Ubuntu (our gateway).

I ran the kfssetup script to distribute the binaries to the chunkservers, but I failed to start the system. I checked with kfsping and there was no chunkserver connecting to the metaserver.

I dug into the chunkserver (CentOS) and modified the kfsrun script to print out the errors. I got:

#scripts/kfsrun.sh -s -c -f bin/ChunkServer.prp
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Starting chunkserver...
bin/chunkserver: error while loading shared libraries: libcrypto.so.0.9.8: cannot open shared object file: No such file or directory

In CentOS /lib or /lib64 I only found libcrypto.so.0.9.8e, so I created a symbolic link from libcrypto.so.0.9.8 to libcrypto.so.0.9.8e and ran it again. I got:

scripts/kfsrun.sh -s -c -f bin/ChunkServer.prp
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Starting chunkserver...
bin/chunkserver: /lib64/libcrypto.so.0.9.8: no version information available (required by bin/chunkserver)
bin/chunkserver: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by bin/chunkserver)

Issuing "#ldd -v /bin/sh" I had:

        libtermcap.so.2 => /lib64/libtermcap.so.2 (0x0000003c4fc00000)
        libdl.so.2 => /lib64/libdl.so.2 (0x0000003c4f800000)
        libc.so.6 => /lib64/libc.so.6 (0x0000003c4f000000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003c4ec00000)

        Version information:
        /bin/sh:
                libdl.so.2 (GLIBC_2.2.5) => /lib64/libdl.so.2
                libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.3) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libtermcap.so.2:
                libc.so.6 (GLIBC_2.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.3.4) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libdl.so.2:
                ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2
                libc.so.6 (GLIBC_PRIVATE) => /lib64/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib64/libc.so.6
        /lib64/libc.so.6:
                ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
                ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2

Please let me know what I can do. I also tried to download and compile from source and ran the kfsrun script, but I got a core dump:

...
Total space = 31457280
cleanup on start = 0
Chunk server rack: 18
using cluster key = lagoon-cluster
chunkserver: /home/mine/kfs-0.5/src/cc/chunk/ChunkManager.cc:388: KFS::ChunkManager::~ChunkManager(): Assertion `mChunkTable.empty() && ! mChunkManagerTimeoutImpl' failed.
scripts/kfsrun.sh: line 25: 12193 Aborted (core dumped) build/bin/$server $config $SERVER_LOG_FILE

Thank you
TT
|
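[Editor's note] The two loader errors in this thread ("cannot open shared object file" and "version GLIBCXX_3.4.10 not found") can be diagnosed before starting the server. A minimal sketch, assuming a glibc-based Linux host; `/bin/sh` is used here only as a stand-in binary — on the actual chunkserver host, point `BIN` at `bin/chunkserver`:

```shell
# Diagnosing shared-library loader errors (sketch; substitute your binary).
BIN=/bin/sh

# List every shared object the binary needs and where the loader resolves it.
# Libraries the loader cannot find show up as "not found".
ldd "$BIN"

# Check whether the loader cache knows about a given library at all,
# e.g. libcrypto in the thread above; no output means it is not registered.
ldconfig -p 2>/dev/null | grep libcrypto || true
```

The "version not found" error usually means the binary was built against a newer libstdc++/glibc than the target machine has, which is why compiling from source on the target (as attempted above) is the usual remedy.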
From: Sriram R. <sri...@gm...> - 2011-04-26 16:39:55
|
Hi Sanjeev,

Can you run the kfsping command and send me the output? It looks like your chunkservers don't have any space (?).

Sriram

On Mon, Apr 25, 2011 at 8:18 AM, Sanjeev Khodaskar <san...@gm...> wrote:
> Hi All,
>
> I am trying to use KFS for the first time. I have (a) already compiled the
> code and (b) deployed it.
>
> When I try to use cptokfs I get an error. This is the command I am
> using: "./cptokfs -s 172.16.1.132 -p 20000 -d /home/sanjeev/src -k ." I
> get an error which says: "04-25-2011 20:46:09.234 DEBUG -
> (KfsClient.cc:636) Connecting to metaserver at: 172.16.1.132:20000
> Write failed with error code: -16"
>
> Help needed, please!
>
> Regards,
> Sanjeev
|
From: Sanjeev K. <san...@gm...> - 2011-04-25 15:18:54
|
Hi All,

I am trying to use KFS for the first time. I have (a) already compiled the code and (b) deployed it.

When I try to use cptokfs I get an error. This is the command I am using: "./cptokfs -s 172.16.1.132 -p 20000 -d /home/sanjeev/src -k ." I get an error which says: "04-25-2011 20:46:09.234 DEBUG - (KfsClient.cc:636) Connecting to metaserver at: 172.16.1.132:20000 Write failed with error code: -16"

Help needed, please!

Regards,
Sanjeev
|
From: zhengrui.m <zhe...@gm...> - 2011-03-08 07:02:45
|
After reviewing the code, the reason is: the default memory pool size in the chunk server is 200MB. Configure it in the properties file; the keys are:

chunkServer.ioBufferPool.partitionBufferCount
chunkServer.ioBufferPool.bufferSize

2011-03-08
zhengrui.m

From: zhengrui.m
Sent: 2011-03-07 21:07:05
To: kosmosfs-users
Cc:
Subject: out of io buffers

hi,
Recently the chunk server keeps dumping core. The gdb backtrace is:

#0 0x00000035b1030265 in raise () from /lib64/libc.so.6
#1 0x00000035b1031d10 in abort () from /lib64/libc.so.6
#2 0x00000000004e2a09 in QCUtils::FatalError (inMsgPtr=0x4ee897 "out of io buffers", inSysError=0) at /home/manzr/kfs/kfs/src/cc/qcdio/qcutils.cpp:112
#3 0x00000000004c4728 in KFS::DiskIoQueues::BufferAllocator::Allocate (this=0x174af9f0) at /home/manzr/kfs/kfs/src/cc/chunk/DiskIo.cc:417
#4 0x00000000004d225e in AllocBuffer (allocSize=4096) at /home/manzr/kfs/kfs/src/cc/libkfsIO/IOBuffer.cc:714
#5 0x00000000004d322c in KFS::IOBuffer::Read (this=0x2aaaac5b7408, fd=164, maxReadAhead=64754) at /home/manzr/kfs/kfs/src/cc/libkfsIO/IOBuffer.cc:768
#6 0x00000000004c61cb in KFS::NetConnection::HandleReadEvent (this=0x2aaaac5b73d0) at /home/manzr/kfs/kfs/src/cc/libkfsIO/NetConnection.cc:54
#7 0x00000000004c9f7a in KFS::NetManager::MainLoop (this=0x174b09b0) at /home/manzr/kfs/kfs/src/cc/libkfsIO/NetManager.cc:310
#8 0x0000000000491f8d in netWorker (dummy=0x0) at /home/manzr/kfs/kfs/src/cc/chunk/ChunkServer.cc:52
#9 0x00000035b1c0673d in start_thread () from /lib64/libpth

What's wrong?

2011-03-07
zhengrui.m
|
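[Editor's note] For reference, the two keys above go in the chunkserver properties file. A sketch with illustrative values only — the 4096-byte buffer size matches the `AllocBuffer (allocSize=4096)` frame in the backtrace, but the count shown is an assumption, not a recommendation:

```
# ChunkServer.prp fragment -- illustrative values, tune for your workload.
# Total pool memory is roughly partitionBufferCount * bufferSize;
# 65536 * 4096 bytes = 256MB here. Raise the count if the server aborts
# with "out of io buffers" under heavy load.
chunkServer.ioBufferPool.partitionBufferCount = 65536
chunkServer.ioBufferPool.bufferSize = 4096
```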
From: zhengrui.m <zhe...@gm...> - 2011-03-07 13:07:26
|
hi,
Recently the chunk server keeps dumping core. The gdb backtrace is:

#0 0x00000035b1030265 in raise () from /lib64/libc.so.6
#1 0x00000035b1031d10 in abort () from /lib64/libc.so.6
#2 0x00000000004e2a09 in QCUtils::FatalError (inMsgPtr=0x4ee897 "out of io buffers", inSysError=0) at /home/manzr/kfs/kfs/src/cc/qcdio/qcutils.cpp:112
#3 0x00000000004c4728 in KFS::DiskIoQueues::BufferAllocator::Allocate (this=0x174af9f0) at /home/manzr/kfs/kfs/src/cc/chunk/DiskIo.cc:417
#4 0x00000000004d225e in AllocBuffer (allocSize=4096) at /home/manzr/kfs/kfs/src/cc/libkfsIO/IOBuffer.cc:714
#5 0x00000000004d322c in KFS::IOBuffer::Read (this=0x2aaaac5b7408, fd=164, maxReadAhead=64754) at /home/manzr/kfs/kfs/src/cc/libkfsIO/IOBuffer.cc:768
#6 0x00000000004c61cb in KFS::NetConnection::HandleReadEvent (this=0x2aaaac5b73d0) at /home/manzr/kfs/kfs/src/cc/libkfsIO/NetConnection.cc:54
#7 0x00000000004c9f7a in KFS::NetManager::MainLoop (this=0x174b09b0) at /home/manzr/kfs/kfs/src/cc/libkfsIO/NetManager.cc:310
#8 0x0000000000491f8d in netWorker (dummy=0x0) at /home/manzr/kfs/kfs/src/cc/chunk/ChunkServer.cc:52
#9 0x00000035b1c0673d in start_thread () from /lib64/libpth

What's wrong?

2011-03-07
zhengrui.m
|
From: jrckkyy <jr...@gm...> - 2011-01-26 03:40:54
|
Hello everybody, sorry to trouble you for a moment. When the KFS chunkserver and metaserver node names are given as IP addresses, KFS's own tools run into some problems. Does KFS only support non-IP (hostname) names for each node? I would like to ask why each machine's hostname needs to be set; wouldn't direct access through the IP, or an ssh connection by IP, be better? Aside from making the machines easier to remember, does recording the hostname serve any other purpose or role? Also, a single machine could be used to start multiple chunkservers, distinguished by IP and port number!

Thank you for your attention.

jrckkyy
2011.1.26
|
From: lin c. <cha...@gm...> - 2011-01-07 03:54:36
|
hi, all,
I am just starting to use kosmosfs, but I have found several problems:

1. After Rename(old.c_str(), new.c_str()), calling Exists(old.c_str()) returns true, even though the old file has already been renamed.
2. After creating a dir and two files in the dir, I write something into one of the files, then call GetDirSummary, but it shows both numFiles and numBytes are zero.
3. I don't know if I am misusing UpdateFilesize: after I write something into one file, the size returned is always zero. Does the function only work when multiple clients write to the same file, i.e., is it meant to indicate how much data was written by other clients, not counting the bytes written by this client itself?

Thank you kindly for any answers.
|
From: Sriram R. <sri...@gm...> - 2010-12-17 22:37:54
|
Hi,

1. With O_APPEND, we don't modify a previously "closed" chunk. Simpler to allocate a new chunk and write to it.
2. For #2, the holes in a chunk are at the end of the chunk; so, you can ask the kfs-client to skip the holes in a file. For instance, use the API on the kfs-client: SkipHolesInFiles(fd)
3. Looks like a bug to me. I'll take a look.

Sriram

On Fri, Dec 17, 2010 at 2:06 PM, sashidhar reddy <sas...@gm...> wrote:
> Hi,
> Thanks for open sourcing and supporting such a large application. I am
> trying to evaluate KFS. Compiling and deploying were very easy.
>
> I tested on a 5 node cluster on Ubuntu 9.10 and it seems to work well.
> But there are a couple of issues I noticed trying to append to an existing
> file.
>
> I opened an existing file in O_APPEND mode.
>
> 1. If I call Write() with a buffer of length larger than the space in the
> last chunk, it looks like a new chunk is created and all data is written to
> the new chunk while the current partial last chunk is left alone. I.e., if the last
> chunk is 50MB and I try to write 20MB, the last chunk is left alone and all
> 20MB is written to the newly allocated chunk. Is there a reason why this is done
> this way instead of filling the last chunk to 64MB and writing the rest of
> the data (6MB) to the new chunk?
> 2. When 1 happens, my file size will include the gaps, and trying to read
> from the file gives back zero-filled buffers.
> 3. If I call Write with a buffer larger than 64MB, it looks like it goes into a
> loop trying to allocate a chunk that can consume all the data and finally seg
> faults.
>
> Thanks,
> MSR
|
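[Editor's note] The allocation behavior discussed in this thread can be sketched with a toy model. This is NOT KFS code — `ToyFile`, `Append`, and `kChunkSize` are invented names for illustration — but it shows why appending 20MB to a file whose last chunk holds 50MB leaves a 14MB hole that still counts toward the logical file size:

```cpp
// Toy model of the O_APPEND behavior described above: when an appended
// buffer does not fit in the last chunk, the data goes to a freshly
// allocated chunk and the unused tail of the old chunk becomes a hole.
#include <cassert>
#include <vector>

const long long kChunkSize = 64LL << 20;  // 64MB chunks, as in KFS

struct ToyFile {
    std::vector<long long> chunkUsed;  // bytes actually stored per chunk
    long long logicalSize = 0;         // file size including holes
};

void Append(ToyFile& f, long long nbytes) {
    // Issue #3 in the thread: writes larger than a chunk misbehaved,
    // so this toy model simply disallows them.
    assert(nbytes <= kChunkSize);
    const long long space = f.chunkUsed.empty()
        ? 0 : kChunkSize - f.chunkUsed.back();
    if (nbytes > space) {
        f.logicalSize += space;         // hole left at the old chunk's tail
        f.chunkUsed.push_back(nbytes);  // data lands in a fresh chunk
    } else {
        f.chunkUsed.back() += nbytes;   // fits; fill the last chunk
    }
    f.logicalSize += nbytes;
}
```

With a 50MB last chunk, `Append(f, 20MB)` allocates a second chunk, so the logical size becomes 50 + 14 (hole) + 20 = 84MB even though only 70MB of data exists — which is why the reader sees zero-filled buffers unless the holes are skipped.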
From: sashidhar r. <sas...@gm...> - 2010-12-17 22:06:46
|
Hi,
Thanks for open sourcing and supporting such a large application. I am trying to evaluate KFS. Compiling and deploying were very easy.

I tested on a 5 node cluster on Ubuntu 9.10 and it seems to work well. But there are a couple of issues I noticed trying to append to an existing file.

I opened an existing file in O_APPEND mode.

1. If I call Write() with a buffer of length larger than the space in the last chunk, it looks like a new chunk is created and all data is written to the new chunk while the current partial last chunk is left alone. I.e., if the last chunk is 50MB and I try to write 20MB, the last chunk is left alone and all 20MB is written to the newly allocated chunk. Is there a reason why this is done this way instead of filling the last chunk to 64MB and writing the rest of the data (6MB) to the new chunk?
2. When 1 happens, my file size will include the gaps, and trying to read from the file gives back zero-filled buffers.
3. If I call Write with a buffer larger than 64MB, it looks like it goes into a loop trying to allocate a chunk that can consume all the data and finally seg faults.

Thanks,
MSR
|
From: zhengrui.m <zhe...@gm...> - 2010-09-07 08:38:32
|
hi,
I use kosmosfs to store data intensively. A problem usually happens in the chunkserver. The log is:

09-07-2010 16:29:00.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:00.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:00.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405
09-07-2010 16:29:10.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:10.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:10.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405
09-07-2010 16:29:20.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:20.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:20.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405
09-07-2010 16:29:30.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:30.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:30.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405
09-07-2010 16:29:40.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:40.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:40.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405
09-07-2010 16:29:50.227 DEBUG - (ChunkManager.cc:1936) [1] starting sync for chunkId=16102833
09-07-2010 16:29:50.227 DEBUG - (ChunkManager.cc:1936) [2] starting sync for chunkId=17086223
09-07-2010 16:29:50.227 DEBUG - (ChunkManager.cc:1936) [3] starting sync for chunkId=17096405

Chunks 16102833, 17086223, and 17096405 on this server are being synced in a loop, but the file for the chunk may already have been deleted, or the chunk may not exist at all.
This happens when a large local file is being copied to KFS and the copy is killed, or perhaps in other cases. The KFS version I use is 0.4. The workaround is to restart the chunkserver, after which everything is OK, but that is not a good solution. What can I do?

2010-09-07
zhengrui.m
|