From: Steve L. (JIRA) <ji...@sm...> - 2009-01-27 16:29:25
extend the job submission test, list input and output directories and have it do real work
-------------------------------------------------------------------------------------------

                 Key: SFOS-1085
                 URL: http://jira.smartfrog.org/jira/browse/SFOS-1085
             Project: SmartFrog
          Issue Type: Task
          Components: _service_hadoop
    Affects Versions: 3.17.004
            Reporter: Steve Loughran
            Assignee: Steve Loughran

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://jira.smartfrog.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
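
The issue body does not include the test code itself. As a rough sketch of the kind of listing SFOS-1085 asks for, assuming the stock Hadoop FileSystem API rather than the SmartFrog DfsListDirImpl component (the class name ListJobDirs and the argument handling below are illustrative, not project code):

    // Illustrative sketch: list the job's input and output directories on HDFS.
    // Assumes the Configuration on the classpath points fs.default.name at the
    // test cluster's namenode.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListJobDirs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            listDir(fs, new Path(args[0]));   // input directory
            listDir(fs, new Path(args[1]));   // output directory
        }

        // Print each entry; a real test would assert on expected names and sizes.
        static void listDir(FileSystem fs, Path dir) throws Exception {
            if (!fs.exists(dir)) {
                System.out.println(dir + ": does not exist");
                return;
            }
            for (FileStatus status : fs.listStatus(dir)) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
        }
    }
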
From: Steve L. (JIRA) <ji...@sm...> - 2009-01-28 13:09:23
    [ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11487#action_11487 ]

Steve Loughran commented on SFOS-1085:
--------------------------------------

Not clear that this test is working, even when it does complete. Stack trace is long and inconclusive.

ytes: 388, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-8039382618956567099_1682 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/ClusterManager.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8304f7d] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51676, bytes: 273, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_3756387938145631116_1706 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/ClusterStatusChecker.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8305138] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51677, bytes: 918, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_2736844895165688017_1681 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/ClusterStatusCheckerImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@83052f3] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51678, bytes: 7134, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-151612108248210467_1700 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/ClusterStatusCheckerImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@81d459e] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51679, bytes: 36433, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_2580541859997871987_1690 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/FileSystemNode.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823cced] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51680, bytes: 700, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-8892477666129656359_1702 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/FileSystemNodeImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823cea8] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51681, bytes: 2891, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7448637537378266444_1687 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/FileSystemNodeImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d063] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51682, bytes: 37252, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_528280405684072077_1686 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopComponentImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d21e] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51683, bytes: 6182, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7702295165346609381_1705 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopService.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d3d9] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51684, bytes: 782, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7291738615149620953_1704 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopServiceImpl$1.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d594] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51685, bytes: 1151, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7879315879575251214_1678 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopServiceImpl$ServiceDeployerThread.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:43 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d74f] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51686, bytes: 2417, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5079093016260148727_1696 [sf-startdaemon-debug] 09/01/28 13:03:43 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopServiceImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823d90a] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51687, bytes: 16498, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7312152800280489906_1688 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/HadoopServiceImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823dac5] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51688, bytes: 37180, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5416606570033813744_1697 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/IsHadoopServiceLive.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823dc80] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51689, bytes: 2177, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5833041824875415236_1701 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare 
ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/IsHadoopServiceLive_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823de3b] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51690, bytes: 36227, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7604467323023898504_1698 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/IsWorkerCountGood.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823dff6] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51691, bytes: 1177, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-3107497858792490423_1675 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/ManagerNode.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823e1b1] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51692, bytes: 315, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_34319402495588519_1685 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/PortEntry.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823e36c] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51696, bytes: 921, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-9005147428899715478_1689 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/checkdiskspace.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823e527] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51697, bytes: 1859, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-6877070989748094167_1680 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/checkport.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823e6e2] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51698, bytes: 3344, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8082148993174291463_1693 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/clusternode.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823e89d] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51699, bytes: 1219, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8178203634467891513_1683 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/clusterstatus.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823ea58] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51700, bytes: 2099, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-6757991164749210411_1703 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@823ec13] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51701, bytes: 1356, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_4225653665728256431_1679 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/filesystemnode.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825efbf] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51702, bytes: 1705, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-4259035259859732382_1694 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/jspstatus.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825f17a] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51703, bytes: 1440, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7886878630668876763_1676 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/cluster/servicelive.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825f335] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51704, bytes: 1431, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-384603820267427407_1684 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825f4f0] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51705, bytes: 1745, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-1394699304342918076_1712 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/datanode dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/datanode/DatanodeImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825f6ab] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51706, bytes: 3700, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8459723268522817280_1671 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/datanode/DatanodeImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825f866] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51707, bytes: 37309, op: HDFS_READ, cliID: 
DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-4190781091691339469_1674 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/datanode/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825fa21] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51709, bytes: 890, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_5594242238943025765_1673 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/datanode/datanode.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825fbdc] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51710, bytes: 2272, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-3586916714493778508_1672 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsClusterBoundImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825fd97] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51711, bytes: 2553, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7439215255224123172_1664 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCopyFileImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@825ff52] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51712, bytes: 2017, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8667751185680438065_1653 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCopyFileInImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@845889a] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51713, bytes: 1790, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_523477942166664708_1662 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCopyFileOutImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8458a55] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51714, bytes: 2500, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7627160912440978506_1667 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCopyOperation.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8458c10] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51715, bytes: 440, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7684232014268795542_1668 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCreateDirImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8458dcb] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51716, bytes: 1703, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_1765435770919155586_1658 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsCreateFileImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8458f86] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51717, bytes: 2289, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_4652770765456058050_1661 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsDeleteDirImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459141] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51718, bytes: 2334, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-2602683863433486042_1660 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsFormatFileSystemImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@84592fc] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51719, bytes: 2017, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-724640246905785239_1655 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsFsckImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459509] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51720, bytes: 2505, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7029794110691250449_1659 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsListDirImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@84596c4] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51721, bytes: 3925, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7158438466280923443_1657 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsMoveFileImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@845987f] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51722, bytes: 1748, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_1315033697150460736_1652 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsOperation.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459a3a] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51723, bytes: 294, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_3152904232304463029_1666 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsOperationImpl$DfsWorkerThread.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459bf5] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51724, bytes: 955, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-9132655542725438042_1669 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsOperationImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459db0] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51725, bytes: 3156, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_405556666994211879_1651 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsPathExistsImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8459f6b] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51726, bytes: 4966, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-6743034842847453594_1656 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsPathExistsImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:44 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@845a126] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51727, bytes: 36980, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_2074049454960901224_1670 [sf-startdaemon-debug] 09/01/28 13:03:44 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsPathOperation.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@845a2e1] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51728, bytes: 352, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_5063856634592732079_1663 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/DfsPathOperationImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8480895] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51729, bytes: 1384, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5633064329971641980_1665 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/dfs/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8487751] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51730, bytes: 6652, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_1316507451703662368_1654 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/hadoopconfiguration.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822abbf] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51731, bytes: 37834, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_2703733389034666637_1742 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoop.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822ad7a] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51732, bytes: 781, op: HDFS_READ, cliID: 
DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_3441096774395066789_1727 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl$1.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822af35] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51733, bytes: 297, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-2814726079046704444_1722 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl$TupleUploadThread.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b0f0] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51734, bytes: 4245, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-6365610246856775193_1726 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b2ab] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51735, bytes: 4200, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-8411669733163160663_1725 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b466] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51736, bytes: 35869, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-996082216724767414_1724 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/io/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b621] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51737, bytes: 1265, op: HDFS_READ, cliID: 
DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-4425767344166628776_1723 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/FormatImpl$1.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b7dc] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51738, bytes: 285, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-6761307792903803939_1721 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/FormatImpl$Formatter.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822b997] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51739, bytes: 1659, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_5958708992091763784_1713 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/FormatImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822bb52] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51740, bytes: 2660, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7726437229711498187_1715 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/FormatImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@822bd0d] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51741, bytes: 37245, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_4673821143472726746_1720 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/NamenodeImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8302922] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51742, bytes: 4054, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_3713288782924478572_1717 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/NamenodeImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8302add] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51743, bytes: 37708, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5456836863058277450_1716 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8302c98] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51744, bytes: 1758, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8714030984941725000_1714 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/format.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8302e53] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51746, bytes: 1291, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5540356030606119134_1719 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/namenode/namenode.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830300e] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51747, bytes: 3117, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7587302327641301357_1718 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other/MockServiceImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@83031c9] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51748, bytes: 1887, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5856709649083753873_1708 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other/MockServiceImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8303384] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51749, bytes: 37238, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-238587429292029560_1709 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other/ServiceValueCheckerImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830353f] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51750, bytes: 2930, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-5437334352085361885_1711 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other/ServiceValueCheckerImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@83036fa] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51751, bytes: 37246, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8502392515222803612_1710 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/other/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@83038b5] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51752, bytes: 1339, op: 
HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-7497196745909495196_1707 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=listStatus src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/Job.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8303a70] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51754, bytes: 558, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_5548336666956257539_1734 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/JobImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8303c2b] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51755, bytes: 1401, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_1097932183955808666_1739 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/JobImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8303de6] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51756, bytes: 36666, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-911687399273434692_1735 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/Submitter.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8303fa1] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51757, bytes: 698, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-2984649822048552128_1733 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open 
src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/SubmitterImpl$1.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830415c] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51758, bytes: 296, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7334343242760255051_1730 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/SubmitterImpl$JobSubmitThread.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@8304317] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51759, bytes: 4954, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_8295217975367425155_1729 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/SubmitterImpl.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@83044d2] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51760, bytes: 9455, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_1485022955479448923_1737 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 0 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/SubmitterImpl_Stub.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830468d] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51761, bytes: 40212, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_4797148731736820996_1732 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 3 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/TaskCompletionEventLogger.class dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830c91f] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51762, bytes: 2327, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_7614618015627608469_1736 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 1 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,sambashare 
ip=/127.0.0.1 cmd=open src=/tmp/hadoop/mapred/system/job_200901281302_0001/job.jar/build/classes/org/smartfrog/services/hadoop/components/submitter/components.sf dst=null perm=null [sf-startdaemon-debug] 09/01/28 13:03:45 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@830cb10] INFO DataNode.clienttrace : src: /127.0.0.1:50573, dest: /127.0.0.1:51763, bytes: 963, op: HDFS_READ, cliID: DFSClient_-1603896950, srvID: DS-1385289710-127.0.1.1-50573-1233147773291, blockid: blk_-8674095958572716354_1731 [sf-startdaemon-debug] 09/01/28 13:03:45 [IPC Server handler 2 on 8020] INFO FSNamesystem.audit : ugi=slo,users,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,admin,fuse,... [truncated message content] |
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-02 14:38:45
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11499#action_11499 ] Steve Loughran commented on SFOS-1085: -------------------------------------- Now failing on permissions , related to mapred.jar being set to something odd and a copy-to-local going horribly, horribly wrong. [sf-startdaemon-debug] 09/02/02 14:23:53 [IPC Server handler 0 on 8012] INFO ipc.Server : IPC Server handler 0 on 8012, call submitJob(job_200902021423_0001) from 127.0.0.1:54487: error: org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-slo/mapred/local/jobTracker/job_200902021423_0001.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl.class': No such file or directory [sf-startdaemon-debug] org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-slo/mapred/local/jobTracker/job_200902021423_0001.jar/build/classes/org/smartfrog/services/hadoop/components/io/TuplesToHadoopImpl.class': No such file or directory [sf-startdaemon-debug] at org.apache.hadoop.util.Shell.runCommand(Shell.java:195) [sf-startdaemon-debug] at org.apache.hadoop.util.Shell.run(Shell.java:134) [sf-startdaemon-debug] at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286) [sf-startdaemon-debug] at org.apache.hadoop.util.Shell.execCommand(Shell.java:354) [sf-startdaemon-debug] at org.apache.hadoop.util.Shell.execCommand(Shell.java:337) [sf-startdaemon-debug] at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:481) [sf-startdaemon-debug] at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:473) [sf-startdaemon-debug] at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:275) [sf-startdaemon-debug] at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:372) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:475) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:456) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:363) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:218) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:191) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1175) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1156) [sf-startdaemon-debug] at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:243) > extend the job submission test, list input and output directories and have it do real work > 
------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
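The chmod failure above happens while the JobTracker copies the submitted job resources down to its local directory, and the failing path (job_...jar/build/classes/...) suggests that mapred.jar is pointing at an unpacked classes directory rather than a single packaged jar. A minimal sketch of what the submitting side would need, assuming the job is driven through a JobConf; the class name, job name and jar path below are illustrative, not taken from the SmartFrog source:

    import org.apache.hadoop.mapred.JobConf;

    public class JobJarSetup {
        // Sketch only: mapred.jar should name a real, packaged jar file so the
        // JobTracker's copy-to-local step has a single file to fetch and chmod.
        public static JobConf configure(String jarPath) {
            JobConf conf = new JobConf();
            conf.setJobName("mrtestsequence");   // hypothetical job name
            conf.setJar(jarPath);                // e.g. a freshly built test jar
            return conf;
        }
    }
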
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-02 15:20:06
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11500#action_11500 ] Steve Loughran commented on SFOS-1085: -------------------------------------- Test now failing with io.sort.record.percent being 0.0 [sf-system-test-junit] at org.apache.hadoop.mapred.Child.main(Child.java:151) [sf-system-test-junit] Map: Task Id : attempt_200902021516_0001_m_000000_3, Status : TIPFAILED [sf-system-test-junit] java.io.IOException: Invalid "io.sort.record.percent": 0.0 [sf-system-test-junit] at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:648) [sf-system-test-junit] at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:423) [sf-system-test-junit] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:493) [sf-system-test-junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:293) [sf-system-test-junit] at org.apache.hadoop.mapred.Child.main(Child.java:151) > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
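The "Invalid io.sort.record.percent: 0.0" error is thrown by MapTask's output buffer setup, which rejects a zero record fraction, so the configuration the test builds is presumably ending up with the property unset or zeroed. A minimal, hedged workaround is to pin the value back to a sane fraction before submitting; the property name comes straight from the stack trace, 0.05 is the stock default in this Hadoop line, and the wrapper class is purely illustrative:

    import org.apache.hadoop.mapred.JobConf;

    public class SortBufferDefaults {
        // Sketch: restore a non-zero record fraction so MapOutputBuffer's
        // sanity check does not fail every map attempt.
        public static JobConf withSortDefaults(JobConf conf) {
            conf.setFloat("io.sort.record.percent", 0.05f); // fraction of io.sort.mb used for record metadata
            conf.setInt("io.sort.mb", 100);                 // keep the sort buffer at its usual size
            return conf;
        }
    }
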
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-10 14:52:28
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11530#action_11530 ] Steve Loughran commented on SFOS-1085: -------------------------------------- also, tests not always timing out; means that detection of end of test run is failing -or the tests really arent finishing, because the submission is failing [sf-startdaemon-debug] 09/02/10 14:39:34 [DataNode] INFO datanode.DataNode : Balancing bandwith is 1048576 bytes/s [sf-startdaemon-debug] 09/02/10 14:39:34 [DataNode] INFO mortbay.log : jetty-6.1.14 [sf-startdaemon-debug] 09/02/10 14:39:34 [DataNode] INFO mortbay.log : Extract jar:file:/home/slo/.ivy/cache/org.apache.hadoop/hadoop-core/jars/hadoop-core-0.21.0-alpha-4.jar!/webapps/datanode to /tmp/Jetty_localhost_8022_datanode____.sbeinb/webapp [sf-startdaemon-debug] 09/02/10 14:39:34 [DataNode] INFO mortbay.log : Started SelectChannelConnector@localhost:8022 [sf-startdaemon-debug] 09/02/10 14:39:34 [DataNode] INFO jvm.JvmMetrics : Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized [sf-startdaemon-debug] 09/02/10 14:39:39 [DataNode] INFO metrics.RpcMetrics : Initializing RPC Metrics with hostName=ExtDataNode, port=50020 [sf-startdaemon-debug] 09/02/10 14:39:39 [IPC Server Responder] INFO ipc.Server : IPC Server Responder: starting [sf-startdaemon-debug] 09/02/10 14:39:39 [IPC Server listener on 50020] INFO ipc.Server : IPC Server listener on 50020: starting [sf-startdaemon-debug] 09/02/10 14:39:39 [IPC Server handler 0 on 50020] INFO ipc.Server : IPC Server handler 0 on 50020: starting [sf-startdaemon-debug] 09/02/10 14:39:39 [DataNode] INFO datanode.DataNode : dnRegistration = DatanodeRegistration(annecy:8024, storageID=, infoPort=8022, ipcPort=50020) [sf-startdaemon-debug] 09/02/10 14:39:39 [IPC Server handler 1 on 50020] INFO ipc.Server : IPC Server handler 1 on 50020: starting [sf-startdaemon-debug] 09/02/10 14:39:39 [IPC Server handler 1 on 8020] INFO net.NetworkTopology : Adding a new node: /default-rack/127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:39 [DataNode] INFO datanode.DataNode : New storage id DS-562223574-127.0.1.1-8024-1234276779531 is assigned to data-node 127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:39 [DataNode] INFO datanode.DataNode : State change: DataNode is now LIVE [sf-startdaemon-debug] 09/02/10 14:39:39 [ExtDataNode] INFO datanode.DataNode : DatanodeRegistration(127.0.0.1:8024, storageID=DS-562223574-127.0.1.1-8024-1234276779531, infoPort=8022, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/tempdir63916/current'} [sf-startdaemon-debug] 2009/02/10 14:39:39:540 GMT [INFO ][DataNode] HOST annecy:rootProcess:testJobSubmission - DataNode deployment complete: service is: DataNode {data=FSDataset{dirpath='/tmp/tempdir63916/current'}, localName='127.0.0.1:8024', storageID='DS-562223574-127.0.1.1-8024-1234276779531', xmitsInProgress=0, state=LIVE} [sf-startdaemon-debug] 09/02/10 14:39:39 [ExtDataNode] INFO datanode.DataNode : using BLOCKREPORT_INTERVAL of 10000msec Initial delay: 0msec [sf-startdaemon-debug] 09/02/10 14:39:39 [JobTracker] INFO mapred.ExtJobTracker : State change: JobTracker is now STARTED [sf-startdaemon-debug] 09/02/10 14:39:39 [TaskTracker] INFO mapred.ExtTaskTracker : State change: TaskTracker is now STARTED [sf-startdaemon-debug] 09/02/10 14:39:39 [JobTracker] INFO metrics.RpcMetrics : Initializing RPC Metrics with hostName=ExtJobTracker, port=8012 [sf-startdaemon-debug] 09/02/10 14:39:39 
[ExtDataNode] INFO datanode.DataNode : BlockReport of 0 blocks got processed in 5 msecs [sf-startdaemon-debug] 09/02/10 14:39:39 [ExtDataNode] INFO datanode.DataNode : Starting Periodic block scanner. [sf-startdaemon-debug] 09/02/10 14:39:44 [TaskTracker] INFO mortbay.log : jetty-6.1.14 [sf-startdaemon-debug] 09/02/10 14:39:44 [TaskTracker] INFO mortbay.log : Extract jar:file:/home/slo/.ivy/cache/org.apache.hadoop/hadoop-core/jars/hadoop-core-0.21.0-alpha-4.jar!/webapps/task to /tmp/Jetty_0_0_0_0_50060_task____.2vcltf/webapp [sf-startdaemon-debug] 09/02/10 14:39:44 [JobTracker] INFO mortbay.log : jetty-6.1.14 [sf-startdaemon-debug] 09/02/10 14:39:44 [JobTracker] INFO mortbay.log : Extract jar:file:/home/slo/.ivy/cache/org.apache.hadoop/hadoop-core/jars/hadoop-core-0.21.0-alpha-4.jar!/webapps/job to /tmp/Jetty_0_0_0_0_50030_job____yn7qmk/webapp [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mortbay.log : Started SelectChannelConnector@0.0.0.0:50060 [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO jvm.JvmMetrics : Cannot initialize JVM Metrics with processName=TaskTracker, sessionId= - already initialized [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO metrics.RpcMetrics : Initializing RPC Metrics with hostName=ExtTaskTracker, port=50642 [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server listener on 50642] INFO ipc.Server : IPC Server listener on 50642: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 0 on 50642] INFO ipc.Server : IPC Server handler 0 on 50642: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 1 on 50642] INFO ipc.Server : IPC Server handler 1 on 50642: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server Responder] INFO ipc.Server : IPC Server Responder: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 2 on 50642] INFO ipc.Server : IPC Server handler 2 on 50642: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.TaskTracker : TaskTracker up at: localhost/127.0.0.1:50642 [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 3 on 50642] INFO ipc.Server : IPC Server handler 3 on 50642: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.TaskTracker : Starting tracker tracker_annecy:localhost/127.0.0.1:50642 [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mortbay.log : Started SelectChannelConnector@0.0.0.0:50030 [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO jvm.JvmMetrics : Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : JobTracker up at: 8012 [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : JobTracker webserver: 50030 [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : Cleaning up the system directory [sf-startdaemon-debug] 2009/02/10 14:39:45:472 GMT [INFO ][JobTracker] HOST annecy:rootProcess:testJobSubmission - JobTracker deployment complete: service is: JobTracker instance org.apache.hadoop.mapred.ExtJobTracker@174cb21 in state STARTED [sf-startdaemon-debug] 2009/02/10 14:39:45:472 GMT [INFO ][JobTracker] HOST annecy:rootProcess:testJobSubmission - Filesystem URL hdfs://127.0.0.1:8020/ [sf-startdaemon-debug] 2009/02/10 14:39:45:473 GMT [INFO ][JobTracker] HOST annecy:rootProcess:testJobSubmission - Filesystem Name is hdfs://localhost [sf-startdaemon-debug] 2009/02/10 14:39:45:473 GMT [INFO ][JobTracker] HOST 
annecy:rootProcess:testJobSubmission - Filesystem is DFS[DFSClient[clientName=DFSClient_-1964574854, ugi=slo,slo,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare]] [sf-startdaemon-debug] 2009/02/10 14:39:45:473 GMT [INFO ][JobTracker] HOST annecy:rootProcess:testJobSubmission - System dir is hdfs://localhost/tmp/hadoop/mapred/system [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.ExtJobTracker : State change: JobTracker is now LIVE [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : Restoration complete [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : Starting interTrackerServer [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server Responder] INFO ipc.Server : IPC Server Responder: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server listener on 8012] INFO ipc.Server : IPC Server listener on 8012: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 0 on 8012] INFO ipc.Server : IPC Server handler 0 on 8012: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 1 on 8012] INFO ipc.Server : IPC Server handler 1 on 8012: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 2 on 8012] INFO ipc.Server : IPC Server handler 2 on 8012: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 3 on 8012] INFO ipc.Server : IPC Server handler 3 on 8012: starting [sf-startdaemon-debug] 09/02/10 14:39:45 [JobTracker] INFO mapred.JobTracker : Starting RUNNING [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.TaskTracker : Using MemoryCalculatorPlugin : org.apache.hadoop.util.LinuxMemoryCalculatorPlugin@b3c24f [sf-startdaemon-debug] 09/02/10 14:39:45 [Map-events fetcher for all reduce tasks on tracker_annecy:localhost/127.0.0.1:50642] INFO mapred.TaskTracker : Starting thread: Map-events fetcher for all reduce tasks on tracker_annecy:localhost/127.0.0.1:50642 [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO util.ProcessTree : setsid exited with exit code 0 [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] WARN mapred.TaskTracker : TaskTracker's reservedVmem is not configured. [sf-startdaemon-debug] TaskTracker's defaultMaxVmPerTask is not configured. [sf-startdaemon-debug] TaskTracker's limitMaxVmPerTask is not configured. [sf-startdaemon-debug] TaskMemoryManager is disabled. 
[sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.IndexCache : IndexCache created with max memory = 10485760 [sf-startdaemon-debug] 2009/02/10 14:39:45:554 GMT [INFO ][TaskTracker] HOST annecy:rootProcess:testJobSubmission - TaskTracker deployment complete: service is: tracker_annecy:localhost/127.0.0.1:50642 instance org.apache.hadoop.mapred.ExtTaskTracker@1ef3a22 in state STARTED; web port=50060 [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.ExtTaskTracker : Task Tracker Service is being offered: tracker_annecy:localhost/127.0.0.1:50642 instance org.apache.hadoop.mapred.ExtTaskTracker@1ef3a22 in state STARTED; web port=50060 [sf-startdaemon-debug] 09/02/10 14:39:45 [IPC Server handler 3 on 8012] INFO net.NetworkTopology : Adding a new node: /default-rack/annecy [sf-startdaemon-debug] 09/02/10 14:39:45 [TaskTracker] INFO mapred.ExtTaskTracker : State change: TaskTracker is now LIVE [sf-startdaemon-debug] 2009/02/10 14:39:45:953 GMT [INFO ][Thread-86] HOST annecy:rootProcess:testJobSubmission - Connecting to localhost:8012 [sf-startdaemon-debug] 09/02/10 14:39:46 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@128215d] INFO datanode.DataNode : Receiving block blk_-8858207151156913761_1001 src: /127.0.0.1:58886 dest: /127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-8858207151156913761_1001] INFO datanode.DataNode : Received block blk_-8858207151156913761_1001 of size 83 from /127.0.0.1:58886 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-8858207151156913761_1001] INFO datanode.DataNode : PacketResponder 0 for block blk_-8858207151156913761_1001 terminating [sf-startdaemon-debug] 2009/02/10 14:39:46:330 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Path /tests/mrtestsequence/in/in.txt size 0 last modified:1234276786154 [sf-startdaemon-debug] 2009/02/10 14:39:46:354 GMT [INFO ][Thread-103] HOST annecy:rootProcess:testJobSubmission - in.txt [sf-startdaemon-debug] size=0 [sf-startdaemon-debug] replication=0 [sf-startdaemon-debug] last modified=Tue Feb 10 14:39:46 GMT 2009 [sf-startdaemon-debug] owner=slo [sf-startdaemon-debug] group=supergroup [sf-startdaemon-debug] permissions=rwxr-xr-x [sf-startdaemon-debug] [sf-startdaemon-debug] 2009/02/10 14:39:46:355 GMT [INFO ][Thread-103] HOST annecy:rootProcess:testJobSubmission - Files: 1 total size=0 [sf-startdaemon-debug] 2009/02/10 14:39:46:554 GMT [INFO ][Thread-110] HOST annecy:rootProcess:testJobSubmission - Submitting to localhost:8012 [sf-startdaemon-debug] 09/02/10 14:39:46 [Thread-110] WARN mapred.JobClient : Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. [sf-startdaemon-debug] 09/02/10 14:39:46 [Thread-110] WARN mapred.JobClient : No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String). 
[sf-startdaemon-debug] 09/02/10 14:39:46 [Thread-110] INFO input.FileInputFormat : Total input paths to process : 1 [sf-startdaemon-debug] 09/02/10 14:39:46 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@41c977] INFO datanode.DataNode : Receiving block blk_-5471542218678066410_1002 src: /127.0.0.1:58887 dest: /127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-5471542218678066410_1002] INFO datanode.DataNode : Received block blk_-5471542218678066410_1002 of size 130 from /127.0.0.1:58887 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-5471542218678066410_1002] INFO datanode.DataNode : PacketResponder 0 for block blk_-5471542218678066410_1002 terminating [sf-startdaemon-debug] 09/02/10 14:39:46 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@1b6a053] INFO datanode.DataNode : Receiving block blk_-8277688451617637267_1003 src: /127.0.0.1:58888 dest: /127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-8277688451617637267_1003] INFO datanode.DataNode : Received block blk_-8277688451617637267_1003 of size 21949 from /127.0.0.1:58888 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_-8277688451617637267_1003] INFO datanode.DataNode : PacketResponder 0 for block blk_-8277688451617637267_1003 terminating [sf-startdaemon-debug] 2009/02/10 14:39:46:791 GMT [INFO ][Thread-110] HOST annecy:rootProcess:testJobSubmission - Job ID: job_200902101439_0001 URL: http://localhost:50030/jobdetails.jsp?jobid=job_200902101439_0001 [sf-startdaemon-debug] 09/02/10 14:39:46 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@1f1e666] INFO datanode.DataNode : Receiving block blk_1278892433984751323_1006 src: /127.0.0.1:58890 dest: /127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_1278892433984751323_1006] INFO datanode.DataNode : Received block blk_1278892433984751323_1006 of size 21298 from /127.0.0.1:58890 [sf-startdaemon-debug] 09/02/10 14:39:46 [PacketResponder 0 for Block blk_1278892433984751323_1006] INFO datanode.DataNode : PacketResponder 0 for block blk_1278892433984751323_1006 terminating [sf-startdaemon-debug] 09/02/10 14:39:47 [org.apache.hadoop.hdfs.server.datanode.DataXceiver@155d4be] INFO datanode.DataNode : Receiving block blk_-1095551757252083502_1007 src: /127.0.0.1:58891 dest: /127.0.0.1:8024 [sf-startdaemon-debug] 09/02/10 14:39:47 [PacketResponder 0 for Block blk_-1095551757252083502_1007] INFO datanode.DataNode : Received block blk_-1095551757252083502_1007 of size 21298 from /127.0.0.1:58891 [sf-startdaemon-debug] 09/02/10 14:39:47 [PacketResponder 0 for Block blk_-1095551757252083502_1007] INFO datanode.DataNode : PacketResponder 0 for block blk_-1095551757252083502_1007 terminating [sf-startdaemon-debug] 09/02/10 14:39:47 [initJobs] INFO mapred.JobInProgress : Input size for job job_200902101439_0001 = 0 [sf-startdaemon-debug] 09/02/10 14:39:47 [initJobs] INFO mapred.JobInProgress : Split info for job:job_200902101439_0001 with 1 splits: [sf-startdaemon-debug] 09/02/10 14:39:48 [IPC Server handler 2 on 8012] INFO mapred.JobTracker : Adding task 'attempt_200902101439_0001_m_000002_0' to tip task_200902101439_0001_m_000002, for tracker 'tracker_annecy:localhost/127.0.0.1:50642' [sf-startdaemon-debug] 09/02/10 14:39:48 [TaskTracker] INFO mapred.TaskTracker : LaunchTaskAction (registerTask): attempt_200902101439_0001_m_000002_0 [sf-startdaemon-debug] 09/02/10 14:39:48 [TaskLauncher for task] INFO mapred.TaskTracker : 
Trying to launch : attempt_200902101439_0001_m_000002_0 [sf-startdaemon-debug] 09/02/10 14:39:48 [TaskLauncher for task] INFO mapred.TaskTracker : In TaskLauncher, current free slots : 2 and trying to launch attempt_200902101439_0001_m_000002_0 [sf-startdaemon-debug] 09/02/10 14:39:48 [Thread-142] INFO mapred.JvmManager : In JvmRunner constructed JVM ID: jvm_200902101439_0001_m_1914432792 [sf-startdaemon-debug] 09/02/10 14:39:48 [Thread-142] INFO mapred.JvmManager : JVM Runner jvm_200902101439_0001_m_1914432792 spawned. [sf-startdaemon-debug] 09/02/10 14:39:49 [IPC Server handler 1 on 50642] INFO mapred.TaskTracker : JVM with ID: jvm_200902101439_0001_m_1914432792 given task: attempt_200902101439_0001_m_000002_0 [sf-startdaemon-debug] 09/02/10 14:39:49 [ExtDataNode] INFO datanode.DataNode : BlockReport of 5 blocks got processed in 2 msecs [sf-startdaemon-debug] 09/02/10 14:39:49 [IPC Server handler 2 on 50642] INFO mapred.TaskTracker : attempt_200902101439_0001_m_000002_0 0.0% setup [sf-startdaemon-debug] 09/02/10 14:39:49 [IPC Server handler 3 on 50642] INFO mapred.TaskTracker : Task attempt_200902101439_0001_m_000002_0 is done. [sf-startdaemon-debug] 09/02/10 14:39:49 [IPC Server handler 3 on 50642] INFO mapred.TaskTracker : reported output size for attempt_200902101439_0001_m_000002_0 was 0 [sf-startdaemon-debug] 09/02/10 14:39:49 [Thread-142] INFO mapred.TaskTracker : Error cleaning up task runner: java.lang.NullPointerException [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.cleanup(TaskTracker.java:2524) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.taskFinished(TaskTracker.java:2309) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker.reportTaskFinished(TaskTracker.java:2723) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:459) [sf-startdaemon-debug] [sf-startdaemon-debug] 09/02/10 14:39:49 [Thread-142] INFO mapred.TaskTracker : addFreeSlot : current free slots : 2 [sf-startdaemon-debug] 09/02/10 14:39:49 [JVM Runner jvm_200902101439_0001_m_1914432792 spawned.] INFO mapred.JvmManager : JVM : jvm_200902101439_0001_m_1914432792 exited. Number of tasks it ran: 1 [sf-startdaemon-debug] 09/02/10 14:39:51 [TaskTracker] INFO mapred.TaskTracker : org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_200902101439_0001/attempt_200902101439_0001_m_000002_0/output/file.out in any of the configured local directories [sf-startdaemon-debug] 09/02/10 14:39:51 [IPC Server handler 3 on 8012] INFO mapred.JobInProgress : Task 'attempt_200902101439_0001_m_000002_0' has completed task_200902101439_0001_m_000002 successfully. 
[sf-startdaemon-debug] 09/02/10 14:39:51 [IPC Server handler 3 on 8012] INFO mapred.JobInProgress : Choosing a non-local task task_200902101439_0001_m_000000 [sf-startdaemon-debug] 09/02/10 14:39:51 [IPC Server handler 3 on 8012] INFO mapred.JobTracker : Adding task 'attempt_200902101439_0001_m_000000_0' to tip task_200902101439_0001_m_000000, for tracker 'tracker_annecy:localhost/127.0.0.1:50642' [sf-startdaemon-debug] 09/02/10 14:39:51 [TaskTracker] INFO mapred.TaskTracker : LaunchTaskAction (registerTask): attempt_200902101439_0001_m_000000_0 [sf-startdaemon-debug] 09/02/10 14:39:51 [TaskLauncher for task] INFO mapred.TaskTracker : Trying to launch : attempt_200902101439_0001_m_000000_0 [sf-startdaemon-debug] 09/02/10 14:39:51 [TaskLauncher for task] INFO mapred.TaskTracker : In TaskLauncher, current free slots : 2 and trying to launch attempt_200902101439_0001_m_000000_0 [sf-startdaemon-debug] 09/02/10 14:39:51 [Thread-150] INFO mapred.JvmManager : In JvmRunner constructed JVM ID: jvm_200902101439_0001_m_-1171466319 [sf-startdaemon-debug] 09/02/10 14:39:51 [Thread-150] INFO mapred.JvmManager : JVM Runner jvm_200902101439_0001_m_-1171466319 spawned. [sf-startdaemon-debug] 09/02/10 14:39:52 [IPC Server handler 1 on 50642] INFO mapred.TaskTracker : JVM with ID: jvm_200902101439_0001_m_-1171466319 given task: attempt_200902101439_0001_m_000000_0 [sf-startdaemon-debug] 09/02/10 14:39:52 [JVM Runner jvm_200902101439_0001_m_-1171466319 spawned.] INFO mapred.JvmManager : JVM : jvm_200902101439_0001_m_-1171466319 exited. Number of tasks it ran: 0 [sf-startdaemon-debug] 09/02/10 14:39:54 [IPC Server handler 0 on 8012] INFO mapred.TaskInProgress : Error from attempt_200902101439_0001_m_000000_0: java.io.IOException: Invalid "io.sort.record.percent": 0.0 [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:648) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:423) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:493) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:293) [sf-startdaemon-debug] at org.apache.hadoop.mapred.Child.main(Child.java:151) [sf-startdaemon-debug] [sf-startdaemon-debug] 09/02/10 14:39:55 [Thread-150] INFO mapred.TaskTracker : addFreeSlot : current free slots : 2 [sf-startdaemon-debug] 09/02/10 14:39:57 [IPC Server handler 1 on 8012] INFO mapred.JobInProgress : Choosing a non-local task task_200902101439_0001_m_000000 [sf-startdaemon-debug] 09/02/10 14:39:57 [IPC Server handler 1 on 8012] INFO mapred.JobTracker : Adding task 'attempt_200902101439_0001_m_000000_1' to tip task_200902101439_0001_m_000000, for tracker 'tracker_annecy:localhost/127.0.0.1:50642' [sf-startdaemon-debug] 09/02/10 14:39:57 [IPC Server handler 1 on 8012] INFO mapred.JobTracker : Removed completed task 'attempt_200902101439_0001_m_000000_0' from 'tracker_annecy:localhost/127.0.0.1:50642' [sf-startdaemon-debug] 09/02/10 14:39:57 [TaskTracker] INFO mapred.TaskTracker : LaunchTaskAction (registerTask): attempt_200902101439_0001_m_000000_1 [sf-startdaemon-debug] 09/02/10 14:39:57 [TaskLauncher for task] INFO mapred.TaskTracker : Trying to launch : attempt_200902101439_0001_m_000000_1 [sf-startdaemon-debug] 09/02/10 14:39:57 [TaskLauncher for task] INFO mapred.TaskTracker : In TaskLauncher, current free slots : 2 and trying to launch attempt_200902101439_0001_m_000000_1 [sf-startdaemon-debug] 09/02/10 14:39:57 [Thread-160] INFO mapred.JvmManager : In 
JvmRunner constructed JVM ID: jvm_200902101439_0001_m_2056572577 [sf-startdaemon-debug] 09/02/10 14:39:57 [Thread-160] INFO mapred.JvmManager : JVM Runner jvm_200902101439_0001_m_2056572577 spawned. [sf-startdaemon-debug] 09/02/10 14:39:58 [IPC Server handler 0 on 50642] INFO mapred.TaskTracker : JVM with ID: jvm_200902101439_0001_m_2056572577 given task: attempt_200902101439_0001_m_000000_1 [sf-startdaemon-debug] 09/02/10 14:39:58 [JVM Runner jvm_200902101439_0001_m_2056572577 spawned.] INFO mapred.JvmManager : JVM : jvm_200902101439_0001_m_2056572577 exited. Number of tasks it ran: 0 [sf-startdaemon-debug] 09/02/10 14:39:59 [ExtDataNode] INFO datanode.DataNode : BlockReport of 5 blocks got processed in 5 msecs [sf-startdaemon-debug] 09/02/10 14:40:00 [IPC Server handler 2 on 8012] INFO mapred.TaskInProgress : Error from attempt_200902101439_0001_m_000000_1: java.io.IOException: Invalid "io.sort.record.percent": 0.0 [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:648) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:423) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:493) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:293) [sf-startdaemon-debug] at org.apache.hadoop.mapred.Child.main(Child.java:151) [sf-startdaemon-debug] [sf-startdaemon-debug] 09/02/10 14:40:01 [Thread-160] INFO mapred.TaskTracker : addFreeSlot : current free slots : 2 [sf-startdaemon-debug] 2009/02/10 14:40:02:564 GMT [INFO ][LivenessSender_HOST annecy:rootProcess:testJobSubmission] HOST annecy:rootProcess:testJobSubmission - Task Id : attempt_200902101439_0001_m_000002_0, Status : SUCCEEDED [sf-startdaemon-debug] 2009/02/10 14:40:02:566 GMT [INFO ][LivenessSender_HOST annecy:rootProcess:testJobSubmission] HOST annecy:rootProcess:testJobSubmission - Task Id : attempt_200902101439_0001_m_000000_0, Status : FAILED [sf-startdaemon-debug] 2009/02/10 14:40:02:567 GMT [INFO ][LivenessSender_HOST annecy:rootProcess:testJobSubmission] HOST annecy:rootProcess:testJobSubmission - Task Id : attempt_200902101439_0001_m_000002_0, Status : SUCCEEDED [sf-startdaemon-debug] 2009/02/10 14:40:02:568 GMT [INFO ][LivenessSender_HOST annecy:rootProcess:testJobSubmission] HOST annecy:rootProcess:testJobSubmission - Task Id : attempt_200902101439_0001_m_000000_0, Status : FAILED [sf-startdaemon-debug] 2009/02/10 14:40:03:246 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Initiating DataNode termination; serviceThread=Thread[DataNode,5,] service=DataNode {data=FSDataset{dirpath='/tmp/tempdir63916/current'}, localName='127.0.0.1:8024', storageID='DS-562223574-127.0.1.1-8024-1234276779531', xmitsInProgress=0, state=LIVE} [sf-startdaemon-debug] 2009/02/10 14:40:03:246 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating deployer thread [sf-startdaemon-debug] 2009/02/10 14:40:03:247 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Requesting thread termination [sf-startdaemon-debug] 2009/02/10 14:40:03:247 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - waiting for thread to finish [sf-startdaemon-debug] 2009/02/10 14:40:03:248 GMT [WARN ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Hadoop Service thread did not terminate within the expected shutdown period. 
Service is DataNode {data=FSDataset{dirpath='/tmp/tempdir63916/current'}, localName='127.0.0.1:8024', storageID='DS-562223574-127.0.1.1-8024-1234276779531', xmitsInProgress=0, state=LIVE} [sf-startdaemon-debug] 2009/02/10 14:40:03:248 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating hadoopService service ExtDataNode [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO datanode.DataNode : State change: DataNode is now CLOSED [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO datanode.DataNode : Terminating ExtDataNode [sf-startdaemon-debug] 2009/02/10 14:40:03:260 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Initiating TaskTracker termination; serviceThread=Thread[TaskTracker,5,RMI Runtime] service=tracker_annecy:localhost/127.0.0.1:50642 instance org.apache.hadoop.mapred.ExtTaskTracker@1ef3a22 in state LIVE; web port=50060 [sf-startdaemon-debug] 2009/02/10 14:40:03:261 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating deployer thread [sf-startdaemon-debug] 2009/02/10 14:40:03:261 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Requesting thread termination [sf-startdaemon-debug] 09/02/10 14:40:03 [TaskTracker] INFO mapred.TaskTracker : Interrupted. Closing down. [sf-startdaemon-debug] 2009/02/10 14:40:03:260 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Initiating NameNode termination; serviceThread=Thread[NameNode,5,] service=ExtNameNode instance ExtNameNode instance org.apache.hadoop.hdfs.server.namenode.ExtNameNode@1f0b7d3 in state LIVE in state LIVE at localhost/127.0.0.1:8021 , , IPC localhost/127.0.0.1:8020 [sf-startdaemon-debug] 2009/02/10 14:40:03:262 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating deployer thread [sf-startdaemon-debug] 2009/02/10 14:40:03:263 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Requesting thread termination [sf-startdaemon-debug] 2009/02/10 14:40:03:263 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - waiting for thread to finish [sf-startdaemon-debug] 2009/02/10 14:40:03:265 GMT [WARN ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Hadoop Service thread did not terminate within the expected shutdown period. 
Service is ExtNameNode instance ExtNameNode instance org.apache.hadoop.hdfs.server.namenode.ExtNameNode@1f0b7d3 in state LIVE in state LIVE at localhost/127.0.0.1:8021 , , IPC localhost/127.0.0.1:8020 [sf-startdaemon-debug] 2009/02/10 14:40:03:265 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating hadoopService service ExtNameNode [sf-startdaemon-debug] 2009/02/10 14:40:03:262 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Initiating JobTracker termination; serviceThread=Thread[JobTracker,5,RMI Runtime] service=JobTracker instance org.apache.hadoop.mapred.ExtJobTracker@174cb21 in state LIVE [sf-startdaemon-debug] 2009/02/10 14:40:03:262 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - waiting for thread to finish [sf-startdaemon-debug] 2009/02/10 14:40:03:267 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating deployer thread [sf-startdaemon-debug] 2009/02/10 14:40:03:267 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Requesting thread termination [sf-startdaemon-debug] 2009/02/10 14:40:03:268 GMT [INFO ][JobTracker] HOST annecy:rootProcess:testJobSubmission - Exiting JobTracker worker thread; service is JobTracker instance org.apache.hadoop.mapred.ExtJobTracker@174cb21 in state LIVE [sf-startdaemon-debug] 2009/02/10 14:40:03:270 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - waiting for thread to finish [sf-startdaemon-debug] 2009/02/10 14:40:03:270 GMT [WARN ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Hadoop Service thread did not terminate within the expected shutdown period. Service is JobTracker instance org.apache.hadoop.mapred.ExtJobTracker@174cb21 in state LIVE [sf-startdaemon-debug] 2009/02/10 14:40:03:270 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating hadoopService service JobTracker [sf-startdaemon-debug] 2009/02/10 14:40:03:273 GMT [WARN ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Hadoop Service thread did not terminate within the expected shutdown period. Service is tracker_annecy:localhost/127.0.0.1:50642 instance org.apache.hadoop.mapred.ExtTaskTracker@1ef3a22 in state LIVE; web port=50060 [sf-startdaemon-debug] 2009/02/10 14:40:03:273 GMT [INFO ][TerminatorThread] HOST annecy:rootProcess:testJobSubmission - Terminating hadoopService service tracker_annecy:localhost/127.0.0.1:50642 [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO namenode.NameNode : State change: NameNode is now CLOSED [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.ExtJobTracker : State change: JobTracker is now CLOSED [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.ExtTaskTracker : State change: TaskTracker is now CLOSED [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO namenode.NameNode : Closing NameNode [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.TaskRunner : attempt_200902101439_0001_m_000002_0 done; removing files. 
[sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.TaskTracker : Error cleaning up task runner: java.lang.NullPointerException [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.cleanup(TaskTracker.java:2514) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.jobHasFinished(TaskTracker.java:2412) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker.closeTaskTracker(TaskTracker.java:984) [sf-startdaemon-debug] at org.apache.hadoop.mapred.TaskTracker.innerClose(TaskTracker.java:942) [sf-startdaemon-debug] at org.apache.hadoop.util.Service.close(Service.java:313) [sf-startdaemon-debug] at org.smartfrog.services.hadoop.components.cluster.HadoopServiceImpl.terminateService(HadoopServiceImpl.java:362) [sf-startdaemon-debug] at org.smartfrog.services.hadoop.components.cluster.HadoopServiceImpl.terminateHadoopService(HadoopServiceImpl.java:344) [sf-startdaemon-debug] at org.smartfrog.services.hadoop.components.cluster.HadoopServiceImpl.sfTerminateWith(HadoopServiceImpl.java:109) [sf-startdaemon-debug] at org.smartfrog.sfcore.prim.PrimImpl.terminateNotifying(PrimImpl.java:1243) [sf-startdaemon-debug] at org.smartfrog.sfcore.prim.PrimImpl.sfTerminateQuietlyWith(PrimImpl.java:1356) [sf-startdaemon-debug] at org.smartfrog.sfcore.common.TerminatorThread.execute(TerminatorThread.java:211) [sf-startdaemon-debug] at org.smartfrog.sfcore.utils.SmartFrogThread.run(SmartFrogThread.java:279) [sf-startdaemon-debug] [sf-startdaemon-debug] 09/02/10 14:40:03 [Map-events fetcher for all reduce tasks on tracker_annecy:localhost/127.0.0.1:50642] INFO mapred.TaskTracker : Shutting down: Map-events fetcher for all reduce tasks on tracker_annecy:localhost/127.0.0.1:50642 [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : Stopping infoServer [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO ipc.Server : Stopping server on 50642 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 0 on 50642] INFO ipc.Server : IPC Server handler 0 on 50642: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 1 on 50642] INFO ipc.Server : IPC Server handler 1 on 50642: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 2 on 50642] INFO ipc.Server : IPC Server handler 2 on 50642: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 3 on 50642] INFO ipc.Server : IPC Server handler 3 on 50642: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server Responder] INFO ipc.Server : Stopping IPC Server Responder [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server listener on 50642] INFO ipc.Server : Stopping IPC Server listener on 50642 [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.TaskTracker : Shutting down StatusHttpServer [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO ipc.Server : Stopping server on 50020 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 0 on 50020] INFO ipc.Server : IPC Server handler 0 on 50020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 1 on 50020] INFO ipc.Server : IPC Server handler 1 on 50020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server listener on 50020] INFO ipc.Server : Stopping IPC Server listener on 50020 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server Responder] INFO ipc.Server : Stopping IPC Server Responder [sf-startdaemon-debug] 09/02/10 14:40:03 
[org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor@3fbbfc] INFO namenode.DecommissionManager : Interrupted Monitor [sf-startdaemon-debug] java.lang.InterruptedException: sleep interrupted [sf-startdaemon-debug] at java.lang.Thread.sleep(Native Method) [sf-startdaemon-debug] at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65) [sf-startdaemon-debug] at java.lang.Thread.run(Thread.java:619) [sf-startdaemon-debug] 09/02/10 14:40:03 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@15b0333] WARN namenode.FSNamesystem : ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO datanode.DataNode : Waiting for threadgroup to exit, active threads is 1 [sf-startdaemon-debug] 09/02/10 14:40:03 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@10952e8] WARN datanode.DataNode : DatanodeRegistration(127.0.0.1:8024, storageID=DS-562223574-127.0.1.1-8024-1234276779531, infoPort=8022, ipcPort=50020):DataXceiveServer: java.nio.channels.ClosedChannelException [sf-startdaemon-debug] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:103) [sf-startdaemon-debug] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130) [sf-startdaemon-debug] at java.lang.Thread.run(Thread.java:619) [sf-startdaemon-debug] [sf-startdaemon-debug] 09/02/10 14:40:03 [org.apache.hadoop.hdfs.server.datanode.DataBlockScanner@d36ff3] INFO datanode.DataBlockScanner : Exiting DataBlockScanner thread. [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO ipc.Server : Stopping server on 8020 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 0 on 8020] INFO ipc.Server : IPC Server handler 0 on 8020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 1 on 8020] INFO ipc.Server : IPC Server handler 1 on 8020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 2 on 8020] INFO ipc.Server : IPC Server handler 2 on 8020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 3 on 8020] INFO ipc.Server : IPC Server handler 3 on 8020: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server listener on 8020] INFO ipc.Server : Stopping IPC Server listener on 8020 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server Responder] INFO ipc.Server : Stopping IPC Server Responder [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : Stopping interTrackerServer [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO ipc.Server : Stopping server on 8012 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 0 on 8012] INFO ipc.Server : IPC Server handler 0 on 8012: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 3 on 8012] INFO ipc.Server : IPC Server handler 3 on 8012: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 2 on 8012] INFO ipc.Server : IPC Server handler 2 on 8012: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server listener on 8012] INFO ipc.Server : Stopping IPC Server listener on 8012 [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server handler 1 on 8012] INFO ipc.Server : IPC Server handler 1 on 8012: exiting [sf-startdaemon-debug] 09/02/10 14:40:03 [IPC Server Responder] INFO ipc.Server : Stopping IPC Server Responder [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : Stopping 
expireTrackersThread [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : Stopping retirer [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.EagerTaskInitializationListener : Stopping initer [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : Stopping expireLaunchingTasks [sf-startdaemon-debug] 09/02/10 14:40:03 [TerminatorThread] INFO mapred.JobTracker : stopped all jobtracker services [sf-startdaemon-debug] 09/02/10 14:40:03 [ExtDataNode] INFO datanode.DataNode : DatanodeRegistration(127.0.0.1:8024, storageID=DS-562223574-127.0.1.1-8024-1234276779531, infoPort=8022, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/tmp/tempdir63916/current'} [sf-system-test-junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 38.564 sec [sf-system-test-junit] [sf-system-test-junit] Testcase: testJobSubmission took 38.519 sec [sf-system-test-junit] Caused an ERROR [sf-system-test-junit] Timeout waiting for a test run to complete [sf-system-test-junit] Event history has 3 events [sf-system-test-junit] (unknown) -DeployedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] (unknown) -StartedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] (unknown) -TestStartedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] [sf-system-test-junit] - time limit: 30000 [sf-system-test-junit] TestTimeoutException:: Timeout waiting for a test run to complete [sf-system-test-junit] Event history has 3 events [sf-system-test-junit] (unknown) -DeployedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] (unknown) -StartedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] (unknown) -TestStartedEvent at Tue Feb 10 14:39:32 GMT 2009 alive: true [sf-system-test-junit] [sf-system-test-junit] - time limit: 30000, SmartFrog 3.17.005dev (2009-02-10 09:45:44 GMT) [sf-system-test-junit] at org.smartfrog.services.assertions.events.TestEventSink.runTestsToCompletion(TestEventSink.java:449) [sf-system-test-junit] at org.smartfrog.test.DeployingTestBase.runTestDeployment(DeployingTestBase.java:259) [sf-system-test-junit] at org.smartfrog.test.DeployingTestBase.completeTestDeployment(DeployingTestBase.java:302) [sf-system-test-junit] at org.smartfrog.test.DeployingTestBase.runTestsToCompletion(DeployingTestBase.java:338) [sf-system-test-junit] at org.smartfrog.test.DeployingTestBase.expectSuccessfulTestRunOrSkip(DeployingTestBase.java:439) [sf-system-test-junit] at org.smartfrog.services.hadoop.test.system.local.tracker.JobSubmissionTest.testJobSubmission(JobSubmissionTest.java:40) [sf-system-test-junit] [sf-system-test-junit] Test org.smartfrog.services.hadoop.test.system.local.tracker.JobSubmissionTest FAILED [sf-stopdaemon] > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
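Part of the problem above is that the harness only notices trouble through its own 30s limit, so a hung or failing submission surfaces as "Timeout waiting for a test run to complete" rather than as the underlying job failure. One way to make the failure mode clearer, sketched here with the plain JobClient API rather than the SmartFrog submitter (the method name and timeout handling are illustrative), is to bound the wait on the job itself and kill it when the deadline passes:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class BoundedJobWait {
        // Sketch: wait for completion with an explicit deadline so the test can
        // report the job's own failure instead of tripping the harness timeout.
        public static void awaitCompletion(JobConf conf, long timeoutMillis)
                throws Exception {
            RunningJob job = new JobClient(conf).submitJob(conf);
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (!job.isComplete()) {
                if (System.currentTimeMillis() > deadline) {
                    job.killJob();
                    throw new IllegalStateException("Job " + job.getID()
                            + " did not finish within " + timeoutMillis + " ms");
                }
                Thread.sleep(1000);
            }
            if (!job.isSuccessful()) {
                throw new IllegalStateException("Job " + job.getID() + " failed");
            }
        }
    }
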
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-10 23:08:10
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11533#action_11533 ] Steve Loughran commented on SFOS-1085: -------------------------------------- getting port in use errors, no specifics on which port is playing up [sf-startdaemon-debug] at org.smartfrog.sfcore.utils.SmartFrogThread.run(SmartFrogThread.java:279) [sf-startdaemon-debug] at org.smartfrog.sfcore.utils.WorkflowThread.run(WorkflowThread.java:117) [sf-startdaemon-debug] 09/02/10 23:04:55 [JobTracker] WARN mortbay.log : failed Server@1422384 [sf-startdaemon-debug] java.net.BindException: Address already in use [sf-startdaemon-debug] at sun.nio.ch.Net.bind(Native Method) [sf-startdaemon-debug] at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119) [sf-startdaemon-debug] at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) [sf-startdaemon-debug] at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) [sf-startdaemon-debug] at org.mortbay.jetty.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:315) [sf-startdaemon-debug] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) [sf-startdaemon-debug] at org.mortbay.jetty.Server.doStart(Server.java:233) [sf-startdaemon-debug] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) [sf-startdaemon-debug] at org.apache.hadoop.http.HttpServer.start(HttpServer.java:417) [sf-startdaemon-debug] at org.apache.hadoop.mapred.JobTracker.innerStart(JobTracker.java:1429) [sf-startdaemon-debug] at org.apache.hadoop.util.Service.start(Service.java:186) [sf-startdaemon-debug] at org.smartfrog.services.hadoop.components.cluster.HadoopServiceImpl.innerDeploy(HadoopServiceImpl.java:503) > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
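The BindException is coming out of the JobTracker's embedded Jetty server, so the web port (50030 by default) is still held by an earlier, not-fully-stopped run or by another daemon on the box. For a throwaway test cluster, one option is to let the HTTP servers bind to ephemeral ports so successive runs cannot collide; the property keys below are the standard ones for this Hadoop line, and the snippet is a sketch rather than the test's actual configuration:

    import org.apache.hadoop.mapred.JobConf;

    public class EphemeralWebPorts {
        // Sketch: port 0 asks the OS for any free port, avoiding
        // "Address already in use" on repeated test runs.
        public static JobConf withEphemeralWebPorts(JobConf conf) {
            conf.set("mapred.job.tracker.http.address", "0.0.0.0:0");
            conf.set("mapred.task.tracker.http.address", "0.0.0.0:0");
            conf.set("dfs.datanode.http.address", "0.0.0.0:0");
            return conf;
        }
    }
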
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-13 12:40:34
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11535#action_11535 ] Steve Loughran commented on SFOS-1085: -------------------------------------- currently failing with no input file [sf-startdaemon-debug] 09/02/13 11:42:06 [JVM Runner jvm_200902131141_0001_m_203057136 spawned.] INFO mapred.JvmManager : JVM : jvm_200902131141_0001_m_203057136 exited. Number of tasks it ran: 0 [sf-startdaemon-debug] 09/02/13 11:42:06 [IPC Server handler 3 on 8012] INFO mapred.TaskInProgress : Error from attempt_200902131141_0001_m_000000_1: java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt [sf-startdaemon-debug] at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352) [sf-startdaemon-debug] at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343) [sf-startdaemon-debug] at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312) [sf-startdaemon-debug] at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177) [sf-startdaemon-debug] at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347) [sf-startdaemon-debug] at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:67) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:412) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:510) [sf-startdaemon-debug] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:303) [sf-startdaemon-debug] at org.apache.hadoop.mapred.Child.main(Child.java:155) [sf-startdaemon-debug] 2009/02/13 11:42:08:181 GMT [INFO ][LivenessSender_HOST mor > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
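Here the map attempt cannot open /tests/mrtestsequence/in/in.txt even though the earlier listing showed the file present with size 0, which suggests the input is never fully written (or is removed again) before the task gets to it. A hedged sketch of materialising the input in HDFS before submission, using the plain FileSystem API; the path is the one from the log, the file contents are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TestInputWriter {
        // Sketch: write and close the test input before the job is submitted,
        // so the map task's LineRecordReader has something to open.
        public static Path writeInput(Configuration conf) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            Path in = new Path("/tests/mrtestsequence/in/in.txt");
            FSDataOutputStream out = fs.create(in, true);  // overwrite if present
            out.writeBytes("one\ntwo\nthree\n");           // illustrative contents
            out.close();                                   // close so readers see the data
            return in;
        }
    }
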
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-13 12:46:01
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11536#action_11536 ] Steve Loughran commented on SFOS-1085: -------------------------------------- tuning of the timeout code (when we start counting, etc.) means that the reporting is more reliable; the test failure now propagates all the way down. > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |
From: Steve L. (JIRA) <ji...@sm...> - 2009-02-25 16:36:52
|
[ http://jira.smartfrog.org/jira/browse/SFOS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved SFOS-1085. ---------------------------------- Fix Version/s: 3.17.x Resolution: Fixed done! the test works all the way round! > extend the job submission test, list input and output directories and have it do real work > ------------------------------------------------------------------------------------------ > > Key: SFOS-1085 > URL: http://jira.smartfrog.org/jira/browse/SFOS-1085 > Project: SmartFrog > Issue Type: Task > Components: _service_hadoop > Affects Versions: 3.17.004 > Reporter: Steve Loughran > Assignee: Steve Loughran > Fix For: 3.17.x > > -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://jira.smartfrog.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira |