Uploading a file to HDFS:
hadoop fs -put README.txt /
The command fails with:
WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /README.txt.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
put: File /README.txt.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
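The message "There are 0 datanode(s) running" can be confirmed from the master with `hdfs dfsadmin -report`, which prints a "Live datanodes (N):" line. A minimal sketch of pulling the live count out of such a line (the sample line here is a stand-in for real report output, and the report format is an assumption based on Hadoop 2.x):

```shell
# Stand-in for a line from: hdfs dfsadmin -report
report_line="Live datanodes (0):"

# Extract the live-DataNode count from the report line
live=$(printf '%s\n' "$report_line" | sed -n 's/^Live datanodes (\([0-9]*\)):$/\1/p')
echo "live datanodes: $live"
```

If this prints 0 while your DataNode processes appear started, the DataNodes failed to register with the NameNode, so the next step is to read the DataNode logs.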
Check the DataNode log on the slave node:
cd $HADOOP_HOME
cd logs
tail -f hadoop-root-datanode-slave1.log
The log shows:
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/192.168.204.130:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to master
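"All specified directories are failed to load" usually means the clusterID stored in the DataNode's data directory no longer matches the NameNode's, which happens when the NameNode is reformatted without wiping the DataNode directories. You can diagnose this by comparing the `clusterID=` lines of the two VERSION files (on this cluster they would live under `/usr/local/src/hadoop-2.6.1/dfs/name/current/VERSION` and `/usr/local/src/hadoop-2.6.1/dfs/data/current/VERSION`). The sketch below demonstrates the comparison on sample files in /tmp with hypothetical IDs:

```shell
# Create sample VERSION files standing in for the real NameNode/DataNode ones;
# CID-aaaa and CID-bbbb are made-up clusterIDs for illustration.
mkdir -p /tmp/demo/name/current /tmp/demo/data/current
printf 'clusterID=CID-aaaa\n' > /tmp/demo/name/current/VERSION
printf 'clusterID=CID-bbbb\n' > /tmp/demo/data/current/VERSION

# Extract and compare the clusterID from each file
nn_id=$(grep '^clusterID=' /tmp/demo/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^clusterID=' /tmp/demo/data/current/VERSION | cut -d= -f2)
if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match"
else
  echo "clusterID mismatch: NameNode=$nn_id DataNode=$dn_id"
fi
```

A mismatch confirms the stale-metadata situation that the fix below (wiping the directories and reformatting) resolves.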
Resolved [by reformatting the cluster]:
1. Stop the cluster [master only]
2. rm -rf /usr/local/src/hadoop-2.6.1/dfs/name/* [every node]
3. rm -rf /usr/local/src/hadoop-2.6.1/dfs/data/* [every node]
4. rm -rf /usr/local/src/hadoop-2.6.1/tmp/* [every node]
5. rm -rf /usr/local/src/hadoop-2.6.1/logs/* [every node]
6. Reformat: hadoop namenode -format [master only]
7. Start the cluster [master only]
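The steps above can be sketched as a script. This is a cautious outline, not a definitive runbook: `stop-all.sh`/`start-all.sh` are assumed stand-ins for however you stop and start this cluster, and the `rm -rf` loop must be run on every node while the other steps run on the master only. `DRY_RUN=1` (the default here) only prints each command so you can review it first:

```shell
#!/bin/sh
# DRY_RUN=1 prints commands instead of executing them (safe default).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Hadoop install directory from the text above
HADOOP_HOME=${HADOOP_HOME:-/usr/local/src/hadoop-2.6.1}

# 1. Stop the cluster (master only)
run "$HADOOP_HOME/sbin/stop-all.sh"

# 2-5. Wipe stale NameNode metadata, DataNode blocks, tmp and logs (EVERY node)
for d in dfs/name dfs/data tmp logs; do
  run rm -rf "$HADOOP_HOME/$d"/*
done

# 6. Reformat the NameNode (master only)
run hadoop namenode -format

# 7. Start the cluster (master only)
run "$HADOOP_HOME/sbin/start-all.sh"
```

Only set `DRY_RUN=0` once the printed commands match what you intend; wiping `dfs/name` and `dfs/data` destroys all existing HDFS data.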