org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs:

2010-08-23 · Source: original post · Category: Industry · Views: 230

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input

2. Input path does not exist exception

(1) Exception Description

After creating an input directory under Hadoop's current local directory and cp-ing some files into it, I started the cluster:

[root@localhost hadoop-0.19.0]# bin/hadoop namenode -format

[root@localhost hadoop-0.19.0]# bin/start-all.sh
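Before looking at the input path at all, it helps to confirm the daemons actually came up. A quick check, assuming a Sun JDK whose jps tool is on the PATH (this step is not in the original post):

```shell
# On a healthy pseudo-distributed node, jps should list NameNode, DataNode,
# SecondaryNameNode, JobTracker and TaskTracker in addition to Jps itself.
jps
```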

At this point you might assume that input already exists, so the wordcount job should run:

[root@localhost hadoop-0.19.0]# bin/hadoop jar hadoop-0.19.0-examples.jar wordcount input output

Instead, the job throws a pile of exceptions:

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:179)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:190)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:782)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1127)
        at org.apache.hadoop.examples.WordCount.run(WordCount.java:149)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:61)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

The sequence I had run beforehand to reproduce the exception was:

[root@localhost hadoop-0.19.0]# bin/hadoop fs -rmr input
Deleted hdfs://localhost:9000/user/root/input

[root@localhost hadoop-0.19.0]# bin/hadoop fs -rmr output
Deleted hdfs://localhost:9000/user/root/output

I deleted both directories because I had already run the example successfully once before.
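Before resubmitting, it is worth checking what actually exists in HDFS, since a relative path like input resolves against /user/root. These commands assume the running pseudo-distributed cluster from the setup above:

```shell
# List the HDFS home directory of the current user (/user/root here);
# if input does not appear, the wordcount job has nothing to read.
bin/hadoop fs -ls

# The same listing with the path spelled out explicitly:
bin/hadoop fs -ls /user/root
```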

(2) Exception Analysis

The cause hardly needs stating: the input directory exists only on the local filesystem and was never uploaded to HDFS, which is exactly why org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input is thrown.

As I recall, with hadoop-0.16.4 it was enough for the local input directory to exist and no upload command was needed; later versions no longer accept this.

All that is needed is to upload the directory:

[root@localhost hadoop-0.19.0]# bin/hadoop fs -put input input
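Putting it together, a minimal sketch of the whole recovery sequence. The install path and the DRY_RUN helper are assumptions for illustration, not part of the original post; DRY_RUN defaults to 1 (print only), so the script is safe to run without a cluster, and setting DRY_RUN=0 executes the commands for real:

```shell
# Path to the hadoop launcher; override with HADOOP_HOME for your install.
HADOOP="${HADOOP_HOME:-/root/hadoop-0.19.0}/bin/hadoop"

# run: echo the command when DRY_RUN is 1 (the default), execute it otherwise.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run "$HADOOP" fs -put input input                                  # upload local ./input to /user/root/input
run "$HADOOP" jar hadoop-0.19.0-examples.jar wordcount input output
run "$HADOOP" fs -cat output/part-00000                            # inspect the result
```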
