Hadoop error: ClassNotFoundException
Hadoop development environment: Eclipse on Windows + Ubuntu 13.04 in a VM + hadoop-1.1.2 + JDK 1.7
Today, running the WordCount program from the Hadoop examples in Eclipse (with Eclipse on Windows), I got:
WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARN snappy.LoadSnappy: Snappy native library not loaded
INFO mapred.JobClient: Task Id : attempt_201312271733_0015_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.kai.hadoop.WordCount$TokenizerMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: com.kai.hadoop.WordCount$TokenizerMapper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:802)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:847)
... 8 more
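The root cause at the bottom of the trace is straightforward: the task JVM on the cluster asks its classloader for the mapper class by name, and that class is simply not on the task JVM's classpath (it only exists in the Eclipse project on Windows). A minimal stdlib sketch of that mechanism, with the class names used purely for illustration:

```java
public class CnfeDemo {
    // Returns true if the class can be loaded by the current classloader,
    // false when Class.forName throws ClassNotFoundException (which Hadoop
    // wraps in the RuntimeException seen in the trace above).
    static boolean tryLoad(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang.String is always on the classpath...
        System.out.println(tryLoad("java.lang.String"));
        // ...but the user's mapper is absent unless a job jar ships it.
        System.out.println(tryLoad("com.kai.hadoop.WordCount$TokenizerMapper"));
    }
}
```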
This program runs fine when launched directly under Ubuntu. I searched online for solutions:
(1) Most answers say to add job.setJarByClass(WordCount.class); but my program already sets that, so this was not the fix I needed.
(2) Another suggestion blames the checkReturnValue method of the FileUtil class in org.apache.hadoop.fs, and says to comment it out in the source and rebuild hadoop-core-1.1.2.jar; that did not help either.
(3) This post gave me the idea: http://blog.csdn.net/zklth/article/details/5816435 — since my Hadoop is also a pseudo-distributed cluster, packaging the program into a jar and running it on the master node does work.
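The first WARN line in the log ("No job jar file set... See JobConf(Class) or JobConf#setJar(String)") points at the actual gap: when the job is submitted from Eclipse, no jar containing the user classes is shipped to the cluster, and setJarByClass cannot help because the class was loaded from a bare bin/ directory rather than a jar. A hedged sketch of the commonly suggested workaround, assuming the classes have already been exported to a jar; the driver class name and jar path below are placeholders, not from the original program:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at a jar that actually contains the user classes.
        // "mapred.jar" is the Hadoop 1.x property behind JobConf#setJar(String);
        // the path is a placeholder for a jar exported from Eclipse.
        conf.set("mapred.jar", "C:\\work\\wordcount.jar");

        Job job = new Job(conf, "word count");
        // setJarByClass only finds a jar when the class itself came from one,
        // which is why it is not enough on its own when run from Eclipse.
        job.setJarByClass(WordCountDriver.class);
        // ... setMapperClass / setReducerClass / input & output paths as usual ...
    }
}
```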
I'm not sure whether this understanding is correct. For now, all I can do is write the code on Windows, package it into a jar, upload it, and run it on Linux. Do I really have to work this way? It seems far too cumbersome. I hope someone who knows can point me in the right direction (and hopefully the answer is not that I should switch to Linux for development).
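For the jar-and-upload workflow described above, a rough sketch; the paths, hostname, and username are illustrative placeholders, and the package name is from this post's example:

```shell
# Package the compiled classes into a jar (run from the project's bin/ directory)
jar cf wordcount.jar com/kai/hadoop/*.class

# Copy the jar to the (pseudo-distributed) master node
scp wordcount.jar user@ubuntu-vm:/home/user/

# Run it on the cluster; the jar ships the user classes to the task JVMs
hadoop jar wordcount.jar com.kai.hadoop.WordCount input output
```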