Hadoop "Error initializing attempt": ENOENT: No such file or directory
hadoop cluster - wrong permissions after upgrade (Stack Overflow)

After upgrading my cluster to hadoop-1.0.0 it became impossible to run any jobs successfully. I'm getting the following error in my jobtracker logs:

2012-01-31 10:25:13,558 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201201310126_0002_m_000005_0: Error initializing attempt_201201310126_0002_m_000005_0:
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
    at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:692)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:647)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
    at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:239)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:196)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1209)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1184)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1099)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2382)
    at java.lang.Thread.run(Thread.java:662)
2012-01-31 10:25:13,558 ERROR org.apache.hadoop.mapred.TaskStatus: Trying to set finish time for task attempt_201201310126_0002_m_000005_0 when no start time is set, stackTrace is : java.lang.Exception
    at org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:145)
    at org.apache.hadoop.mapred.TaskInProgress.incompleteSubTask(TaskInProgress.java:670)
    at org.apache.hadoop.mapred.JobInProgress.f
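The NativeIO.chmod / RawLocalFileSystem.mkdirs frames in the trace above mean the TaskTracker tried to create and chmod a job directory on the local filesystem and part of the path was missing or inaccessible. A minimal sketch for checking the directories involved; the two paths below are assumptions, so substitute the actual values of mapred.local.dir and hadoop.log.dir from your configuration:

```shell
#!/bin/sh
# Print ownership/permissions of a directory the TaskTracker needs,
# or flag it as missing (a missing path here is exactly what surfaces
# as "ENOENT: No such file or directory" in the task logs).
check_dir() {
  if [ -d "$1" ]; then
    ls -ld "$1"
  else
    echo "missing: $1"
  fi
}

check_dir /var/lib/hadoop-0.20/cache/mapred   # assumed mapred.local.dir
check_dir /var/log/hadoop-0.20                # assumed hadoop.log.dir
```

If a directory exists but is owned by the wrong user (e.g. root instead of the mapred/hdfs daemon user), the chmod in JobLocalizer fails the same way.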
Tip: handle failed tasks throwing ENOENT errors in Hadoop (from http://blog.timmattison.com/archives/2012/03/21/tip-handle-failed-tasks-throwing-enoent-errors-in-hadoop)

Task Id : attempt_201203212250_0001_m_000002_1, Status : FAILED
[exec] Error initializing attempt_201203212250_0001_m_000002_1:
[exec] ENOENT: No such file or directory
[exec]     at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
[exec]     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:521)
[exec]     at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
[exec]     at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:240)
[exec]     at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:216)
[exec]     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1352)
[exec]     at java.security.AccessController.doPrivileged(Native Method)
[exec]     at javax.security.auth.Subject.doAs(Subject.java:416)
[exec]     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
[exec]     at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1327)
[exec]     at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1242)
[exec]     at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2541)
[exec]     at org.apac

(A related question: http://stackoverflow.com/questions/9075870/hadoop-cluster-wrong-permissions-after-upgrade)

It wasn't immediately apparent to me from the error messages which file wasn't found, so I checked the logs, the JobTracker, my code, and ran some known-good jobs that also failed; basically everything I could think of. It turns out that, because I had accidentally run a script as "root" (don't worry, it was only on my desktop), several files in the hdfs user's home directory had changed ownership to "root". Because of that, Hadoop was unable to create files in the /usr/lib/hadoop-0.20 directory. NOTE: These steps assume you are using Hadoop 0.20.
Adjust the paths in the commands accordingly if you aren't. If you want a quick fix, try these steps (only if you take full responsibility for anything that may go wrong): stop Hadoop using the stop-all.sh script
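The quick fix described above (stop the daemons, hand ownership of the install and log trees back to the daemon user, restart) can be sketched as follows. The user "hdfs", group "hadoop", and the 0.20 paths are assumptions taken from the post; match them to your own installation before running anything as root:

```shell
#!/bin/sh
# Recursively restore ownership of the given directories.
# $1 = owner spec (e.g. "hdfs:hadoop"), remaining args = directories.
fix_ownership() {
  owner="$1"
  shift
  for d in "$@"; do
    chown -R "$owner" "$d"
  done
}

# Typical sequence on the cluster (run as root; paths are assumptions):
#   /usr/lib/hadoop-0.20/bin/stop-all.sh
#   fix_ownership hdfs:hadoop /usr/lib/hadoop-0.20 /var/log/hadoop-0.20
#   /usr/lib/hadoop-0.20/bin/start-all.sh
```

Recursive chown is a blunt instrument; if only a few files were taken over by root, `find /usr/lib/hadoop-0.20 -user root` first will show you exactly what needs fixing.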
Learn: FATAL mapred.JobTracker ENOENT: No such file or directory (from http://surachartopun.com/2013/01/learn-fatal-mapredjobtracker-enoent-no.html)

Nothing much for this post; I just needed to keep the information in my blog. After I installed Hadoop (rpm), I started the namenode & datanode, but had an issue with the jobtracker. When I started it, it showed the error: FATAL mapred.JobTracker: ENOENT: No such file or directory.

-bash-4.1$ hadoop jobtracker &
[1] 14455
-bash-4.1$ 13/01/04 20:07:30 INFO mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = centos/192.168.111.80
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:51:29 UTC 2012
************************************************************/
13/01/04 20:07:30 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/01/04 20:07:30 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/01/04 20:07:30 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
13/01/04 20:07:30 INFO impl.MetricsSystemImpl: JobTracker metrics system started
13/01/04 20:07:30 INFO impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
13/01/04 20:07:31 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:07:31 INFO mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
13/01/04 20:07:31 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
13/01/04 20:07:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
13/01/04 20:07:31 INFO mapred.JobTracker: Starting jobtracker with owner as hdfs
13/01/04 20:07:31 INFO impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
13/01/04 20:07:31 INFO ipc.Server
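A FATAL ENOENT at JobTracker startup usually means a local directory named in the configuration (for example mapred.local.dir or hadoop.log.dir) does not exist yet on that host. A hedged sketch that creates any missing entries of such a property; Hadoop writes these values as comma-separated lists, and the example value in the comment is an assumption, so take yours from mapred-site.xml:

```shell
#!/bin/sh
# Create every missing directory in a comma-separated list, the format
# used by properties like mapred.local.dir.
ensure_dirs() {
  echo "$1" | tr ',' '\n' | while IFS= read -r d; do
    if [ -n "$d" ] && [ ! -d "$d" ]; then
      mkdir -p "$d"
    fi
  done
}

# e.g. for the value configured in mapred-site.xml (assumed path):
#   ensure_dirs "/var/lib/hadoop/mapred/local,/var/log/hadoop"
```

Remember to chown the freshly created directories to the daemon user (hdfs/mapred) afterwards, otherwise the next failure will be EACCES instead of ENOENT.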
ENOENT: No such file or directory, job failing after upgrade to CDH5 (from https://qnalist.com/questions/5372999/enoent-no-such-file-or-directory-job-failing-after-upgrade-to-cdh5)

Hi Folks, I am trying to run a simple pi job after an upgrade from CDH4 to CDH5, but it is failing. I checked the permissions and they look good to me. Trace:

-bash-4.1$ hadoop jar hadoop-mapreduce-examples-2.4.0.jar pi 16 100000
Number of Maps  = 16
Samples per Map = 100000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
14/10/28 14:16:46 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/10/28 14:16:46 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId
14/10/28 14:16:46 INFO Configuration.deprecation: slave.host.name is deprecated. Instead, use mapreduce.tasktracker.host.name
14/10/28 14:16:47 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/user/jsundi277005082/.staging/job_local277005082_0001
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:479)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:599)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:179)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples
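Note two tell-tale details in the trace above: the job id is job_local… and the staging area is a file: URI, i.e. the client fell back to the LocalJobRunner instead of submitting to the cluster. After a CDH4 to CDH5 upgrade, one common cause of that fallback is a client-side mapred-site.xml that no longer tells MapReduce to use YARN. A hedged sketch of the standard Hadoop 2 client setting (the file location varies between installations, and this may not have been the root cause in this particular thread):

```xml
<!-- client-side mapred-site.xml, e.g. under /etc/hadoop/conf (assumed path) -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

With this set, the examples jar submits job_<timestamp> ids to the ResourceManager and the staging area moves to the user's HDFS home directory rather than the local filesystem.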