Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Won't Fix
- Affects Version/s: CDH3u4
- Fix Version/s: None
- Component/s: Hadoop Common
- Labels: None
Description
repro:
1. have an existing Hadoop 0.20.2 cluster with data in HDFS
2. upgrade the installation to CDH3u4
3. upgrade HDFS (which, for some reason, is required) using the following code:
...
String[] args = { "-upgrade" }; // assumed "-upgrade"; the original initializer did not survive formatting
NameNode nameNode = NameNode.createNameNode(args, null);
UpgradeStatusReport report = null;
String status = null;
System.out.println("Upgrading Name Node to version -19..."); // (authorized)
while (true) {
    report = nameNode.distributedUpgradeProgress(UpgradeAction.GET_STATUS);
    status = report.getStatusText(true);
    System.out.println(status); // (authorized)
    if (status.contains("Upgrade for version -19 has been completed."))
        break; // upgrade finished; fall through to finalization
    Thread.sleep(3 * 1000L); // poll every three seconds
}
System.out.println("Finalizing upgrade..."); // (authorized)
nameNode.finalizeUpgrade();
System.out.println("Shutting down Name Node..."); // (authorized)
nameNode.stop();
nameNode.join();
System.out.println("Successfully upgraded Name Node."); // (authorized)
...
4. attempt to run an M/R job
5. see the following error:
INFO | jvm 1 | 2012/06/27 18:17:18 | org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=<userA>, access=EXECUTE, inode="/home/<redacted>/services/<redacted>/hadoop/mapred/system":<userB>:supergroup:drwx------
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
INFO | jvm 1 | 2012/06/27 18:17:18 | at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:860)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:558)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:4529)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1321)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1262)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2602)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2566)
INFO | jvm 1 | 2012/06/27 18:17:18 | Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=<userA>, access=EXECUTE, inode="/home/<redacted>/services/<redacted>/hadoop/mapred/system":<userB>:supergroup:drwx------
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:203)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:159)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:125)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5207)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5186)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:1994)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:819)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
INFO | jvm 1 | 2012/06/27 18:17:18 | at java.lang.reflect.Method.invoke(Method.java:597)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
INFO | jvm 1 | 2012/06/27 18:17:18 | at java.security.AccessController.doPrivileged(Native Method)
INFO | jvm 1 | 2012/06/27 18:17:18 | at javax.security.auth.Subject.doAs(Subject.java:396)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
INFO | jvm 1 | 2012/06/27 18:17:18 |
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.Client.call(Client.java:1107)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
INFO | jvm 1 | 2012/06/27 18:17:18 | at $Proxy7.getFileInfo(Unknown Source)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
INFO | jvm 1 | 2012/06/27 18:17:18 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
INFO | jvm 1 | 2012/06/27 18:17:18 | at java.lang.reflect.Method.invoke(Method.java:597)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
INFO | jvm 1 | 2012/06/27 18:17:18 | at $Proxy7.getFileInfo(Unknown Source)
INFO | jvm 1 | 2012/06/27 18:17:18 | at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:858)
INFO | jvm 1 | 2012/06/27 18:17:18 | ... 6 more
INFO | jvm 1 | 2012/06/27 18:17:18 |
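The denied operation is the traverse check performed in FSPermissionChecker.checkTraverse: to stat the job token file, the calling user needs EXECUTE permission on every ancestor directory of it, and the mapred system directory in the trace is drwx------ and owned by userB. As a diagnostic aid, here is a minimal sketch (not part of the original report; the class name and command-line handling are illustrative) that walks from the system directory up to the root and prints owner, group, and permission bits for each level, so the exact directory blocking traversal can be identified:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative helper, not from the original report: walk up from
// mapred.system.dir and print owner/group/permissions of every ancestor.
// The EXECUTE denial above is a traverse check, so each ancestor must
// grant execute to the user performing the getFileInfo call.
public class SystemDirTraverseCheck {
    public static void main(String[] cliArgs) throws Exception {
        Configuration conf = new Configuration();
        // Take the directory from the command line, falling back to the
        // mapred.system.dir property configured on the cluster.
        Path dir = new Path(cliArgs.length > 0 ? cliArgs[0]
                                               : conf.get("mapred.system.dir"));
        FileSystem fs = FileSystem.get(conf);
        for (Path p = dir; p != null; p = p.getParent()) {
            FileStatus st = fs.getFileStatus(p);
            System.out.println(String.format("%s  %s:%s  %s",
                    p, st.getOwner(), st.getGroup(), st.getPermission()));
        }
    }
}

Any level that grants execute to neither the group nor "other" for the user shown in the trace is where getFileInfo fails with the AccessControlException above.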
Diagnosis: the NameNode, JobTracker, and SecondaryNameNode are running on one host as userA, while the TaskTrackers and DataNodes are running on separate hosts as userB. Perhaps this split-user configuration no longer works in CDH3u4, whereas it did in stock Hadoop 0.20.2? Could you provide a patch, update, or workaround?
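Pending a proper fix, one possible workaround, offered only as a sketch under the assumption that the path in the trace is the cluster's mapred.system.dir and not as a verified solution, is to hand ownership of the system directory to the user that the permission check is evaluated against (userA in the trace) while keeping it at 700, and to make sure every ancestor is at least traversable. The class name is illustrative; "userA" and "supergroup" stand in for the redacted values, and the code must run as an HDFS superuser:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Unverified workaround sketch; run as an HDFS superuser.
public class SystemDirWorkaround {
    public static void main(String[] cliArgs) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path systemDir = new Path(cliArgs.length > 0 ? cliArgs[0]
                                                     : conf.get("mapred.system.dir"));

        // Give the system directory to the user the check is made for
        // (userA in the trace), keeping it closed to everyone else ...
        fs.setOwner(systemDir, "userA", "supergroup");
        fs.setPermission(systemDir, new FsPermission((short) 0700));

        // ... and add execute-for-others on any ancestor that lacks it,
        // since the failure above happened on the traverse check.
        for (Path p = systemDir.getParent(); p != null; p = p.getParent()) {
            FileStatus st = fs.getFileStatus(p);
            if (!st.getPermission().getOtherAction().implies(FsAction.EXECUTE)) {
                short mode = st.getPermission().toShort();
                fs.setPermission(p, new FsPermission((short) (mode | 0001)));
            }
        }
    }
}

Alternatively, running the JobTracker and the TaskTrackers as the same Unix user avoids the split entirely, at the cost of changing the current deployment layout.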