Project: CDH (READ-ONLY)
Issue: DISTRO-593

Hive create-index-with-deferred-rebuild action doesn't create index table storage with correct user and permission


    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: CDH4.3.0
    • Fix Version/s: None
    • Component/s: Hive
    • Labels:
    • Environment:
      CENTOS 6.1 with ORACLE JDK 7


I create a table T and its index idx_T_id under my own user ID (e.g. 'me'). However, create-index-with-deferred-rebuild creates the index table's directory under /user/hive/warehouse owned by user hive instead of my own user ID. When rebuilding the index, I can't drop the index table storage under /user/hive/warehouse because of the sticky bit set on that directory.
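For reference, a minimal sketch of the kind of DDL that triggers this, run as a non-hive user (table and column names here are illustrative, matching the `ranking` example below):

```shell
# Run as user 'me' (names are illustrative).
hive -e "
CREATE TABLE ranking (uid STRING, score INT);
CREATE INDEX idx_ranking_uid ON TABLE ranking (uid)
  AS 'COMPACT' WITH DEFERRED REBUILD;
"
# The index table directory then appears under /user/hive/warehouse
# owned by user 'hive', not 'me'.
hadoop fs -ls /user/hive/warehouse
```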

The sticky bit is set on both directories - the table's and its index table's - when the index is created with deferred rebuild.
      $ hadoop fs -ls /user/hive | grep warehouse
      drwxrwxrwt - hdfs supergroup /user/hive/warehouse
      $ hadoop fs -ls /user/hive/warehouse | grep ranking
      drwxrwxrwt - hive supergroup /user/hive/warehouse/default_ranking_idx_ranking_uid_
      drwxrwxrwt - hive supergroup /user/hive/warehouse/ranking
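The trailing `t` in `drwxrwxrwt` is the sticky bit; on HDFS, as on POSIX filesystems, it restricts deletion of a directory's entries to each entry's owner, the directory owner, or the superuser. A minimal local illustration, assuming a POSIX shell (the directory is a temporary one, not the warehouse path):

```shell
# Create a directory and set mode 1777 (rwxrwxrwt), as on the warehouse dir.
demo_dir=$(mktemp -d)
chmod 1777 "$demo_dir"
# 'ls -ld' shows the trailing 't'; with the sticky bit set, entries inside
# can only be removed by their owner, the directory owner, or root.
ls -ld "$demo_dir"
rmdir "$demo_dir"
```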

I used my own ID to rebuild, and it failed due to the sticky bit set on the index table storage.
      $ hive
      hive> show index on ranking;
      idx_ranking_uid ranking uid default_ranking_idx_ranking_uid_ compact
      Time taken: 1.595 seconds
      hive> alter index idx_ranking_uid on ranking rebuild;
      Loading data to table default.default_ranking_idx_ranking_uid_
      rmr: DEPRECATED: Please use 'rm -r' instead.
      rmr: Failed to move to trash: hdfs://myNN/user/hive/warehouse/default_ranking_idx_ranking_uid_. Consider using -skipTrash option
      Failed with exception Permission denied by sticky bit setting: user=ME, inode="/user/hive/warehouse/default_ranking_idx_ranking_uid_":hive:supergroup:drwxrwxrwt
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkStickyBit(FSPermissionChecker.java:245)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:146)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:2816)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:2777)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2764)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:621)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:408)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44968)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
      MapReduce Jobs Launched:
      Job 0: Map: 3 Reduce: 1 Cumulative CPU: 5.45 sec HDFS Read: 2994 HDFS Write: 335 SUCCESS
      Total MapReduce CPU Time Spent: 5 seconds 450 msec

      My workaround:
I sudo to user hive to drop the index table storage, then switch back to my own ID to rebuild.
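The workaround above can be sketched as follows (assuming sudo access to the hive account; the warehouse path matches the listing earlier):

```shell
# Drop the index table storage as the 'hive' user, who owns the directory
# and is therefore not blocked by the sticky bit on the warehouse.
sudo -u hive hadoop fs -rm -r /user/hive/warehouse/default_ranking_idx_ranking_uid_
# Then rebuild the index under my own ID.
hive -e "ALTER INDEX idx_ranking_uid ON ranking REBUILD;"
```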




            • Assignee:
              wayne2chicago Wayne
            • Votes: 0


              • Created: