  CDH (READ-ONLY) / DISTRO-219

Investigate service scripts on SLES11 with secure configuration

    Details

      Description

      From Matthew Goeke:

      First, thank you for the replies; they helped us catch a couple of
      issues.

      So far it looks like jsvc is being found correctly, but inside the
      $HADOOP_HOME/bin/hadoop script we are not able to get it to set
      $_HADOOP_RUN_MODE to jsvc (everything we have tried routes to
      normal). We temporarily altered the init.d script to not change the
      user variable to hdfs and to leave it as root, and the datanode started
      properly, but obviously it was then running as root rather than as hdfs
      as we would like. When we took a closer look at the
      $HADOOP_HOME/bin/hadoop script, it looks like the only way to get it to
      set the run mode to jsvc is for the EUID to be root, yet that will never
      happen if the init.d script has already switched to the hdfs user before
      invoking it. Are we missing something?

      Matt
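
      A simplified sketch of the run-mode selection being described above
      (this is not the literal CDH3 bin/hadoop code; the variable names follow
      the thread, and the jsvc-path check is an assumption):

          # Sketch only: how the run mode ends up "normal" for non-root users.
          if [ "$EUID" -eq 0 ] && [ -x "$_JSVC_PATH" ]; then
            # Only a root invocation reaches the jsvc path; jsvc itself later
            # drops privileges to the configured unprivileged user (e.g. hdfs).
            _HADOOP_RUN_MODE="jsvc"
          else
            # Anything started after an init.d script has already switched to
            # the hdfs user falls through here.
            _HADOOP_RUN_MODE="normal"
          fi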

      On Apr 22, 1:21 pm, Bruno Mahé <br...@cloudera.com> wrote:
      > > Hi Matt,
      > >
      > > Just in case:
      > >
      > > Have you edited /etc/default/hadoop-0.20 to reflect your changes?
      > >
      > > Thanks,
      > > Bruno
      > >
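
      The overrides Bruno refers to live in /etc/default/hadoop-0.20 and would
      look roughly like the lines below. The exact variable set depends on the
      packaging, and /opt/custom/hadoop is only a placeholder for a relocated
      install, not a path taken from this thread:

          # /etc/default/hadoop-0.20 -- rough sketch, not a verified CDH3 template.
          export HADOOP_HOME=/opt/custom/hadoop          # placeholder path
          export HADOOP_CONF_DIR=/etc/hadoop-0.20/conf
          # User the init scripts should hand the datanode to (per the thread).
          export HADOOP_DATANODE_USER=hdfs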
      > > On 04/22/2011 09:51 AM, Aaron T. Myers wrote:
      > >
      >> > > Hey Matt,
      > >
      >> > > Most likely thing is probably that the $HADOOP_HOME/bin/hadoop script
      >> > > can't find jsvc because of the way you've moved things around. If that
      >> > > script can't find jsvc at the path it expects, it will try to start
      >> > > the DN without using jsvc and you'll see that error you pasted below.
      >> > > The script assumes that jsvc is at
      >> > > "_JSVC_PATH=${HADOOP_HOME}/sbin/${JAVA_PLATFORM}/jsvc".
      > >
      >> > > --
      >> > > Aaron T. Myers
      >> > > Software Engineer, Cloudera
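
      A quick way to check whether jsvc is actually at the path the script
      expects (the JAVA_PLATFORM value below is an example; bin/hadoop derives
      it at runtime, e.g. via org.apache.hadoop.util.PlatformName):

          # Sketch: confirm jsvc exists at ${HADOOP_HOME}/sbin/${JAVA_PLATFORM}/jsvc.
          JAVA_PLATFORM=Linux-amd64-64   # example value; substitute your platform
          ls -l "${HADOOP_HOME}/sbin/${JAVA_PLATFORM}/jsvc"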
      > >
      >> > > On Fri, Apr 22, 2011 at 8:58 AM, Matthew Goeke <mmg...@monsanto.com
      >> > > <mmg...@monsanto.com>> wrote:
      > >
      >> > > We are using the CDH3 stable RPMs on SLES 11 SP1 and have moved the
      >> > > Cloudera files from their default location in /usr/lib/* to our custom
      >> > > locations. We have the cluster working fine in non-secure mode. When
      >> > > we try to set up Kerberos security using Active Directory, we can start
      >> > > the namenode process, and a datanode process if we start the datanode
      >> > > process from the command line as root (i.e. "$ HADOOP_DATANODE_USER=hdfs
      >> > > sudo -E hadoop datanode"). However, when we try to start the datanode
      >> > > process using the Cloudera init.d scripts, we get the following
      >> > > error:
      > >
      >> > > ---------------------------------------------------------------------------
      > >
      >> > > 2011-04-21 15:52:35,213 INFO
      >> > > org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
      >> > > /************************************************************
      >> > > STARTUP_MSG: Starting DataNode
      >> > > STARTUP_MSG: host = mynode.fq.dn/10.30.xxx.xx
      >> > > STARTUP_MSG: args = []
      >> > > STARTUP_MSG: version = 0.20.2-cdh3u0
      >> > > STARTUP_MSG: build = -r 81256ad0f2e4ab2bd34b04f53d25a6c23686dd14;
      >> > > compiled by 'hudson' on Fri Mar 25 20:19:33 PDT 2011
      >> > > ************************************************************/
      >> > > 2011-04-21 15:52:35,729 INFO
      >> > > org.apache.hadoop.security.UserGroupInformation: JAAS Configuration
      >> > > already set up for Hadoop, not re-installing.
      >> > > 2011-04-21 15:52:36,520 INFO
      >> > > org.apache.hadoop.security.UserGroupInformation: Login successful for
      >> > > user hdfs/mynode.fq...@MY.DOMAIN using keytab file
      >> > > /hadoop/keytabs/hdfs.keytab
      >> > > 2011-04-21 15:52:36,521 ERROR
      >> > > org.apache.hadoop.hdfs.server.datanode.DataNode:
      >> > > java.lang.RuntimeException: Cannot start secure cluster without
      >> > > privileged resources. In a secure cluster, the DataNode must be
      >> > > started from within jsvc. If using Cloudera packages, please install
      >> > > the hadoop-0.20-sbin package.
      > >
      >> > > For development purposes ONLY you may override this check by setting
      >> > > dfs.datanode.require.secure.ports to false. *** THIS WILL OPEN A
      >> > > SECURITY HOLE AND MUST NOT BE USED FOR A REAL CLUSTER ***.
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:306)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:280)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1533)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1473)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1491)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1616)
      >> > > at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1626)
      > >
      >> > > 2011-04-21 15:52:36,522 INFO
      >> > > org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
      >> > > /************************************************************
      >> > > SHUTDOWN_MSG: Shutting down DataNode at mynode.fq.dn/10.30.xxx.xx
      > >
      >> > > ---------------------------------------------------------------------------
      > >
      >> > > Another interesting point is that if we move the sbin directory back
      >> > > to the default location we actually see more Kerberos debugging output
      >> > > than in the custom location. We have tried creating a symbolic link
      >> > > at /usr/lib/hadoop pointing to the new location. If anyone has
      >> > > suggestions around potential workarounds please let me know.
      > >
      >> > > Thanks,
      >> > > Matt
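
      The EUID requirement Matt asks about in his follow-up is tied to how
      jsvc works: it has to be started as root so it can bind the privileged
      DataNode ports, and it is jsvc itself, not the init script, that then
      drops to the hdfs user. A sketch of that general pattern, under
      assumptions (the starter class name comes from the Hadoop security
      branch and the paths are illustrative; neither is taken from this
      thread):

          # Started as root; jsvc binds the privileged ports, then setuids to -user.
          sudo "$_JSVC_PATH" \
              -user hdfs \
              -cp "$CLASSPATH" \
              -outfile "$HADOOP_LOG_DIR/jsvc.out" \
              -errfile "$HADOOP_LOG_DIR/jsvc.err" \
              -pidfile /var/run/hadoop/hadoop-hdfs-datanode.pid \
              org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter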

            People

            • Assignee: Roman V Shaposhnik (rvs)
            • Reporter: Bruno Mahé (bruno)
            • Votes: 0
            • Watchers: 0
