Description
HTTPFS wants to create a hadoop-httpfs directory in /var/run, but it can't: /var/run is owned by root with permissions rwxr-xr-x, so the httpfs user has no write access there.
This prevents the httpfs daemons from starting.
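For reference, the ownership and mode can be confirmed on the affected nodes (the httpfs account name assumes the standard CDH packaging):
- ls -ld /var/run
- id httpfs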
As a workaround, I created the directory manually as root on all the nodes where httpfs runs:
- mkdir -p /var/run/hadoop-httpfs/
- chown httpfs:httpfs /var/run/hadoop-httpfs/
This allows me to start the daemons, but only until the next reboot of the nodes: on Ubuntu, /var/run is a tmpfs, so its contents are lost at each boot.
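One possible stopgap (a sketch only, not something I have verified) is to recreate the directory at boot before the daemons come up, e.g. by adding the same two commands to /etc/rc.local, which runs as root late in boot on Ubuntu; the user/group names assume the standard CDH packaging:

#!/bin/sh -e
# /etc/rc.local - recreate the HTTPFS run directory lost to tmpfs on reboot
mkdir -p /var/run/hadoop-httpfs
chown httpfs:httpfs /var/run/hadoop-httpfs
exit 0

The real fix would presumably be for the httpfs init script itself to recreate the directory, since anything created manually under /var/run cannot survive a reboot.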
Even worse, Cloudera Manager doesn't seem to be aware that the service is not functional and reports it as running properly.
However, Hue reports a potential misconfiguration:
Configuration files located in /etc/hue
Potential misconfiguration detected. Please fix and restart HUE.
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://cloudera-node-02.mike.ro:14000/webhdfs/v1
Failed to access filesystem root
The Hue file browser doesn't work either (and the message below is misleading: the problem has nothing to do with being an HDFS superuser, as it fails even when I'm logged in as hdfs):
Cannot access: /. Note: you are a Hue admin but not a HDFS superuser (which is "hdfs").
[Errno 2] File / not found
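For what it's worth, the failure can be reproduced outside Hue by calling the HttpFS REST API directly (a LISTSTATUS on the root path; the user.name parameter is just illustrative):
- curl -i "http://cloudera-node-02.mike.ro:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"
With the webhdfs context failing to deploy (see the log below), this returns an error from Tomcat instead of a JSON directory listing.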
Finally, here is the error log of the HTTPFS daemon showing the issue:
+ exec /usr/lib/hadoop-httpfs/sbin/httpfs.sh run
mkdir: cannot create directory `/var/run/hadoop-httpfs/': Permission denied
Jan 7, 2013 7:15:53 PM org.apache.catalina.startup.Embedded initDirs
SEVERE: Cannot find specified temporary folder at /var/run/hadoop-httpfs/
Jan 7, 2013 7:15:54 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/j2sdk1.6-oracle/jre/lib/amd64/server:/usr/lib/jvm/j2sdk1.6-oracle/jre/lib/amd64:/usr/lib/jvm/j2sdk1.6-oracle/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Jan 7, 2013 7:15:54 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-14000
Jan 7, 2013 7:15:54 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 1227 ms
Jan 7, 2013 7:15:54 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jan 7, 2013 7:15:54 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.35
Jan 7, 2013 7:15:54 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Jan 7, 2013 7:15:55 PM org.apache.jasper.EmbeddedServletOptions <init>
SEVERE: The scratchDir you specified: /usr/lib/hadoop-httpfs/work/Catalina/localhost/_ is unusable.
Jan 7, 2013 7:15:55 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory webhdfs
Jan 7, 2013 7:15:55 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Jan 7, 2013 7:15:55 PM org.apache.catalina.core.StandardContext start
SEVERE: Context [/webhdfs] startup failed due to previous errors
Jan 7, 2013 7:15:55 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-14000
Jan 7, 2013 7:15:55 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 967 ms
I also attached the inspector output for my cluster.