Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 1.1.0
- Fix Version/s: 1.2.0
- Component/s: con.hive
- Labels: None
- Environment: CentOS, Linux version 2.6.18-164.6.1.el5 (mockbuild@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Tue Nov 3 16:12:36 EST 2009
- Target Version:
Description
- Global open files limit on machine: 1024 (a quick way to compare actual descriptor usage against this limit is sketched after the logs below)
- HUE server running under user: hue
- OS environment: CentOS, Linux version 2.6.18-164.6.1.el5 (mockbuild@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Tue Nov 3 16:12:36 EST 2009
- Error message in beeswax_server.out:
10/11/23 10:07:57 WARN hdfs.DFSClient: Problem renewing lease for DFSClient_1629526434
java.io.IOException: Call to my.host.name/172.22.0.17:8020 failed on local exception: java.net.SocketException: Too many open files
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:852)
    at org.apache.hadoop.ipc.Client.call(Client.java:820)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
    at $Proxy0.renewLease(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor161.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.renew(DFSClient.java:1050)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1062)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.SocketException: Too many open files
    at sun.nio.ch.Net.socket0(Native Method)
    at sun.nio.ch.Net.socket(Net.java:94)
    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:329)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:202)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:943)
    at org.apache.hadoop.ipc.Client.call(Client.java:788)
- Error message in runcpserver.log (a transport-handling sketch follows the traceback):
[23/Nov/2010 09:58:20 +0000] middleware INFO Processing exception: Internal error processing get_tables:
Traceback (most recent call last):
  File "/usr/share/hue/build/env/lib/python2.4/site-packages/Django-1.1.1-py2.4.egg/django/core/handlers/base.py", line 92, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/usr/share/hue/apps/beeswax/src/beeswax/views.py", line 62, in show_tables
    tables = db_utils.meta_client().get_tables("default", ".*")
  File "/usr/share/hue/desktop/core/src/desktop/lib/thrift_util.py", line 190, in wrapper
    ret = res(*args, **kwargs)
  File "/usr/share/hue/desktop/core/src/desktop/lib/thrift_util.py", line 237, in wrapper
    ret = res(*args, **kwargs)
  File "/usr/share/hue/apps/beeswax/src/beeswax/../../gen-py/hive_metastore/ThriftHiveMetastore.py", line 627, in get_tables
    return self.recv_get_tables()
  File "/usr/share/hue/apps/beeswax/src/beeswax/../../gen-py/hive_metastore/ThriftHiveMetastore.py", line 644, in recv_get_tables
    raise x
TApplicationException: Internal error processing get_tables
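The first log shows the JVM failing to open new sockets under the 1024-descriptor limit noted above. As a quick check of whether the Beeswax/Hue process is actually approaching that limit, a small diagnostic along these lines can be run on the host (a sketch only: Linux /proc layout assumed, BEESWAX_PID is a placeholder; run it as the hue user or root so /proc/<pid>/fd is readable):

    import os

    # Pid of the Beeswax/Hue server process (placeholder value).
    BEESWAX_PID = 12345

    # Each entry under /proc/<pid>/fd is one open file descriptor;
    # the report above says the per-process limit is 1024.
    open_fds = len(os.listdir('/proc/%d/fd' % BEESWAX_PID))
    print('open descriptors: %d of 1024' % open_fds)

If the count climbs steadily toward 1024 while Beeswax handles requests, descriptors are being leaked rather than merely spiking under load.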
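The second traceback shows get_tables failing inside Hue's thrift_util.py wrappers once descriptors are exhausted. One common way such exhaustion arises (an assumption here, not this bug's confirmed root cause) is Thrift transports that are opened per request and never closed. A minimal sketch of the close-on-exit pattern, using the standard Python Thrift API and the generated ThriftHiveMetastore client named in the traceback (host and port are placeholders; 9083 is the conventional metastore port):

    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from hive_metastore import ThriftHiveMetastore  # from Hue's gen-py tree

    def get_tables(host, port, database, pattern):
        # One TCP socket (one file descriptor) per metastore call.
        transport = TTransport.TBufferedTransport(TSocket.TSocket(host, port))
        protocol = TBinaryProtocol.TBinaryProtocol(transport)
        client = ThriftHiveMetastore.Client(protocol)
        transport.open()
        try:
            return client.get_tables(database, pattern)
        finally:
            # Without this close(), every call leaks one descriptor and the
            # process eventually hits "Too many open files".
            transport.close()

    tables = get_tables('metastore.host', 9083, 'default', '.*')

The final call mirrors the db_utils.meta_client().get_tables("default", ".*") invocation in views.py from the traceback above.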