Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.0.0
- Fix Version/s: None
- Component/s: Examples
- Labels: None
- Environment: 5.2 VM
Description
This description and the code that follows are obsolete.
"In order to access our Hive-backed datasets from remote Spark tasks, we need to register some JARs in Spark's equivalent of the Hadoop DistributedCache:
addJarFromClass(sparkContext, getClass());
addJars(sparkContext, System.getenv("HIVE_HOME"), "lib");
sparkContext.addFile(System.getenv("HIVE_HOME")+"/conf/hive-site.xml");"
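For context, `addJarFromClass` and `addJars` are helper methods whose bodies are not shown in the quoted (and obsolete) snippet. A minimal sketch of what an `addJars`-style helper might do, assuming it simply enumerates the `*.jar` files under `$HIVE_HOME/lib` so each path can be handed to `SparkContext.addJar`, is shown below. The class name `JarCollector` and the method `collectJars` are hypothetical, introduced only for illustration:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an addJars-style helper: gather the absolute paths
// of every *.jar directly under baseDir/subDir (e.g. $HIVE_HOME/lib), so the
// caller can register each one with sparkContext.addJar(path).
public class JarCollector {

    // Returns the absolute paths of all *.jar files directly under baseDir/subDir.
    public static List<String> collectJars(String baseDir, String subDir) {
        List<String> jars = new ArrayList<>();
        File dir = new File(baseDir, subDir);
        File[] files = dir.listFiles();
        if (files == null) {
            return jars; // directory missing or unreadable
        }
        for (File f : files) {
            if (f.isFile() && f.getName().endsWith(".jar")) {
                jars.add(f.getAbsolutePath());
            }
        }
        return jars;
    }

    public static void main(String[] args) {
        // In the quoted snippet each collected path would be registered via
        // sparkContext.addJar(path); here we only print the paths found.
        for (String path : collectJars(System.getenv("HIVE_HOME"), "lib")) {
            System.out.println(path);
        }
    }
}
```

Registering the JARs this way (plus shipping `hive-site.xml` with `addFile`) is what made the Hive client classes and configuration visible to remote Spark tasks, analogous to Hadoop's DistributedCache.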