Spark 2.0 introduces a new entry point, SparkSession, to replace SQLContext and HiveContext. In the current Livy code, the interpreters already support SparkSession, while the Job API still uses the old entry points. It would be better for the Job API to also support the new Spark 2.0 entry point.
The main obstacle is how to expose SparkSession: it is a Spark 2.0-only class, and the Job API declares explicit types in its public signatures, which makes it hard to fall back on reflection.
Another task is to support SparkSession in JobContextImpl.
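To illustrate the runtime-detection idea referred to below, here is a minimal sketch (plain Java, no Spark dependency, not actual Livy code) of probing the classpath for the Spark 2.0-only class via reflection; the class name is the real SparkSession FQCN, but the helper itself is a hypothetical illustration:

```java
public class SparkVersionProbe {
    /** Returns true if the named class can be loaded from the current classpath. */
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // SparkSession exists only in Spark 2.x, so its presence on the
        // classpath implies we are running against Spark 2.x.
        boolean isSpark2 = isOnClasspath("org.apache.spark.sql.SparkSession");
        System.out.println("Spark 2.x entry point available: " + isSpark2);
    }
}
```

Detection like this is cheap, but it only tells us the class exists; the Job API would still need a way to hand a correctly typed SparkSession to user code.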
As far as I can see, there are two possible solutions:
- One solution is to build separate jars for Spark 1.x and 2.x and pick the right jar according to the Spark version detected at runtime.
- The other solution is to type SparkSession as Object in the Job API, which means the user has to do the type conversion manually.
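A minimal sketch of what the Object-typed approach would look like from the caller's side. Everything here is a hypothetical stand-in (plain Java, compiled without Spark) rather than the actual Livy or Spark types; the point is only to show where the manual cast lands:

```java
// Hypothetical stand-in for org.apache.spark.sql.SparkSession, so this
// sketch compiles without a Spark 2.x dependency.
class StandInSparkSession {
    String version() { return "2.0.0"; }
}

// Hypothetical shape of the accessor: typed as Object so the interface
// compiles against both Spark 1.x and 2.x classpaths.
interface VersionAgnosticJobContext {
    Object sparkSession();
}

public class ObjectEntryPointDemo {
    public static void main(String[] args) {
        VersionAgnosticJobContext jc = () -> new StandInSparkSession();

        // The cast is the user's burden; on a Spark 1.x cluster this
        // accessor could return null or throw instead.
        StandInSparkSession session = (StandInSparkSession) jc.sparkSession();
        System.out.println("Spark version: " + session.version());
    }
}
```

The drawback is visible immediately: the compiler can no longer check the type, and every job that wants a SparkSession repeats the cast.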
Neither solution is particularly elegant, so it would be great to hear suggestions and comments on how to handle this issue in the Job API.