Description
When a query has more than one root stage and one of its jobs is killed from the Hadoop admin page, the Beeswax process exits.
For example, with a query like this:
select a.* from (select distinct sn from test) a JOIN (select distinct sn from user) b on (a.sn = b.sn) ;
Is this the wrong way to use Hive, or is it a Hive bug?
In Hive's Driver.java, in execute():
} else {
    // TODO: This error messaging is not very informative. Fix that.
    errorMessage = "FAILED: Execution Error, return code " + exitVal
        + " from " + tsk.getClass().getName();
    SQLState = "08S01";
    console.printError(errorMessage);
    if (running.size() != 0) {
        taskCleanup();   // !!!!! here will call System.exit(9)
    }
    return 9;
}
/**
 * Cleans up remaining tasks in case of failure
 */
public void taskCleanup() {
    // The currently existing Shutdown hooks will be automatically called,
    // killing the map-reduce processes.
    // The non MR processes will be killed as well.
    System.exit(9);
}
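To make the impact concrete, here is a minimal, hypothetical sketch of a long-running service that embeds the Hive Driver the way Beeswax does (the class name EmbeddedHiveService and the exact setup calls are assumptions, roughly following the old embedded-Driver API). Because taskCleanup() calls System.exit(9), one failed or externally killed query takes down the whole JVM, not just that query:

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.Driver;
    import org.apache.hadoop.hive.ql.session.SessionState;

    public class EmbeddedHiveService {
        public static void main(String[] args) throws Exception {
            HiveConf conf = new HiveConf(EmbeddedHiveService.class);
            SessionState.start(new SessionState(conf));

            Driver driver = new Driver(conf);
            // The multi-root-stage query from this report. If one of its MR jobs
            // is killed from the Hadoop admin page, execute() takes the error
            // branch shown above, calls taskCleanup(), and System.exit(9)
            // terminates this entire service process.
            driver.run("select a.* from (select distinct sn from test) a "
                + "JOIN (select distinct sn from user) b on (a.sn = b.sn)");

            // This line, and every other query the service would handle,
            // is never reached if taskCleanup() exited the JVM.
            System.out.println("Service still alive");
        }
    }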
I could simply extend Driver and override the taskCleanup() method to fix this, but then the other map-reduce processes would not be killed (a sketch of this workaround follows below).
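A minimal sketch of that workaround, assuming taskCleanup() remains public and non-final as in the quoted source (the class name NonExitingDriver is hypothetical):

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.Driver;

    public class NonExitingDriver extends Driver {
        public NonExitingDriver(HiveConf conf) {
            super(conf);
        }

        @Override
        public void taskCleanup() {
            // Do NOT call System.exit(9); let the embedding process (Beeswax)
            // keep running. The trade-off is exactly the one noted above: the
            // JVM shutdown hooks no longer fire, so the remaining map-reduce
            // jobs of this query are not killed and would have to be stopped
            // some other way, e.g. manually via `hadoop job -kill <job_id>`.
            System.err.println("Query failed; skipping System.exit(9) so the service survives.");
        }
    }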