Submitting Spark jobs on Mesos always fails. What is the cause?
I1207 08:38:23.546583 20134 fetcher.cpp:414] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/a44359f1-5311-4f7f-bc67-167d52972057-S7\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/10.0.0.103:9000\/app\/spark-1.5.2-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/tmp\/mesos\/slaves\/a44359f1-5311-4f7f-bc67-167d52972057-S7\/frameworks\/b5fbb3e7-80a5-4666-a9fa-86b39ffe487b-0006\/executors\/a44359f1-5311-4f7f-bc67-167d52972057-S7\/runs\/633f82fa-0d75-4136-be28-1c2e7f5406f2","user":"root"}
I1207 08:38:23.548233 20134 fetcher.cpp:369] Fetching URI 'hdfs://10.0.0.103:9000/app/spark-1.5.2-bin-hadoop2.6.tgz'
I1207 08:38:23.548249 20134 fetcher.cpp:243] Fetching directly into the sandbox directory
I1207 08:38:23.548264 20134 fetcher.cpp:180] Fetching URI 'hdfs://10.0.0.103:9000/app/spark-1.5.2-bin-hadoop2.6.tgz'
E1207 08:38:23.551007 20134 shell.hpp:90] Command 'hadoop version 2>&1' failed; this is the output:
sh: hadoop: command not found
Failed to fetch 'hdfs://10.0.0.103:9000/app/spark-1.5.2-bin-hadoop2.6.tgz': Skipping fetch with Hadoop client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127
Failed to synchronize with slave (it's probably exited)
==================================
15/12/07 08:38:23 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 10.0.0.103): UnknownReason
15/12/07 08:38:23 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 10.0.0.172): UnknownReason
1 answer
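The log itself points at the likely cause: to fetch an `hdfs://` URI, the Mesos fetcher shells out to `hadoop version` on the agent host, and the agent replies `sh: hadoop: command not found` with exit status 127, meaning no `hadoop` binary is on the agent's PATH. The sketch below reproduces the fetcher's check and outlines a fix; the install path `/opt/hadoop-2.6.0` is an assumption for illustration, not from the original post.

```shell
# Reproduce the Mesos fetcher's Hadoop-client probe on the agent host.
# Exit status 127 means sh could not find the `hadoop` binary.
sh -c 'hadoop version 2>&1'
echo "hadoop probe exit status: $?"

# Fix sketch (path is an assumption -- adjust to your installation):
# install the Hadoop client on EVERY Mesos agent, then expose it, e.g.
#   export HADOOP_HOME=/opt/hadoop-2.6.0
#   export PATH=$PATH:$HADOOP_HOME/bin
# or tell the agent where Hadoop lives when starting it:
#   mesos-slave --master=... --hadoop_home=/opt/hadoop-2.6.0
```

Once every agent can run `hadoop version` successfully, the fetcher can download `spark-1.5.2-bin-hadoop2.6.tgz` from HDFS and the tasks should stop being lost at fetch time. An alternative that avoids the Hadoop-client dependency entirely is to host the Spark tarball over HTTP and point `spark.executor.uri` at that URL instead.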