sqoop-user mailing list archives

From Juan Martin Pampliega <jpampli...@gmail.com>
Subject Passing mapreduce options when executing saved job
Date Thu, 19 Mar 2015 15:12:18 GMT
Hi,

I am executing a saved job in the following way:

sqoop job \
  -D mapreduce.task.timeout=0 \
  -D mapreduce.map.maxattempts=8 \
  --exec ${JOB_NAME} \
  --meta-connect ${SQOOP_METASTORE} \
  -- --hive-partition-value "${HIVE_PARTITION}"

The job starts ok, but it does not apply the supplied mapreduce options.

For example, the map tasks fail with a timeout of 600 seconds, which is the
Hadoop default, rather than running with no timeout as
mapreduce.task.timeout=0 implies.
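
As far as I can tell from the docs, "sqoop job --show" lists the parameters
the metastore stored for a saved job, so something like the following should
reveal whether the mapreduce properties were captured in the job definition
(I haven't confirmed that is where they would show up):

sqoop job \
  --meta-connect ${SQOOP_METASTORE} \
  --show ${JOB_NAME}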

When I run the same import directly, without saving it as a job, the options
are applied correctly and it finishes fine. The problem is that I need an
incremental import keyed on the table's primary key, so the job has to be
saved in the metastore (see the sketch below).
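
If the exec-time -D flags really are ignored for saved jobs, I suppose I
could try baking them into the job definition at creation time, assuming the
stored configuration is what the executed job picks up and that generic -D
options go right after the import tool name, as in a direct invocation. A
rough sketch (the connect string, table, partition key, and check column
below are placeholders, not my real setup):

sqoop job \
  --meta-connect ${SQOOP_METASTORE} \
  --create ${JOB_NAME} \
  -- import \
  -D mapreduce.task.timeout=0 \
  -D mapreduce.map.maxattempts=8 \
  --connect jdbc:mysql://db.example.com/mydb \
  --table my_table \
  --hive-import \
  --hive-partition-key dt \
  --incremental append \
  --check-column id \
  --last-value 0

But I'd still prefer to understand why the options passed at --exec time
don't take effect.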

Any ideas on how to fix this?

Cheers,
Juan.
