Hadoop map-side timeout parameters
Published: 2019-06-12


 

Recently a machine in the cluster got stuck, causing a large number of map-side tasks to FAIL. When we tracked it down to the specific machine, we either could not ssh in at all, or the terminal was unresponsive after logging in. The session was eventually dropped with:

[hadoop@xxx ~]$ Received disconnect from xxx: Timeout, your session not responding.
 
The error log of the failed task attempt:
AttemptID:attempt_1413206225298_24177_m_000001_0 Timed out after 1200 secs
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
 
Some digging showed that the 1200 secs comes from the configured parameter mapreduce.task.timeout.
The official description of mapreduce.task.timeout:
The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. A value of 0 disables the timeout.
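For reference, this timeout can also be overridden per job through the standard Configuration API (the value is in milliseconds; 0 disables the check). A minimal sketch; the class name and job name below are just for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TimeoutConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 20 minutes, matching the "1200 secs" in the error above; 0 disables the check
    conf.setLong("mapreduce.task.timeout", 20 * 60 * 1000L);
    Job job = Job.getInstance(conf, "timeout-demo");
    System.out.println(job.getConfiguration().get("mapreduce.task.timeout"));
  }
}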
 
Browsing the Hadoop 2.2.0 source code, the class TaskHeartbeatHandler runs as a standalone thread. It periodically checks all currently running TaskAttempts, at an interval of mapreduce.task.timeout.check-interval-ms (30s by default):
Map.Entry<TaskAttemptId, ReportTime> entry = iterator.next();
boolean taskTimedOut = (taskTimeOut > 0) &&
    (currentTime > (entry.getValue().getLastProgress() + taskTimeOut));

if (taskTimedOut) {
  // task is lost, remove from the list and raise lost event
  iterator.remove();
  eventHandler.handle(new TaskAttemptDiagnosticsUpdateEvent(entry
      .getKey(), "AttemptID:" + entry.getKey().toString()
      + " Timed out after " + taskTimeOut / 1000 + " secs"));
  eventHandler.handle(new TaskAttemptEvent(entry.getKey(),
      TaskAttemptEventType.TA_TIMED_OUT));
}
 
If it detects that a task attempt has not reported progress within the allowed interval (mapreduce.task.timeout), the attempt is considered failed and a TA_TIMED_OUT event is sent, notifying the ApplicationMaster to kill the attempt.
An attempt's progress is reported to this thread periodically via the progressing method:
  
public void progressing(TaskAttemptId attemptID) {
  //only put for the registered attempts
  //TODO throw an exception if the task isn't registered.
  ReportTime time = runningAttempts.get(attemptID);
  if (time != null) {
    time.setLastProgress(clock.getTime());
  }
}
 
 
The progress-reporting method is invoked from the TaskAttemptListenerImpl class; at each stage of a task, progress information is reported to the ApplicationMaster. We won't dig into the finer details here.
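To make the mechanism concrete, here is a minimal, self-contained sketch of the same watchdog pattern: a daemon thread scans last-progress timestamps and flags stale attempts. This is an illustration only, not the actual Hadoop classes; AttemptWatchdog and the String attempt IDs are made up:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AttemptWatchdog {
  private final Map<String, Long> lastProgress = new ConcurrentHashMap<>();
  private final long taskTimeoutMs;   // plays the role of mapreduce.task.timeout
  private final long checkIntervalMs; // plays the role of mapreduce.task.timeout.check-interval-ms

  public AttemptWatchdog(long taskTimeoutMs, long checkIntervalMs) {
    this.taskTimeoutMs = taskTimeoutMs;
    this.checkIntervalMs = checkIntervalMs;
  }

  // Called whenever an attempt reports progress (cf. progressing above).
  public void progressing(String attemptId) {
    lastProgress.put(attemptId, System.currentTimeMillis());
  }

  public void start() {
    Thread checker = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        long now = System.currentTimeMillis();
        // Remove and report attempts whose last progress is too old;
        // Hadoop raises a TA_TIMED_OUT event at this point instead.
        lastProgress.entrySet().removeIf(e -> {
          boolean timedOut = taskTimeoutMs > 0
              && now > e.getValue() + taskTimeoutMs;
          if (timedOut) {
            System.err.println("AttemptID:" + e.getKey()
                + " Timed out after " + taskTimeoutMs / 1000 + " secs");
          }
          return timedOut;
        });
        try {
          Thread.sleep(checkIntervalMs);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        }
      }
    }, "attempt-watchdog");
    checker.setDaemon(true);
    checker.start();
  }
}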
 
Usually this error shows up while our jobs are running, in which case we need to check which resource limits are preventing the task from making progress.
A Cloudera article explains how to avoid the problem:

 Report progress

If your task reports no progress for 10 minutes (see the mapred.task.timeout property) then it will be killed by Hadoop. Most tasks don’t encounter this situation since they report progress implicitly by reading input and writing output. However, some jobs which don’t process records in this way may fall foul of this behavior and have their tasks killed. Simulations are a good example, since they do a lot of CPU-intensive processing in each map and typically only write the result at the end of the computation. They should be written in such a way as to report progress on a regular basis (more frequently than every 10 minutes). This may be achieved in a number of ways:

  • Call setStatus() on Reporter to set a human-readable description of
    the task’s progress
  • Call incrCounter() on Reporter to increment a user counter
  • Call progress() on Reporter to tell Hadoop that your task is still there (and making progress)
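For illustration, a CPU-bound map task could keep itself alive as described above. A minimal sketch using the classic org.apache.hadoop.mapred API, to match the Reporter methods quoted; SimulationMapper and runOneSimulationStep are hypothetical names:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SimulationMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  public void map(LongWritable key, Text value,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    for (int step = 0; step < 1000; step++) {
      runOneSimulationStep(value);            // hypothetical CPU-heavy work
      if (step % 10 == 0) {
        reporter.progress();                  // resets the task-timeout clock
        reporter.setStatus("simulation step " + step);
        reporter.incrCounter("sim", "steps", 10);
      }
    }
    output.collect(new Text("result"), value); // emit only at the end
  }

  private void runOneSimulationStep(Text value) {
    // placeholder for the real computation
  }
}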

However, that was not the end of it: every so often a task in the cluster would hang at some point and be unable to continue:

 

"main" prio=10 tid=0x000000000293f000 nid=0x1e06 runnable [0x0000000041b20000]   java.lang.Thread.State: RUNNABLEat sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:81)at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)- locked <0x00000006e243c3f0> (a sun.nio.ch.Util$2)- locked <0x00000006e243c3e0> (a java.util.Collections$UnmodifiableSet)- locked <0x00000006e243c1a0> (a sun.nio.ch.EPollSelectorImpl)at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 

 

 

Reading the source to analyze this, we find the doIO method of SocketIOWithTimeout, around line 157:
//now wait for socket to be ready.
int count = 0;
try {
  count = selector.select(channel, ops, timeout);
} catch (IOException e) { //unexpected IOException.
  closed = true;
  throw e;
}

if (count == 0) {
  throw new SocketTimeoutException(timeoutExceptionString(channel,
                                                          timeout, ops));
}
 
When the timeout elapses without any data having been read, the error below is thrown:
 
Error: java.net.SocketTimeoutException: 70000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=xxx remote=/xxx]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1490)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:962)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:930)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
 
The timeout here is derived from dfs.client.socket-timeout; if no data at all is read within that window, this exception is thrown.
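For completeness, this client-side timeout can be tuned through the same key (in milliseconds); whether raising it actually helps depends on why the DataNode is slow. A hedged sketch using the standard Configuration API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsClientTimeoutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 70000 ms matches the "70000 millis timeout" in the stack trace above
    conf.setInt("dfs.client.socket-timeout", 70000);
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.exists(new Path("/")));
  }
}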
Stepping into the SocketIOWithTimeout.select method, we find that it runs a polling loop:
while (true) {
  long start = (timeout == 0) ? 0 : Time.now();

  key = channel.register(info.selector, ops);
  ret = info.selector.select(timeout);

  if (ret != 0) {
    return ret;
  }

  /* Sometimes select() returns 0 much before timeout for
   * unknown reasons. So select again if required.
   */
  if (timeout > 0) {
    timeout -= Time.now() - start;
    if (timeout <= 0) {
      return 0;
    }
  }

  if (Thread.currentThread().isInterrupted()) {
    throw new InterruptedIOException("Interruped while waiting for " +
                                     "IO on channel " + channel +
                                     ". " + timeout +
                                     " millis timeout left.");
  }
}
 
Since we are reading data here, ops is normally SelectionKey.OP_READ, and the timeout we set is non-zero, so the loop keeps selecting until a total of timeout milliseconds has elapsed. The comment "Sometimes select() returns 0 much before timeout for unknown reasons. So select again if required." is rather vague; apparently some NIO quirks have never been fully pinned down.
 
Here is the Javadoc for the method:
java.nio.channels.Selector

public abstract int select(long timeout) throws java.io.IOException

Selects a set of keys whose corresponding channels are ready for I/O operations.
This method performs a blocking selection operation. It returns only after at least one channel is selected, this selector's wakeup method is invoked, the current thread is interrupted, or the given timeout period expires, whichever comes first.
 
The select method returns only when one of the following three events occurs:
  • At least one registered channel is selected; the return value is the number of selected channels;
  • The selector is woken up or the current thread is interrupted;
  • The given timeout period expires.

 

If the thread had been interrupted, an InterruptedIOException would have been thrown, so the only remaining possibility here is that the timeout expired: select returned ret = 0, which leads to the exception above.
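The timeout case is easy to reproduce in isolation. A tiny standalone demo (plain NIO, nothing Hadoop-specific): with no channels registered, select blocks for the full timeout and returns 0:

import java.nio.channels.Selector;

public class SelectTimeoutDemo {
  public static void main(String[] args) throws Exception {
    try (Selector selector = Selector.open()) {
      long start = System.currentTimeMillis();
      int ready = selector.select(1000); // nothing registered, so this times out
      System.out.println("ready=" + ready + " after "
          + (System.currentTimeMillis() - start) + " ms"); // ready=0, ~1000 ms
    }
  }
}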

But that still wasn't the end of it: does a timeout really not trigger a retry? How many times will it retry?

 

Further analysis shows that DFSInputStream, further down the stack, calls the readBuffer method. There we can see retryCurrentNode: after the first failure, the IOException is caught and the necessary retry is performed. If the read times out again, the DataNode is added to the dead-node list as a failed node (so it presumably won't be retried next time) and the client moves on to another DataNode (via the seekToNewSource method). Only after several such rounds is the IOException actually rethrown.

 
try {
  return reader.doRead(blockReader, off, len, readStatistics);
} catch ( ChecksumException ce ) {
  DFSClient.LOG.warn("Found Checksum error for "
      + getCurrentBlock() + " from " + currentNode
      + " at " + ce.getPos());
  ioe = ce;
  retryCurrentNode = false;
  // we want to remember which block replicas we have tried
  addIntoCorruptedBlockMap(getCurrentBlock(), currentNode,
      corruptedBlockMap);
} catch ( IOException e ) {
  if (!retryCurrentNode) {
    DFSClient.LOG.warn("Exception while reading from "
        + getCurrentBlock() + " of " + src + " from "
        + currentNode, e);
  }
  ioe = e;
}

boolean sourceFound = false;
if (retryCurrentNode) {
  /* possibly retry the same node so that transient errors don't
   * result in application level failures (e.g. Datanode could have
   * closed the connection because the client is idle for too long).
   */
  sourceFound = seekToBlockSource(pos);
} else {
  addToDeadNodes(currentNode);
  sourceFound = seekToNewSource(pos);
}
if (!sourceFound) {
  throw ioe;
}
retryCurrentNode = false;
}
 
In short, there are still plenty of open questions in this area; the investigation continues...

Reposted from: https://www.cnblogs.com/mmaa/p/5789901.html
