When your YARN NodeManager does not start for no apparent reason (for example with the stack trace below), the solution is simple: check the free space on your disk. The NodeManager's disk health checker marks a disk as unhealthy when it is more than 90 % full (i.e. less than 10 % free), and the NodeManager may then not work properly, or at all.
2016-03-17 17:23:16,426 FATAL event.AsyncDispatcher (AsyncDispatcher.java:dispatch(181)) - Error in dispatcher thread
java.lang.NullPointerException
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:345)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:449)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.addResource(ResourceLocalizationService.java:797)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:704)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:646)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:175)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:108)
	at java.lang.Thread.run(Thread.java:745)
Source: my very own experience :)
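You can check whether a disk would trip the health checker before restarting the NodeManager. The sketch below mimics the check with `df`; the `check_disk_health` function name and the 90 % threshold are illustrative assumptions, not actual YARN code:

```shell
# Sketch: reproduce the NodeManager disk health check with df.
# check_disk_health <mount-point> <max-utilization-percent>
check_disk_health() {
  local mount="$1" threshold="$2"
  # df -P (POSIX format) prints the usage percentage in column 5, e.g. "87%"
  local used
  used=$(df -P "$mount" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
  if [ "$used" -ge "$threshold" ]; then
    echo "UNHEALTHY: $mount at ${used}% used (limit ${threshold}%)"
  else
    echo "OK: $mount at ${used}% used (limit ${threshold}%)"
  fi
}

# Check the local dirs the NodeManager writes to (path is an example;
# use your yarn.nodemanager.local-dirs locations).
check_disk_health / 90
```

If this reports UNHEALTHY for any of the NodeManager's local or log directories, free up space there before digging into the logs.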
God bless you. After spending hours troubleshooting, with only the vague error message from AllocatorPerContext.getLocalPathForWrite to go on, I had suspected an "insufficient space" issue. Reading your post confirming it was a relief. Thanks.
Changing the YARN parameters to extend the space limit (although not recommended) fixed it:
yarn.nodemanager.disk-health-checker.min-healthy-disks => 0.01
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage => 99
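These two properties are set in yarn-site.xml on the NodeManager hosts. A sketch of the override with the values from the comment above (the defaults are 0.25 and 90 in most Hadoop releases; verify against your version, and note that freeing disk space is the real fix):

```xml
<!-- yarn-site.xml: relax the NodeManager disk health checks.
     Not recommended as a long-term fix. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
  <value>0.01</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>99</value>
</property>
```

A NodeManager restart is needed for the change to take effect.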