
Conversation

@amahussein
Contributor

NOTICE

Please create an issue in ASF JIRA before opening a pull request,
and set the pull request title to start with
the corresponding JIRA issue number (e.g. HADOOP-XXXXX. Fix a typo in YYY.).
For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 4s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test file.
_ trunk Compile Tests _
+1 💚 mvninstall 32m 25s trunk passed
+1 💚 compile 1m 19s trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 compile 1m 10s trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 checkstyle 0m 46s trunk passed
+1 💚 mvnsite 1m 19s trunk passed
+1 💚 shadedclient 18m 46s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 53s trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javadoc 1m 19s trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+0 🆗 spotbugs 3m 12s Used deprecated FindBugs config; consider switching to SpotBugs.
+1 💚 findbugs 3m 10s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 1m 12s the patch passed
+1 💚 compile 1m 13s the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javac 1m 13s the patch passed
+1 💚 compile 1m 7s the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 javac 1m 7s the patch passed
-0 ⚠️ checkstyle 0m 43s /diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 mvnsite 1m 26s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 shadedclient 16m 51s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 50s the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javadoc 1m 22s the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 findbugs 3m 22s the patch passed
_ Other Tests _
-1 ❌ unit 122m 44s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
+1 💚 asflicense 0m 39s The patch does not generate ASF License warnings.
215m 26s
Reason Tests
Failed junit tests hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestDFSClientRetries
hadoop.hdfs.TestMultipleNNPortQOP
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/1/artifact/out/Dockerfile
GITHUB PR #2408
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux b2149aa4f159 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 74634eb
Default Java Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/1/testReport/
Max. process+thread count 2826 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/1/console
versions git=2.17.1 maven=3.6.0 findbugs=4.1.3
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.
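
The failed units can be reproduced locally before digging into the reports; a minimal sketch, assuming a trunk checkout with Maven installed (the module path and test names are taken from the report above):

    cd hadoop-hdfs-project/hadoop-hdfs
    mvn test -Dtest=TestFileChecksum,TestFileChecksumCompositeCrc,TestDFSClientRetries,TestMultipleNNPortQOP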

@aajisaka
Member

hadoop.hdfs.TestFileChecksumCompositeCrc and hadoop.hdfs.TestFileChecksum are still failing. Are the failures related to the patch?

@amahussein
Contributor Author

hadoop.hdfs.TestFileChecksumCompositeCrc and hadoop.hdfs.TestFileChecksum are still failing. Are the failures related to the patch?

Thank you Akira.
It is not due to the patch. The test probably needs some configuration to set the timeout for pending reconstruction blocks. I will investigate the stack trace and see if I can fix it.
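
A minimal sketch of the kind of configuration change this would involve, assuming the pending-reconstruction timeout key from DFSConfigKeys applies here (the 3-second value is illustrative, not the actual fix):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Sketch only: shorten the pending-reconstruction timeout so that blocks
    // whose reconstruction stalls are re-queued quickly during the test run.
    Configuration conf = new Configuration();
    conf.setInt("dfs.namenode.reconstruction.pending.timeout-sec", 3); // illustrative value
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(11)  // matches numDataNodes=11 in the log below
        .build();
    cluster.waitActive();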

 org.apache.hadoop.hdfs.TestFileChecksumCompositeCrc.testStripedFileChecksumWithMissedDataBlocksRangeQuery15
 Error Details

`/striped/stripedFileChecksum1': Fail to get block checksum for LocatedStripedBlock{BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775792_1001; getBlockSize()=37748736; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[127.0.0.1:41259,DS-211baa4b-9658-4792-9eb9-971650316b65,DISK], DatanodeInfoWithStorage[127.0.0.1:42903,DS-c7b78dc0-e09e-4368-bc48-7e2d148acb2f,DISK], DatanodeInfoWithStorage[127.0.0.1:35419,DS-4697e810-71cb-44b4-a4f0-e8c105a0d30b,DISK], DatanodeInfoWithStorage[127.0.0.1:39685,DS-c210f999-d67c-490d-b675-e286b0abd6ee,DISK], DatanodeInfoWithStorage[127.0.0.1:44039,DS-51a33914-b87e-43ef-ae14-13caa38ff319,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-e9764209-9d9c-4f2f-843c-183cb415a2ec,DISK], DatanodeInfoWithStorage[127.0.0.1:35165,DS-9b8fb76f-5d5f-412e-b148-c482489342d9,DISK], DatanodeInfoWithStorage[127.0.0.1:44431,DS-dc88f6f1-a3e5-407e-84a7-6e455bb6fdd1,DISK]]; indices=[0, 1, 2, 3, 4, 6, 7, 8]}
 Stack Trace

org.apache.hadoop.fs.PathIOException: `/striped/stripedFileChecksum1': Fail to get block checksum for LocatedStripedBlock{BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775792_1001; getBlockSize()=37748736; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[127.0.0.1:41259,DS-211baa4b-9658-4792-9eb9-971650316b65,DISK], DatanodeInfoWithStorage[127.0.0.1:42903,DS-c7b78dc0-e09e-4368-bc48-7e2d148acb2f,DISK], DatanodeInfoWithStorage[127.0.0.1:35419,DS-4697e810-71cb-44b4-a4f0-e8c105a0d30b,DISK], DatanodeInfoWithStorage[127.0.0.1:39685,DS-c210f999-d67c-490d-b675-e286b0abd6ee,DISK], DatanodeInfoWithStorage[127.0.0.1:44039,DS-51a33914-b87e-43ef-ae14-13caa38ff319,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-e9764209-9d9c-4f2f-843c-183cb415a2ec,DISK], DatanodeInfoWithStorage[127.0.0.1:35165,DS-9b8fb76f-5d5f-412e-b148-c482489342d9,DISK], DatanodeInfoWithStorage[127.0.0.1:44431,DS-dc88f6f1-a3e5-407e-84a7-6e455bb6fdd1,DISK]]; indices=[0, 1, 2, 3, 4, 6, 7, 8]}
	at org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:640)
	at org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
	at org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1851)
	at org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1871)
	at org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1903)
	at org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1900)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1917)
	at org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:592)
	at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:306)
	at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery15(TestFileChecksum.java:476)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
 Standard Output

2020-10-23 22:41:06,342 [Listener at localhost/37463] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(529)) - starting cluster: numNameNodes=1, numDataNodes=11
2020-10-23 22:41:06,345 [Listener at localhost/37463] INFO  namenode.NameNode (NameNode.java:format(1249)) - Formatting using clusterid: testClusterID
2020-10-23 22:41:06,345 [Listener at localhost/37463] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(229)) - Edit logging is async:true
2020-10-23 22:41:06,354 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(828)) - KeyProvider: null
2020-10-23 22:41:06,354 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(131)) - fsLock is fair: true
2020-10-23 22:41:06,354 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(149)) - Detailed lock hold time metrics enabled: false
2020-10-23 22:41:06,355 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(861)) - fsOwner                = jenkins (auth:SIMPLE)
2020-10-23 22:41:06,355 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(862)) - supergroup             = supergroup
2020-10-23 22:41:06,355 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(863)) - isPermissionEnabled    = true
2020-10-23 22:41:06,355 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(864)) - isStoragePolicyEnabled = true
2020-10-23 22:41:06,355 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(875)) - HA Enabled: false
2020-10-23 22:41:06,356 [Listener at localhost/37463] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:06,356 [Listener at localhost/37463] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(310)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-10-23 22:41:06,356 [Listener at localhost/37463] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(318)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2020-10-23 22:41:06,356 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(77)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-10-23 22:41:06,357 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(83)) - The block deletion will start around 2020 Oct 23 22:41:06
2020-10-23 22:41:06,357 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2020-10-23 22:41:06,357 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,357 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.9 GB = 38.8 MB
2020-10-23 22:41:06,357 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2020-10-23 22:41:06,360 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createSPSManager(5439)) - Storage policy satisfier is disabled
2020-10-23 22:41:06,361 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(641)) - dfs.block.access.token.enable = true
2020-10-23 22:41:06,361 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(663)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2020-10-23 22:41:06,361 [Listener at localhost/37463] INFO  block.BlockTokenSecretManager (BlockTokenSecretManager.java:<init>(154)) - Block token key range: [0, 2147483647)
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(160)) - dfs.namenode.safemode.threshold-pct = 0.999
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.min.datanodes = 0
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(163)) - dfs.namenode.safemode.extension = 0
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(627)) - defaultReplication         = 3
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(628)) - maxReplication             = 512
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(629)) - minReplication             = 1
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(630)) - maxReplicationStreams      = 0
2020-10-23 22:41:06,362 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(631)) - redundancyRecheckInterval  = 3000ms
2020-10-23 22:41:06,363 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(632)) - encryptDataTransfer        = false
2020-10-23 22:41:06,363 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(633)) - maxNumBlocksToLog          = 1000
2020-10-23 22:41:06,363 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2020-10-23 22:41:06,363 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,364 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.9 GB = 19.4 MB
2020-10-23 22:41:06,364 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2020-10-23 22:41:06,366 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(335)) - ACLs enabled? true
2020-10-23 22:41:06,366 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(339)) - POSIX ACL inheritance enabled? true
2020-10-23 22:41:06,366 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(343)) - XAttrs enabled? true
2020-10-23 22:41:06,366 [Listener at localhost/37463] INFO  namenode.NameNode (FSDirectory.java:<init>(406)) - Caching file names occurring more than 10 times
2020-10-23 22:41:06,366 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (SnapshotManager.java:<init>(162)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotFSLimit: 65536, maxSnapshotLimit: 65536
2020-10-23 22:41:06,367 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (SnapshotManager.java:<init>(175)) - dfs.namenode.snapshot.deletion.ordered = false
2020-10-23 22:41:06,367 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2020-10-23 22:41:06,367 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2020-10-23 22:41:06,367 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,368 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.9 GB = 4.8 MB
2020-10-23 22:41:06,368 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2020-10-23 22:41:06,368 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-10-23 22:41:06,369 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2020-10-23 22:41:06,369 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-10-23 22:41:06,369 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(1105)) - Retry cache on namenode is enabled
2020-10-23 22:41:06,369 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(1113)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-10-23 22:41:06,370 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2020-10-23 22:41:06,370 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,370 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.9 GB = 595.2 KB
2020-10-23 22:41:06,370 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2020-10-23 22:41:06,371 [Listener at localhost/37463] INFO  namenode.FSImage (FSImage.java:format(185)) - Allocated new BlockPoolId: BP-1233294053-172.17.0.2-1603492866371
2020-10-23 22:41:06,466 [Listener at localhost/37463] INFO  common.Storage (NNStorage.java:format(595)) - Storage directory /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 has been successfully formatted.
2020-10-23 22:41:06,543 [Listener at localhost/37463] INFO  common.Storage (NNStorage.java:format(595)) - Storage directory /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 has been successfully formatted.
2020-10-23 22:41:06,557 [FSImageSaver for /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(717)) - Saving image file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
2020-10-23 22:41:06,559 [FSImageSaver for /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(717)) - Saving image file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
2020-10-23 22:41:06,583 [FSImageSaver for /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(721)) - Image file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 454 bytes saved in 0 seconds .
2020-10-23 22:41:06,583 [FSImageSaver for /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(721)) - Image file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 454 bytes saved in 0 seconds .
2020-10-23 22:41:06,638 [Listener at localhost/37463] INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1 images with txid >= 0
2020-10-23 22:41:06,639 [Listener at localhost/37463] INFO  namenode.NameNode (NameNode.java:createNameNode(1700)) - createNameNode []
2020-10-23 22:41:06,642 [Listener at localhost/37463] INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(120)) - Loaded properties from hadoop-metrics2.properties
2020-10-23 22:41:06,649 [Listener at localhost/37463] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(378)) - Scheduled Metric snapshot period at 0 second(s).
2020-10-23 22:41:06,649 [Listener at localhost/37463] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2020-10-23 22:41:06,651 [Listener at localhost/37463] INFO  namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://127.0.0.1:0
2020-10-23 22:41:06,666 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@6d6bff89] INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2020-10-23 22:41:06,668 [Listener at localhost/37463] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1668)) - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2020-10-23 22:41:06,669 [Listener at localhost/37463] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1690)) - Starting Web-server for hdfs at: http://localhost:0
2020-10-23 22:41:06,669 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:06,671 [Listener at localhost/37463] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-23 22:41:06,672 [Listener at localhost/37463] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(82)) - Http request log for http.requests.namenode is not defined
2020-10-23 22:41:06,672 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:06,674 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(1059)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-10-23 22:41:06,675 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1032)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-10-23 22:41:06,675 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-10-23 22:41:06,675 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-10-23 22:41:06,677 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1032)) - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
2020-10-23 22:41:06,677 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context logs
2020-10-23 22:41:06,677 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
2020-10-23 22:41:06,678 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(886)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-10-23 22:41:06,679 [Listener at localhost/37463] INFO  http.HttpServer2 (HttpServer2.java:bindListener(1280)) - Jetty bound to port 42767
2020-10-23 22:41:06,679 [Listener at localhost/37463] INFO  server.Server (Server.java:doStart(359)) - jetty-9.4.20.v20190813; built: 2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
2020-10-23 22:41:06,693 [Listener at localhost/37463] INFO  server.session (DefaultSessionIdManager.java:doStart(333)) - DefaultSessionIdManager workerName=node0
2020-10-23 22:41:06,693 [Listener at localhost/37463] INFO  server.session (DefaultSessionIdManager.java:doStart(338)) - No SessionScavenger set, using defaults
2020-10-23 22:41:06,693 [Listener at localhost/37463] INFO  server.session (HouseKeeper.java:startScavenging(140)) - node0 Scavenging every 600000ms
2020-10-23 22:41:06,695 [Listener at localhost/37463] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-23 22:41:06,696 [Listener at localhost/37463] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.s.ServletContextHandler@61149fa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2020-10-23 22:41:06,696 [Listener at localhost/37463] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.s.ServletContextHandler@f2d890c{static,/static,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,AVAILABLE}
2020-10-23 22:41:06,703 [Listener at localhost/37463] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.w.WebAppContext@236fdf{hdfs,/,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs}
2020-10-23 22:41:06,706 [Listener at localhost/37463] INFO  server.AbstractConnector (AbstractConnector.java:doStart(293)) - Started ServerConnector@49c099b{HTTP/1.1,[http/1.1]}{localhost:42767}
2020-10-23 22:41:06,706 [Listener at localhost/37463] INFO  server.Server (Server.java:doStart(399)) - Started @61146ms
2020-10-23 22:41:06,708 [Listener at localhost/37463] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(229)) - Edit logging is async:true
2020-10-23 22:41:06,720 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(828)) - KeyProvider: null
2020-10-23 22:41:06,720 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(131)) - fsLock is fair: true
2020-10-23 22:41:06,721 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(149)) - Detailed lock hold time metrics enabled: false
2020-10-23 22:41:06,721 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(861)) - fsOwner                = jenkins (auth:SIMPLE)
2020-10-23 22:41:06,721 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(862)) - supergroup             = supergroup
2020-10-23 22:41:06,721 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(863)) - isPermissionEnabled    = true
2020-10-23 22:41:06,721 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(864)) - isStoragePolicyEnabled = true
2020-10-23 22:41:06,722 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(875)) - HA Enabled: false
2020-10-23 22:41:06,722 [Listener at localhost/37463] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:06,722 [Listener at localhost/37463] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(310)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-10-23 22:41:06,723 [Listener at localhost/37463] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(318)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2020-10-23 22:41:06,723 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(77)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-10-23 22:41:06,724 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(83)) - The block deletion will start around 2020 Oct 23 22:41:06
2020-10-23 22:41:06,724 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2020-10-23 22:41:06,724 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,724 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.9 GB = 38.8 MB
2020-10-23 22:41:06,725 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2020-10-23 22:41:06,728 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createSPSManager(5439)) - Storage policy satisfier is disabled
2020-10-23 22:41:06,728 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(641)) - dfs.block.access.token.enable = true
2020-10-23 22:41:06,728 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(663)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2020-10-23 22:41:06,729 [Listener at localhost/37463] INFO  block.BlockTokenSecretManager (BlockTokenSecretManager.java:<init>(154)) - Block token key range: [0, 2147483647)
2020-10-23 22:41:06,729 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(160)) - dfs.namenode.safemode.threshold-pct = 0.999
2020-10-23 22:41:06,729 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.min.datanodes = 0
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(163)) - dfs.namenode.safemode.extension = 0
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(627)) - defaultReplication         = 3
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(628)) - maxReplication             = 512
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(629)) - minReplication             = 1
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(630)) - maxReplicationStreams      = 0
2020-10-23 22:41:06,730 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(631)) - redundancyRecheckInterval  = 3000ms
2020-10-23 22:41:06,731 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(632)) - encryptDataTransfer        = false
2020-10-23 22:41:06,731 [Listener at localhost/37463] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(633)) - maxNumBlocksToLog          = 1000
2020-10-23 22:41:06,731 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2020-10-23 22:41:06,731 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,732 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.9 GB = 19.4 MB
2020-10-23 22:41:06,732 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2020-10-23 22:41:06,734 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(335)) - ACLs enabled? true
2020-10-23 22:41:06,734 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(339)) - POSIX ACL inheritance enabled? true
2020-10-23 22:41:06,734 [Listener at localhost/37463] INFO  namenode.FSDirectory (FSDirectory.java:<init>(343)) - XAttrs enabled? true
2020-10-23 22:41:06,734 [Listener at localhost/37463] INFO  namenode.NameNode (FSDirectory.java:<init>(406)) - Caching file names occurring more than 10 times
2020-10-23 22:41:06,734 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (SnapshotManager.java:<init>(162)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotFSLimit: 65536, maxSnapshotLimit: 65536
2020-10-23 22:41:06,735 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (SnapshotManager.java:<init>(175)) - dfs.namenode.snapshot.deletion.ordered = false
2020-10-23 22:41:06,735 [Listener at localhost/37463] INFO  snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2020-10-23 22:41:06,735 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2020-10-23 22:41:06,735 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,736 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.9 GB = 4.8 MB
2020-10-23 22:41:06,736 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2020-10-23 22:41:06,737 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-10-23 22:41:06,737 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2020-10-23 22:41:06,737 [Listener at localhost/37463] INFO  metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-10-23 22:41:06,737 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(1105)) - Retry cache on namenode is enabled
2020-10-23 22:41:06,738 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(1113)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-10-23 22:41:06,738 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2020-10-23 22:41:06,738 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2020-10-23 22:41:06,738 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.9 GB = 595.2 KB
2020-10-23 22:41:06,739 [Listener at localhost/37463] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2020-10-23 22:41:06,778 [Listener at localhost/37463] INFO  common.Storage (Storage.java:tryLock(928)) - Lock on /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/in_use.lock acquired by nodename 674@b2149aa4f159
2020-10-23 22:41:06,829 [Listener at localhost/37463] INFO  common.Storage (Storage.java:tryLock(928)) - Lock on /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/in_use.lock acquired by nodename 674@b2149aa4f159
2020-10-23 22:41:06,831 [Listener at localhost/37463] INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(428)) - Recovering unfinalized segments in /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current
2020-10-23 22:41:06,831 [Listener at localhost/37463] INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(428)) - Recovering unfinalized segments in /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current
2020-10-23 22:41:06,832 [Listener at localhost/37463] INFO  namenode.FSImage (FSImage.java:loadFSImage(733)) - No edit log streams selected.
2020-10-23 22:41:06,832 [Listener at localhost/37463] INFO  namenode.FSImage (FSImage.java:loadFSImageFile(799)) - Planning to load image: FSImageFile(file=/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2020-10-23 22:41:06,833 [Listener at localhost/37463] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSectionHeader(413)) - Loading 1 INodes.
2020-10-23 22:41:06,837 [Listener at localhost/37463] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(371)) - Successfully loaded 1 inodes
2020-10-23 22:41:06,837 [Listener at localhost/37463] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:waitBlocksMapAndNameCacheUpdateFinished(344)) - Completed update blocks map and name cache, total waiting duration 0ms.
2020-10-23 22:41:06,837 [Listener at localhost/37463] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(253)) - Loaded FSImage in 0 seconds.
2020-10-23 22:41:06,838 [Listener at localhost/37463] INFO  namenode.FSImage (FSImage.java:loadFSImage(977)) - Loaded image for txid 0 from /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/fsimage_0000000000000000000
2020-10-23 22:41:06,838 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1227)) - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2020-10-23 22:41:06,840 [Listener at localhost/37463] INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(1391)) - Starting log segment at 1
2020-10-23 22:41:06,956 [Listener at localhost/37463] INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2020-10-23 22:41:06,956 [Listener at localhost/37463] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(800)) - Finished loading FSImage in 217 msecs
2020-10-23 22:41:06,957 [Listener at localhost/37463] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(454)) - RPC server is binding to localhost:0
2020-10-23 22:41:06,957 [Listener at localhost/37463] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(459)) - Enable NameNode state context:false
2020-10-23 22:41:06,957 [Listener at localhost/37463] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(92)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2020-10-23 22:41:06,958 [Socket Reader #1 for port 0] INFO  ipc.Server (Server.java:run(1250)) - Starting Socket Reader #1 for port 0
2020-10-23 22:41:06,961 [Listener at localhost/36353] INFO  namenode.NameNode (NameNode.java:initialize(774)) - Clients are to use localhost:36353 to access this namenode/service.
2020-10-23 22:41:06,962 [Listener at localhost/36353] INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(5508)) - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2020-10-23 22:41:07,022 [Listener at localhost/36353] INFO  namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(178)) - Number of blocks under construction: 0
2020-10-23 22:41:07,023 [Listener at localhost/36353] INFO  blockmanagement.DatanodeAdminDefaultMonitor (DatanodeAdminDefaultMonitor.java:processConf(116)) - Initialized the Default Decommission and Maintenance monitor
2020-10-23 22:41:07,025 [org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@1d3c112a] INFO  block.BlockTokenSecretManager (BlockTokenSecretManager.java:updateKeys(258)) - Updating block keys
2020-10-23 22:41:07,027 [Listener at localhost/36353] INFO  blockmanagement.BlockManager (BlockManager.java:initializeReplQueues(5179)) - initializing replication queues
2020-10-23 22:41:07,028 [Listener at localhost/36353] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(408)) - STATE* Leaving safe mode after 0 secs
2020-10-23 22:41:07,029 [Listener at localhost/36353] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(414)) - STATE* Network topology has 0 racks and 0 datanodes
2020-10-23 22:41:07,029 [Listener at localhost/36353] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(416)) - STATE* UnderReplicatedBlocks has 0 blocks
2020-10-23 22:41:07,040 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1492)) - IPC Server Responder: starting
2020-10-23 22:41:07,041 [IPC Server listener on 0] INFO  ipc.Server (Server.java:run(1329)) - IPC Server listener on 0: starting
2020-10-23 22:41:07,080 [Listener at localhost/36353] INFO  namenode.NameNode (NameNode.java:startCommonServices(891)) - NameNode RPC up at: localhost/127.0.0.1:36353
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3721)) - Total number of blocks            = 0
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3722)) - Number of invalid blocks          = 0
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3723)) - Number of under-replicated blocks = 0
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3724)) - Number of  over-replicated blocks = 0
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3726)) - Number of blocks being written    = 0
2020-10-23 22:41:07,081 [Reconstruction Queue Initializer] INFO  hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3729)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 52 msec
2020-10-23 22:41:07,081 [Listener at localhost/36353] INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1339)) - Starting services required for active state
2020-10-23 22:41:07,081 [Listener at localhost/36353] INFO  namenode.FSDirectory (FSDirectory.java:updateCountForQuota(849)) - Initializing quota with 4 thread(s)
2020-10-23 22:41:07,083 [Listener at localhost/36353] INFO  namenode.FSDirectory (FSDirectory.java:updateCountForQuota(858)) - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, NVDIMM=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2020-10-23 22:41:07,096 [CacheReplicationMonitor(1368622310)] INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-10-23 22:41:07,546 [Listener at localhost/36353] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1701)) - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2
2020-10-23 22:41:07,547 [Listener at localhost/36353] INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1
2020-10-23 22:41:07,548 [Listener at localhost/36353] INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2
2020-10-23 22:41:07,561 [Listener at localhost/36353] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2020-10-23 22:41:07,562 [Listener at localhost/36353] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:07,562 [Listener at localhost/36353] INFO  datanode.BlockScanner (BlockScanner.java:<init>(201)) - Initialized block scanner with targetBytesPerSec 1048576
2020-10-23 22:41:07,562 [Listener at localhost/36353] INFO  datanode.DataNode (DataNode.java:<init>(512)) - Configured hostname is 127.0.0.1
2020-10-23 22:41:07,563 [Listener at localhost/36353] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:07,563 [Listener at localhost/36353] INFO  datanode.DataNode (DataNode.java:startDataNode(1422)) - Starting DataNode with maxLockedMemory = 0
2020-10-23 22:41:07,564 [Listener at localhost/36353] INFO  datanode.DataNode (DataNode.java:initDataXceiver(1170)) - Opened streaming server at /127.0.0.1:39239
2020-10-23 22:41:07,564 [Listener at localhost/36353] INFO  datanode.DataNode (DataXceiverServer.java:<init>(94)) - Balancing bandwidth is 104857600 bytes/s
2020-10-23 22:41:07,564 [Listener at localhost/36353] INFO  datanode.DataNode (DataXceiverServer.java:<init>(95)) - Number threads for balancing is 100
2020-10-23 22:41:07,573 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:07,575 [Listener at localhost/36353] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-23 22:41:07,577 [Listener at localhost/36353] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(82)) - Http request log for http.requests.datanode is not defined
2020-10-23 22:41:07,577 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:07,579 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(1059)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-10-23 22:41:07,580 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1032)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2020-10-23 22:41:07,580 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-10-23 22:41:07,580 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:addFilter(1042)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-10-23 22:41:07,581 [Listener at localhost/36353] INFO  http.HttpServer2 (HttpServer2.java:bindListener(1280)) - Jetty bound to port 37573
2020-10-23 22:41:07,581 [Listener at localhost/36353] INFO  server.Server (Server.java:doStart(359)) - jetty-9.4.20.v20190813; built: 2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
2020-10-23 22:41:07,595 [Listener at localhost/36353] INFO  server.session (DefaultSessionIdManager.java:doStart(333)) - DefaultSessionIdManager workerName=node0
2020-10-23 22:41:07,595 [Listener at localhost/36353] INFO  server.session (DefaultSessionIdManager.java:doStart(338)) - No SessionScavenger set, using defaults
2020-10-23 22:41:07,596 [Listener at localhost/36353] INFO  server.session (HouseKeeper.java:startScavenging(140)) - node0 Scavenging every 660000ms
2020-10-23 22:41:07,596 [Listener at localhost/36353] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.s.ServletContextHandler@3380ca3d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2020-10-23 22:41:07,597 [Listener at localhost/36353] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.s.ServletContextHandler@c2df90e{static,/static,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,AVAILABLE}
2020-10-23 22:41:07,602 [Listener at localhost/36353] INFO  handler.ContextHandler (ContextHandler.java:doStart(824)) - Started o.e.j.w.WebAppContext@385d819{datanode,/,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode}
2020-10-23 22:41:07,603 [Listener at localhost/36353] INFO  server.AbstractConnector (AbstractConnector.java:doStart(293)) - Started ServerConnector@87d9a01{HTTP/1.1,[http/1.1]}{localhost:37573}
2020-10-23 22:41:07,603 [Listener at localhost/36353] INFO  server.Server (Server.java:doStart(399)) - Started @62042ms
2020-10-23 22:41:07,607 [Listener at localhost/36353] WARN  web.DatanodeHttpServer (RestCsrfPreventionFilterHandler.java:<init>(75)) - Got null for restCsrfPreventionFilter - will not do any filtering.
2020-10-23 22:41:07,617 [Listener at localhost/36353] INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(322)) - Listening HTTP traffic on /127.0.0.1:43455
2020-10-23 22:41:07,619 [Listener at localhost/36353] INFO  datanode.DataNode (DataNode.java:startDataNode(1450)) - dnUserName = jenkins
2020-10-23 22:41:07,619 [Listener at localhost/36353] INFO  datanode.DataNode (DataNode.java:startDataNode(1451)) - supergroup = supergroup
2020-10-23 22:41:07,620 [Listener at localhost/36353] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(92)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2020-10-23 22:41:07,619 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@6d963d70] INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2020-10-23 22:41:07,621 [Socket Reader #1 for port 0] INFO  ipc.Server (Server.java:run(1250)) - Starting Socket Reader #1 for port 0
2020-10-23 22:41:07,623 [Listener at localhost/46345] INFO  datanode.DataNode (DataNode.java:initIpcServer(1057)) - Opened IPC server at /127.0.0.1:46345
2020-10-23 22:41:07,636 [Listener at localhost/46345] INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2020-10-23 22:41:07,637 [Listener at localhost/46345] INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(210)) - Starting BPOfferServices for nameservices: <default>
2020-10-23 22:41:07,642 [Thread-4271] INFO  datanode.DataNode (BPServiceActor.java:run(846)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:36353 starting to offer service
2020-10-23 22:41:07,645 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1492)) - IPC Server Responder: starting
2020-10-23 22:41:07,649 [IPC Server listener on 0] INFO  ipc.Server (Server.java:run(1329)) - IPC Server listener on 0: starting
2020-10-23 22:41:07,701 [Thread-4271] INFO  datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(380)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:36353
2020-10-23 22:41:07,701 [Thread-4271] INFO  common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(356)) - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2020-10-23 22:41:07,742 [Listener at localhost/46345] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1701)) - Starting DataNode 1 with dfs.datanode.data.dir: [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3,[DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4
2020-10-23 22:41:07,743 [Listener at localhost/46345] INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3
2020-10-23 22:41:07,744 [Listener at localhost/46345] INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4
2020-10-23 22:41:07,749 [Thread-4271] INFO  common.Storage (Storage.java:tryLock(928)) - Lock on /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/in_use.lock acquired by nodename 674@b2149aa4f159
2020-10-23 22:41:07,749 [Thread-4271] INFO  common.Storage (DataStorage.java:loadStorageDirectory(284)) - Storage directory with location [DISK]file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1 is not formatted for namespace 593244777. Formatting...
2020-10-23 22:41:07,749 [Thread-4271] INFO  common.Storage (DataStorage.java:createStorageID(160)) - Generated new storageID DS-e9764209-9d9c-4f2f-843c-183cb415a2ec for directory /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1 
2020-10-23 22:41:07,749 [Listener at localhost/46345] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2020-10-23 22:41:07,750 [Listener at localhost/46345] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:07,750 [Listener at localhost/46345] INFO  datanode.BlockScanner (BlockScanner.java:<init>(201)) - Initialized block scanner with targetBytesPerSec 1048576
2020-10-23 22:41:07,751 [Listener at localhost/46345] INFO  datanode.DataNode (DataNode.java:<init>(512)) - Configured hostname is 127.0.0.1
2020-10-23 22:41:07,751 [Listener at localhost/46345] INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-10-23 22:41:07,751 [Listener at localhost/46345] INFO  datanode.DataNode (DataNode.java:startDataNode(1422)) - Starting DataNode with maxLockedMemory = 0
2020-10-23 22:41:07,752 [Listener at localhost/46345] INFO  datanode.DataNode (DataNode.java:initDataXceiver(1170)) - Opened streaming server at /127.0.0.1:35165
2020-10-23 22:41:07,752 [Listener at localhost/46345] INFO  datanode.DataNode (DataXceiverServer.java:<init>(94)) - Balancing bandwidth is 104857600 bytes/s
2020-10-23 22:41:07,752 [Listener at localhost/46345] INFO  datanode.DataNode (DataXceiverServer.java:<init>(95)) - Number threads for balancing is 100
2020-10-23 22:41:07,755 [Listener at localhost/46345] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:07,757 [Listener at localhost/46345] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-23 22:41:07,775 [Listener at localhost/46345] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(82)) - Http request log for http.requests.datanode is not defined
2020-10-23 22:41:07,775 [Listener at localhost/46345] INFO  http.HttpServer2 (HttpServer2.java:getWebAppsPath(1141)) - Web server is in development mode. Resources will be read from the source tree.
2020-10-23 22:41:07,778 [Li
...[truncated 774850 chars]...
b415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40640 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479720 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40640 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40616 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479693 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40616 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40490 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479440 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40564 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479528 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40588 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479561 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40540 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479500 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,340 [Listener at localhost/45103] INFO  ipc.Server (Server.java:stop(3400)) - Stopping server on 46345
2020-10-23 22:41:18,340 [DataXceiver for client  at /127.0.0.1:40464 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (BlockSender.java:sendPacket(655)) - BlockSender.sendChunks() exception: 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479392 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40490 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40490 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40490
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479440 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [Command processor] ERROR datanode.DataNode (BPServiceActor.java:processQueue(1329)) - Command processor encountered interrupt and exit.
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40616 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40616
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479693 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40640 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40640
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479720 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,341 [DataXceiver for client  at /127.0.0.1:40514 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40514
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479468 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40514 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40514 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479468 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40640 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40640 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479720 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40616 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40616 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479693 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40490 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40490 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479440 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,343 [BP-1233294053-172.17.0.2-1603492866371 heartbeating to localhost/127.0.0.1:36353] WARN  datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted
2020-10-23 22:41:18,343 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1497)) - Stopping IPC Server Responder
2020-10-23 22:41:18,343 [DataXceiver for client  at /127.0.0.1:40464 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,342 [IPC Server listener on 0] INFO  ipc.Server (Server.java:run(1361)) - Stopping IPC Server listener on 0
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40464 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40464
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479392 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40540 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40588 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,344 [BP-1233294053-172.17.0.2-1603492866371 heartbeating to localhost/127.0.0.1:36353] WARN  datanode.DataNode (BPServiceActor.java:run(889)) - Ending block pool service for: Block pool BP-1233294053-172.17.0.2-1603492866371 (Datanode Uuid 558a8b95-8213-4cf4-913d-744173bf0ab0) service to localhost/127.0.0.1:36353
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40588 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40588
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479561 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40564 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] INFO  datanode.BlockScanner (BlockScanner.java:markSuspectBlock(369)) - Not scanning suspicious block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 on DS-e9764209-9d9c-4f2f-843c-183cb415a2ec, because there is no volume scanner for that storageId.
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40564 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40564
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479528 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,346 [DataXceiver for client  at /127.0.0.1:40564 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40564 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479528 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40540 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] WARN  datanode.DataNode (DataXceiver.java:readBlock(646)) - DatanodeRegistration(127.0.0.1:39239, datanodeUuid=558a8b95-8213-4cf4-913d-744173bf0ab0, infoPort=43455, infoSecurePort=0, ipcPort=46345, storageInfo=lv=-57;cid=testClusterID;nsid=593244777;c=1603492866371):Got exception while serving BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001 to /127.0.0.1:40540
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479500 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40464 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40464 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479392 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,346 [DataXceiver for client  at /127.0.0.1:40540 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40540 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479500 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,346 [BP-1233294053-172.17.0.2-1603492866371 heartbeating to localhost/127.0.0.1:36353] INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1233294053-172.17.0.2-1603492866371 (Datanode Uuid 558a8b95-8213-4cf4-913d-744173bf0ab0)
2020-10-23 22:41:18,346 [BP-1233294053-172.17.0.2-1603492866371 heartbeating to localhost/127.0.0.1:36353] INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2950)) - Removing block pool BP-1233294053-172.17.0.2-1603492866371
2020-10-23 22:41:18,347 [refreshUsed-/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1/current/BP-1233294053-172.17.0.2-1603492866371] WARN  fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(211)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
2020-10-23 22:41:18,345 [DataXceiver for client  at /127.0.0.1:40588 [Sending block BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775786_1001]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:39239:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:40588 dst: /127.0.0.1:39239
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. Total timeout mills is 480000, 479561 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)
	at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:245)
	at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:171)
	at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:219)
	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
	at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
	at java.lang.Thread.run(Thread.java:748)
2020-10-23 22:41:18,348 [refreshUsed-/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2/current/BP-1233294053-172.17.0.2-1603492866371] WARN  fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(211)) - Thread Interrupted waiting to refresh disk information: sleep interrupted
2020-10-23 22:41:18,348 [Listener at localhost/45103] INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(193)) - Shutting down all async disk service threads
2020-10-23 22:41:18,348 [Listener at localhost/45103] INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(201)) - All async disk service threads have been shut down
2020-10-23 22:41:18,348 [Listener at localhost/45103] INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(177)) - Shutting down all async lazy persist service threads
2020-10-23 22:41:18,348 [Listener at localhost/45103] INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(184)) - All async lazy persist service threads have been shut down
2020-10-23 22:41:18,348 [Listener at localhost/45103] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system...
2020-10-23 22:41:18,349 [Listener at localhost/45103] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped.
2020-10-23 22:41:18,349 [Listener at localhost/45103] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(611)) - DataNode metrics system shutdown complete.
2020-10-23 22:41:18,350 [Listener at localhost/45103] INFO  datanode.DataNode (DataNode.java:shutdown(2193)) - Shutdown complete.
2020-10-23 22:41:18,350 [Listener at localhost/45103] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(2172)) - Shutting down the namenode
2020-10-23 22:41:18,350 [Listener at localhost/45103] INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1456)) - Stopping services started for active state
2020-10-23 22:41:18,350 [Listener at localhost/45103] INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1441)) - Ending log segment 1, 36
2020-10-23 22:41:18,351 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@f613067] INFO  namenode.FSNamesystem (FSNamesystem.java:run(4609)) - LazyPersistFileScrubber was interrupted, exiting
2020-10-23 22:41:18,351 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1578b8ec] INFO  namenode.FSNamesystem (FSNamesystem.java:run(4513)) - NameNodeEditLogRoller was interrupted, exiting
2020-10-23 22:41:18,352 [Listener at localhost/45103] INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(778)) - Number of transactions: 37 Total time for transactions(ms): 13 Number of transactions batched in Syncs: 12 Number of syncs: 26 SyncTimes(ms): 3 1 
2020-10-23 22:41:18,352 [Listener at localhost/45103] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(145)) - Finalizing edits file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/edits_inprogress_0000000000000000001 -> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000037
2020-10-23 22:41:18,353 [Listener at localhost/45103] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(145)) - Finalizing edits file /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/edits_inprogress_0000000000000000001 -> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000037
2020-10-23 22:41:18,353 [FSEditLogAsync] INFO  namenode.FSEditLog (FSEditLogAsync.java:run(253)) - FSEditLogAsync was interrupted, exiting
2020-10-23 22:41:18,353 [CacheReplicationMonitor(1368622310)] INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(169)) - Shutting down CacheReplicationMonitor
2020-10-23 22:41:18,353 [Listener at localhost/45103] INFO  ipc.Server (Server.java:stop(3400)) - Stopping server on 36353
2020-10-23 22:41:18,354 [IPC Server listener on 0] INFO  ipc.Server (Server.java:run(1361)) - Stopping IPC Server listener on 0
2020-10-23 22:41:18,354 [RedundancyMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4944)) - Stopping RedundancyMonitor.
2020-10-23 22:41:18,360 [StorageInfoMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4979)) - Stopping thread.
2020-10-23 22:41:18,359 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1497)) - Stopping IPC Server Responder
2020-10-23 22:41:18,367 [Listener at localhost/45103] INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1456)) - Stopping services started for active state
2020-10-23 22:41:18,367 [Listener at localhost/45103] INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1557)) - Stopping services started for standby state
2020-10-23 22:41:18,368 [Listener at localhost/45103] INFO  handler.ContextHandler (ContextHandler.java:doStop(1007)) - Stopped o.e.j.w.WebAppContext@236fdf{hdfs,/,null,UNAVAILABLE}{file:/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs}
2020-10-23 22:41:18,370 [Listener at localhost/45103] INFO  server.AbstractConnector (AbstractConnector.java:doStop(343)) - Stopped ServerConnector@49c099b{HTTP/1.1,[http/1.1]}{localhost:0}
2020-10-23 22:41:18,370 [Listener at localhost/45103] INFO  server.session (HouseKeeper.java:stopScavenging(158)) - node0 Stopped scavenging
2020-10-23 22:41:18,370 [Listener at localhost/45103] INFO  handler.ContextHandler (ContextHandler.java:doStop(1007)) - Stopped o.e.j.s.ServletContextHandler@f2d890c{static,/static,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/,UNAVAILABLE}
2020-10-23 22:41:18,371 [Listener at localhost/45103] INFO  handler.ContextHandler (ContextHandler.java:doStop(1007)) - Stopped o.e.j.s.ServletContextHandler@61149fa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2408/src/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}

@goiri goiri changed the title HDFS-15643. TestFileChecksumCompositeCrc is flaky HDFS-15643. TestFileChecksumCompositeCrc fails intermittently. Oct 27, 2020
DataNode dnToDie = cluster.getDataNodes().get(dnIdxToDie);
shutdownDataNode(dnToDie);
// wait enough time for the locations to be updated.
Thread.sleep(STALE_INTERVAL);
Copy link
Member

Should we waitFor instead?

Copy link
Contributor Author

Yes, I agree with you @goiri.
I experimented with waiting for the number of live replicas: that did not work. It stayed at 8 and did not go back to 9.
Do you have a suggestion for which condition we should be waiting on?
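
For reference, a minimal sketch of the waitFor approach, using Hadoop's GenericTestUtils.waitFor to poll a condition with a bounded timeout instead of sleeping for a fixed STALE_INTERVAL. The polled predicate (the NameNode's live-DataNode count) is only an illustrative guess at a suitable condition, not necessarily the one this test needs; cluster and dnIdxToDie are the test's existing fields, and the interval/timeout values are arbitrary:

import org.apache.hadoop.test.GenericTestUtils;

DataNode dnToDie = cluster.getDataNodes().get(dnIdxToDie);
shutdownDataNode(dnToDie);
// Poll until the NameNode stops counting the stopped DataNode as live,
// rather than sleeping unconditionally for STALE_INTERVAL.
GenericTestUtils.waitFor(
    () -> cluster.getNameNode().getNamesystem().getNumLiveDataNodes()
        < cluster.getDataNodes().size(),
    100,     // re-check every 100 ms
    30000);  // give up (and fail the test) after 30 s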

Copy link
Member

I am not very close to this part of the code, but there must be ways to force the statistics to update.
I am not sure who can help with this part of the code.

Copy link
Contributor Author

I see. The problem is that I cannot reproduce it on my local machine. However, it seems to fail consistently on Yetus.
If it is not a real bug, I wonder if the volume scanner could be a factor in randomly slowing down the DNs. I see many log messages from the volume scanner when I run locally.
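
If the volume scanner is a suspect, one cheap experiment is to disable it for this test: per hdfs-default.xml, setting dfs.block.scanner.volume.bytes.per.second to 0 disables the DataNode block scanner. A minimal sketch of the test setup with that override (numDNs stands in for whatever DataNode count the test already uses):

Configuration conf = new HdfsConfiguration();
// A byte rate of 0 disables the DataNode volume/block scanner.
conf.setLong("dfs.block.scanner.volume.bytes.per.second", 0L);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(numDNs)
    .build();
cluster.waitActive();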

Copy link
Member

I see there is truncation there too, though.
We may want to make the test a little less verbose.

Copy link
Contributor Author

Thanks @goiri.
Those are the logs I was looking at. All logs in TestFileChecksum and TestFileChecksumCompositeCrc are truncated to the last 9 seconds prior to the failure.

Copy link
Member

I could reproduce the failure locally:

$ ./start-build-env.sh
$ mvn clean install -DskipTests -Pnative
$ cd hadoop-hdfs-project/hadoop-hdfs
$ mvn test -Pnative -Pparallel-tests

Attached the stdout in the JIRA: https://issues.apache.org/jira/secure/attachment/13014321/org.apache.hadoop.hdfs.TestFileChecksum-output.txt

Copy link
Member

I could reproduce even without -Pparallel-tests

$ pwd
/home/aajisaka/hadoop/hadoop-hdfs-project/hadoop-hdfs
$ mvn test -Dtest=TestFileChecksum -Pnative

@hadoop-yetus
Copy link

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 2s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 0m 0s test4tests The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 32m 47s trunk passed
+1 💚 compile 1m 19s trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 compile 1m 11s trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 checkstyle 0m 46s trunk passed
+1 💚 mvnsite 1m 21s trunk passed
+1 💚 shadedclient 18m 54s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 52s trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javadoc 1m 22s trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+0 🆗 spotbugs 3m 10s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 8s trunk passed
_ Patch Compile Tests _
+1 💚 mvninstall 1m 11s the patch passed
+1 💚 compile 1m 16s the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javac 1m 16s the patch passed
+1 💚 compile 1m 6s the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 javac 1m 6s the patch passed
+1 💚 checkstyle 0m 40s the patch passed
+1 💚 mvnsite 1m 13s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 shadedclient 16m 14s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 0m 48s the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
+1 💚 javadoc 1m 20s the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
+1 💚 findbugs 3m 18s the patch passed
_ Other Tests _
-1 ❌ unit 114m 44s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch passed.
+1 💚 asflicense 0m 37s The patch does not generate ASF License warnings.
206m 56s
Reason Tests
Failed junit tests hadoop.hdfs.server.namenode.TestFsck
hadoop.hdfs.TestRollingUpgrade
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache
hadoop.hdfs.TestMultipleNNPortQOP
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/artifact/out/Dockerfile
GITHUB PR #2408
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux 1055ba63ccdd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / ae74407
Default Java Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/testReport/
Max. process+thread count 2711 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/console
versions git=2.17.1 maven=3.6.0 findbugs=4.1.3
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@amahussein
Copy link
Contributor Author

That's trickier than I thought. It needs another iteration.

@amahussein
Copy link
Contributor Author

@goiri and @aajisaka, do you guys know if it is possible to see the full logs of the unit test?
The Yetus console and test reports show truncated logs, so I cannot see the sequence of events that leads to the exception and the stack trace. I cannot reproduce the failure locally either :/

...[truncated 896674 chars]...
sed]. Total timeout mills is 480000, 479813 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:351)

@aajisaka
Copy link
Member

do you guys know if it is possible to see the full logs of the unit test?

I think the keepLongStdio option can be used https://www.jenkins.io/doc/pipeline/steps/junit/
The option can be enabled by updating the ./Jenkinsfile as follows:

-                junit "${env.SOURCEDIR}/**/target/surefire-reports/*.xml"
+                junit keepLongStdio: true, testResults: "${env.SOURCEDIR}/**/target/surefire-reports/*.xml"

@amahussein
Copy link
Contributor Author

I think the keepLongStdio option can be used https://www.jenkins.io/doc/pipeline/steps/junit/
The option can be enabled by updating the ./Jenkinsfile as follows:

-                junit "${env.SOURCEDIR}/**/target/surefire-reports/*.xml"
+                junit keepLongStdio: true, testResults: "${env.SOURCEDIR}/**/target/surefire-reports/*.xml"

Thanks @aajisaka !
I am going to try it out.

@amahussein
Copy link
Contributor Author

Closing this. See the fix for the EC buffer in #2424.

@amahussein amahussein closed this Oct 30, 2020