
Conversation

@bgaborg commented Jun 24, 2019

…MetadataStore interface

@bgaborg (Author) commented Jun 24, 2019

Tests run against ireland. Got a few errors in the sequential tests:

[ERROR] Failures:
[ERROR]   ITestS3AContractRootDir.testListEmptyRootDirectory:63->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:192->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:88 listFiles(/, true).hasNext
[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:222->Assert.assertTrue:41->Assert.fail:88 files mismatch: between
  "s3a://gabota-versioned-bucket-ireland/file.txt"
] and
  "s3a://gabota-versioned-bucket-ireland/file.txt"
  "s3a://gabota-versioned-bucket-ireland/fork-0003/test/testSelectEmptyFile"
  "s3a://gabota-versioned-bucket-ireland/fork-0003/test/testSelectEmptyFileWithConditions"
]
[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive:132->Assert.fail:88 non recursive delete should have raised an exception, but completed with exit code true
[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmRootRecursive:157->AbstractFSContractTestBase.assertPathDoesNotExist:305->Assert.fail:88 expected file to be deleted: unexpectedly found /testRmRootRecursive as  S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/testRmRootRecursive; isDirectory=false; length=0; replication=1; blocksize=33554432; modification_time=1561403043000; access_time=0; owner=gaborbota; group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=d41d8cd98f00b204e9800998ecf8427e versionId=w4._phxXR86E6rKJFTQ4VunZaDEo1wbU
[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testSimpleRootListing:207->Assert.assertEquals:631->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88 expected:<1> but was:<2>

I don't think those are related, and they should go away if I clear the bucket and use a fresh DDB table. I'll check tomorrow what the cause of this could be.

@bgaborg (Author) commented Jun 24, 2019

Yes, org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir is failing even without my patch on trunk! It would be worth looking into.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 507 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 9 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1156 trunk passed
+1 compile 34 trunk passed
+1 checkstyle 20 trunk passed
+1 mvnsite 37 trunk passed
+1 shadedclient 711 branch has no errors when building and testing our client artifacts.
+1 javadoc 23 trunk passed
0 spotbugs 61 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 58 trunk passed
_ Patch Compile Tests _
+1 mvninstall 35 the patch passed
+1 compile 28 the patch passed
+1 javac 28 the patch passed
-0 checkstyle 17 hadoop-tools/hadoop-aws: The patch generated 23 new + 53 unchanged - 2 fixed = 76 total (was 55)
+1 mvnsite 34 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 715 patch has no errors when building and testing our client artifacts.
+1 javadoc 19 the patch passed
+1 findbugs 61 the patch passed
_ Other Tests _
+1 unit 278 hadoop-aws in the patch passed.
+1 asflicense 31 The patch does not generate ASF License warnings.
3857
Subsystem Report/Notes
Docker Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/artifact/out/Dockerfile
GITHUB PR #1009
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux a02e6fd900d2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 129576f
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/testReport/
Max. process+thread count 416 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

Contributor commented:

line width

Contributor commented:

I've made the FileSystem TTL provider one of the attributes you can get from the FS via a StoreContext.

bindToOwnerFilesystem() is picking this up; it looks like all that is needed is to delete the timeProvider = new S3Guard.TtlTimeProvider(conf); line for the owner binding to be picked up.
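
A minimal sketch of what that would look like; apart from bindToOwnerFilesystem(), createStoreContext() and the timeProvider assignment quoted in this thread, the names used here (the owner field, getTimeProvider()) are assumptions for illustration, not the actual Hadoop source:

// Hypothetical sketch: reuse the TTL provider already exposed by the owning
// S3AFileSystem via its StoreContext instead of overwriting it with a new instance.
void bindToOwnerFilesystem(S3AFileSystem fs) {
  this.owner = fs;
  // deleted: timeProvider = new S3Guard.TtlTimeProvider(conf);
  this.timeProvider = fs.createStoreContext().getTimeProvider();  // assumed accessor
}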

Contributor commented:

Discussed offline with Gabor. Outcome of that conversation: bindToOwnerFileSystem doesn't exist everywhere and there isn't already a context created outside of the context (ha!) of certain operations. But we should have a context created earlier since it doesn't contain state that changes between operations (I actually wonder why we're creating a new instance for every operation instead of the metadatastore getting a permanent context). We need to check the context is complete enough, as this is called during FS initialization, precisely when the createStoreContext() javadoc warns you to be careful :)

Author commented:

I think we should avoid that for now and maybe revisit it in another refactor. I've fixed all the other issues with my latest commit. Excluding this would change the whole point of this jira/pr.

Contributor commented:

OK. Just be aware, I consider it transient.

@mackrorysd, as to why the context changes: it includes the metastore, so once that is created, things change. But yes, everything in init is brittle, including stuff like setting up delegation tokens (needed before you make any S3 calls).

Contributor commented:

See discussion in DynamoDBMetadataStore.initialize() about why we don't need an explicit entry here

@steveloughran (Contributor)

Review summary

  • IDE converted methods to single lines; they need to be restored to multiline entries < 80 char width.
  • the initialize(FileSystem, ITtlProvider) call could be replaced with just FileSystem, given that we only support the S3A FS for DDB and it already has a way to get that TTL provider; that mechanism is there, it is just being overwritten right now.

The main arguments in favour of that explicit binding to a TTL provider are

  • pulls out the TTL provider as more important
  • nominally makes support for other filesystems easier (after all, LocalMS doesn't care, and we haven't yet changed the DynamoDB.initialize code to explicitly ask for an S3AFileSystem instance)

If you look at the slow refactoring I've started in HADOOP-15183, you can see that I've been pushing stuff into StoreContext, with the ultimate goal that all subsidiary parts of the s3a codebase (metastore, delegation tokens, S3ABlockOutputStream, etc.) no longer get handed an S3AFS instance, just a StoreContext with access to lower-level operations through binding callbacks (WriteOperationHelper, etc.). Why so: it lines us up for actually splitting S3AFS into layers.

What does that mean now? Well, not much: we can cast to S3AFS and extract the TTL provider, or take it as an argument as this patch does. I don't see it being that significant either way, except that ultimately I do hope to replace that initialize(FileSystem) with an initialize(StoreContext), and, as that serves up the TTL provider, it would be the consistent way to get it.
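
To make the trade-off concrete, a rough sketch of the two shapes under discussion; the signatures are illustrative only (ITtlProvider and StoreContext are the names used in this thread, getTtlProvider() is an assumed accessor), not the committed interface:

// Shape this patch takes: the caller supplies the TTL provider explicitly.
void initialize(FileSystem fs, ITtlProvider ttlProvider) throws IOException;

// Possible future shape once StoreContext serves up the TTL provider itself:
void initialize(StoreContext context) throws IOException;
// ...inside which a metastore implementation could simply do:
//   this.ttlProvider = context.getTtlProvider();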

@bgaborg force-pushed the HADOOP-16383-ttl-metadatastore-init branch from e551a7c to c6cca7e on July 3, 2019 at 11:02
@bgaborg (Author) commented Jul 3, 2019

rebased to trunk

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 32 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 9 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1093 trunk passed
+1 compile 32 trunk passed
+1 checkstyle 19 trunk passed
+1 mvnsite 36 trunk passed
+1 shadedclient 661 branch has no errors when building and testing our client artifacts.
+1 javadoc 22 trunk passed
0 spotbugs 57 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 55 trunk passed
_ Patch Compile Tests _
+1 mvninstall 31 the patch passed
+1 compile 27 the patch passed
+1 javac 27 the patch passed
-0 checkstyle 16 hadoop-tools/hadoop-aws: The patch generated 1 new + 53 unchanged - 2 fixed = 54 total (was 55)
+1 mvnsite 31 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 693 patch has no errors when building and testing our client artifacts.
+1 javadoc 23 the patch passed
+1 findbugs 62 the patch passed
_ Other Tests _
+1 unit 276 hadoop-aws in the patch passed.
+1 asflicense 25 The patch does not generate ASF License warnings.
3204
Subsystem Report/Notes
Docker Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/2/artifact/out/Dockerfile
GITHUB PR #1009
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux f7c7d04e7b22 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 15d82fc
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/2/testReport/
Max. process+thread count 452 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/2/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 31 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 9 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1027 trunk passed
+1 compile 31 trunk passed
+1 checkstyle 20 trunk passed
+1 mvnsite 35 trunk passed
+1 shadedclient 659 branch has no errors when building and testing our client artifacts.
+1 javadoc 23 trunk passed
0 spotbugs 52 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 49 trunk passed
_ Patch Compile Tests _
+1 mvninstall 28 the patch passed
+1 compile 30 the patch passed
+1 javac 30 the patch passed
-0 checkstyle 16 hadoop-tools/hadoop-aws: The patch generated 1 new + 53 unchanged - 2 fixed = 54 total (was 55)
+1 mvnsite 35 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 701 patch has no errors when building and testing our client artifacts.
+1 javadoc 24 the patch passed
+1 findbugs 62 the patch passed
_ Other Tests _
+1 unit 276 hadoop-aws in the patch passed.
+1 asflicense 32 The patch does not generate ASF License warnings.
3161
Subsystem Report/Notes
Docker Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/3/artifact/out/Dockerfile
GITHUB PR #1009
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux 14588643f78a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 15d82fc
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/3/testReport/
Max. process+thread count 437 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/3/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@bgaborg (Author) commented Jul 3, 2019

Test run against ireland:

[ERROR] Errors:
[ERROR]   ITestMagicCommitMRJob>AbstractITCommitMRJob.testMRJob:137->AbstractFSContractTestBase.assertIsDirectory:327 ? FileNotFound
[ERROR]   ITestDirectoryCommitMRJob>AbstractITCommitMRJob.testMRJob:137->AbstractFSContractTestBase.assertIsDirectory:327 ? FileNotFound
[ERROR]   ITestPartitionCommitMRJob>AbstractITCommitMRJob.testMRJob:137->AbstractFSContractTestBase.assertIsDirectory:327 ? FileNotFound
[ERROR]   ITestStagingCommitMRJob>AbstractITCommitMRJob.testMRJob:137->AbstractFSContractTestBase.assertIsDirectory:327 ? FileNotFound
[ERROR]   ITestS3GuardConcurrentOps.testConcurrentTableCreations:168->deleteTable:77 ? IllegalArgument

AbstractITCommitMRJob.testMRJob is a known failure; stack trace for deleteTable:

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 260.492 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[ERROR] testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)  Time elapsed: 260.363 s  <<< ERROR!
java.lang.IllegalArgumentException: Table s3guard.test.testConcurrentTableCreations1904337035 is not deleted.
	at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.deleteTable(ITestS3GuardConcurrentOps.java:77)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.waiters.WaiterTimedOutException: Reached maximum attempts without transitioning to the desired state
	at com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:86)
	at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
	at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:502)
	... 16 more

@bgaborg force-pushed the HADOOP-16383-ttl-metadatastore-init branch from c6cca7e to b7bbdb6 on July 15, 2019 at 16:27
…MetadataStore interface

Fix things based on review and checkstyle issues
@bgaborg force-pushed the HADOOP-16383-ttl-metadatastore-init branch from b7bbdb6 to 32e51b0 on July 17, 2019 at 09:15
@bgaborg (Author) commented Jul 17, 2019

Tested against ireland with dynamo: just the known testMRJob errors.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 40 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 9 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1183 trunk passed
+1 compile 32 trunk passed
+1 checkstyle 22 trunk passed
+1 mvnsite 38 trunk passed
+1 shadedclient 719 branch has no errors when building and testing our client artifacts.
+1 javadoc 26 trunk passed
0 spotbugs 67 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 65 trunk passed
_ Patch Compile Tests _
+1 mvninstall 31 the patch passed
+1 compile 28 the patch passed
+1 javac 28 the patch passed
-0 checkstyle 16 hadoop-tools/hadoop-aws: The patch generated 1 new + 58 unchanged - 2 fixed = 59 total (was 60)
+1 mvnsite 35 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 743 patch has no errors when building and testing our client artifacts.
+1 javadoc 21 the patch passed
+1 findbugs 69 the patch passed
_ Other Tests _
+1 unit 281 hadoop-aws in the patch passed.
+1 asflicense 23 The patch does not generate ASF License warnings.
3467
Subsystem Report/Notes
Docker Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/5/artifact/out/Dockerfile
GITHUB PR #1009
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux 3d2d9839a998 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 85d9111
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/5/testReport/
Max. process+thread count 411 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/5/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 34 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 9 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1048 trunk passed
+1 compile 37 trunk passed
+1 checkstyle 27 trunk passed
+1 mvnsite 41 trunk passed
+1 shadedclient 738 branch has no errors when building and testing our client artifacts.
+1 javadoc 24 trunk passed
0 spotbugs 64 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 63 trunk passed
_ Patch Compile Tests _
+1 mvninstall 33 the patch passed
+1 compile 31 the patch passed
+1 javac 31 the patch passed
+1 checkstyle 19 hadoop-tools/hadoop-aws: The patch generated 0 new + 58 unchanged - 2 fixed = 58 total (was 60)
+1 mvnsite 59 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 725 patch has no errors when building and testing our client artifacts.
+1 javadoc 25 the patch passed
+1 findbugs 63 the patch passed
_ Other Tests _
+1 unit 275 hadoop-aws in the patch passed.
+1 asflicense 32 The patch does not generate ASF License warnings.
3362
Subsystem Report/Notes
Docker Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/6/artifact/out/Dockerfile
GITHUB PR #1009
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux c9b3de764d7b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / ee3115f
Default Java 1.8.0_212
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/6/testReport/
Max. process+thread count 442 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/6/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@steveloughran (Contributor)

LGTM. I'm +0 on using ttlTp as the parameter name, as it's not that intuitive; timeSource would be more descriptive.

But I'm not that concerned

+1

@bgaborg (Author) commented Jul 17, 2019

Found an unrelated test error: org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal#testInitNegativeRead is failing:

[ERROR] Tests run: 30, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 98.684 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal
[ERROR] testInitNegativeRead(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  Time elapsed: 1.679 s  <<< ERROR!
java.lang.IllegalArgumentException: bucket
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
	at org.apache.hadoop.fs.s3a.S3AUtils.propagateBucketOptions(S3AUtils.java:1134)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Init.run(S3GuardTool.java:487)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:137)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase$1.call(AbstractS3GuardToolTestBase.java:154)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase$1.call(AbstractS3GuardToolTestBase.java:151)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.runToFailure(AbstractS3GuardToolTestBase.java:150)

Given that the failure is unrelated (it fails on trunk as well), I will create a new jira for it and this can go in.

@bgaborg merged commit c58e11b into apache:trunk on Jul 17, 2019
@bgaborg (Author) commented Jul 17, 2019

Created https://issues.apache.org/jira/browse/HADOOP-16436 for the unrelated test failure.

smengcl pushed a commit to smengcl/hadoop that referenced this pull request Oct 8, 2019
…MetadataStore interface. Contributed by Gabor Bota. (apache#1009)

(cherry picked from commit c58e11b)
Change-Id: I8e2589c539c635f36e128029e8d5ffdcbdbc2994
shanthoosh pushed a commit to shanthoosh/hadoop that referenced this pull request Oct 15, 2019
amahussein pushed a commit to amahussein/hadoop that referenced this pull request Oct 29, 2019
…MetadataStore interface. Contributed by Gabor Bota. (apache#1009)