
Conversation

steveloughran
Contributor

@steveloughran commented Feb 29, 2024

This is consistent with the Java value.

I had thought of cutting all the fs.s3a settings from core-default, but I think we need to review our public docs before doing that. Having to look at Constants.java shouldn't be the default way to learn about an option.
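For reference, the change boils down to a core-default.xml property along these lines; the 30s value matches the downstream Spark workaround quoted later on this page, and the description text is illustrative rather than copied from the patch:

```xml
<property>
  <name>fs.s3a.connection.establish.timeout</name>
  <value>30s</value>
  <description>
    Timeout for establishing an S3 connection.
    Hadoop time suffixes (ms, s, m) are accepted.
  </description>
</property>
```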

How was this patch tested?

Commented out the timeout in my auth-keys file (so it wasn't stamping on this default) and ran the tests. Then used ripgrep to look for the "is too low" message; it was only found in test cases where we explicitly created the problem.

```
2:2024-02-29 10:48:03,153 [JUnit-testMinimumDurationWins] WARN  impl.ConfigurationHelper (LogExactlyOnce.java:warn(39)) - Option fs.s3a.connection.acquisition.timeout is too low (1,000 ms). Setting to 15,000 ms instead
3:2024-02-29 10:48:03,153 [JUnit-testMinimumDurationWins] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:enforceMinimumDuration(127)) - Option fs.s3a.connection.acquisition.timeout is too low (1,000 ms). Setting to 15,000 ms instead
6:2024-02-29 10:48:03,153 [JUnit-testMinimumDurationWins] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:enforceMinimumDuration(127)) - Option fs.s3a.connection.establish.timeout is too low (1,000 ms). Setting to 15,000 ms instead
9:2024-02-29 10:48:03,154 [JUnit-testMinimumDurationWins] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:enforceMinimumDuration(127)) - Option fs.s3a.connection.timeout is too low (1,000 ms). Setting to 15,000 ms instead
22:2024-02-29 10:48:03,310 [JUnit-testEnforceMinDuration] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:enforceMinimumDuration(127)) - Option key is too low (1,000 ms). Setting to 10,000 ms instead
```
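For context, here is a hedged sketch of the kind of floor-enforcement these log lines come from. The real logic lives in `org.apache.hadoop.fs.s3a.impl.ConfigurationHelper.enforceMinimumDuration()`; everything below is illustrative rather than the actual implementation:

```java
// Illustrative sketch of minimum-duration enforcement; not the
// actual ConfigurationHelper implementation.
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

final class MinimumDurationSketch {

  private MinimumDurationSketch() {
  }

  /**
   * Read a duration option, raising it to a minimum value and
   * warning when the configured value is too low.
   */
  static Duration enforceMinimumDuration(Configuration conf,
      String key, Duration minimum) {
    long value = conf.getTimeDuration(key,
        minimum.toMillis(), TimeUnit.MILLISECONDS);
    if (value < minimum.toMillis()) {
      // The real code logs this warning exactly once per option.
      System.err.printf(
          "Option %s is too low (%,d ms). Setting to %,d ms instead%n",
          key, value, minimum.toMillis());
      conf.setTimeDuration(key, minimum.toMillis(), TimeUnit.MILLISECONDS);
      return minimum;
    }
    return Duration.ofMillis(value);
  }
}
```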

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

This is consistent with the Java value.

Change-Id: Ib24f4057f778206d6d59230de02037c5ada209f4
@hadoop-yetus

💔 -1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|--------:|:-------:|:--------|
| +0 🆗 | reexec | 0m 19s |  | Docker mode activated. |
|  | _ Prechecks _ |  |  |  |
| +1 💚 | dupname | 0m 0s |  | No case conflicting files found. |
| +0 🆗 | codespell | 0m 0s |  | codespell was not available. |
| +0 🆗 | detsecrets | 0m 0s |  | detect-secrets was not available. |
| +0 🆗 | xmllint | 0m 0s |  | xmllint was not available. |
| +1 💚 | @author | 0m 0s |  | The patch does not contain any @author tags. |
| -1 ❌ | test4tests | 0m 0s |  | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|  | _ trunk Compile Tests _ |  |  |  |
| +1 💚 | mvninstall | 31m 48s |  | trunk passed |
| +1 💚 | compile | 8m 52s |  | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 💚 | compile | 8m 8s |  | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 💚 | mvnsite | 0m 54s |  | trunk passed |
| +1 💚 | javadoc | 0m 50s |  | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 💚 | javadoc | 0m 33s |  | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 💚 | shadedclient | 70m 50s |  | branch has no errors when building and testing our client artifacts. |
|  | _ Patch Compile Tests _ |  |  |  |
| +1 💚 | mvninstall | 0m 30s |  | the patch passed |
| +1 💚 | compile | 8m 27s |  | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 💚 | javac | 8m 27s |  | the patch passed |
| +1 💚 | compile | 8m 5s |  | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 💚 | javac | 8m 5s |  | the patch passed |
| +1 💚 | blanks | 0m 0s |  | The patch has no blanks issues. |
| +1 💚 | mvnsite | 0m 52s |  | the patch passed |
| +1 💚 | javadoc | 0m 43s |  | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 💚 | javadoc | 0m 35s |  | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 💚 | shadedclient | 22m 53s |  | patch has no errors when building and testing our client artifacts. |
|  | _ Other Tests _ |  |  |  |
| +1 💚 | unit | 16m 21s |  | hadoop-common in the patch passed. |
| +1 💚 | asflicense | 0m 40s |  | The patch does not generate ASF License warnings. |
|  |  | 130m 58s |  |  |
| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6601/1/artifact/out/Dockerfile |
| GITHUB PR | #6601 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
| uname | Linux d18f3c0f77bc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 02ca7f4 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6601/1/testReport/ |
| Max. process+thread count | 2791 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6601/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

@steveloughran
Contributor Author

@HarshitGupta11 @mukund-thakur @ahmarsuhail reviews, please? Targeting 3.4.1.

@virajjasani
Contributor

Change looks good. Until we remove the fs.s3a settings from core-default, I wonder if we should do one round of comparison between the s3a Constants and core-default (as a follow-up JIRA)?

ahmarsuhail
Contributor

@ahmarsuhail left a comment

+1, LGTM.

@steveloughran
Contributor Author

@virajjasani I did a scan. Now, the clever thing would be to have a test suite which compares the values so there is never a regression, e.g.

```java
assertDurationEqual(conf, CONNECTION_TIMEOUT, CONNECTION_TIMEOUT_DEFAULT)
```
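A minimal sketch of what such a guard might look like, assuming JUnit 4 and AssertJ. The helper name follows the comment above, while the key and duration shown are illustrative stand-ins for the real entries in `org.apache.hadoop.fs.s3a.Constants`:

```java
// Illustrative sketch, not the PR's code: a regression test that the
// values in core-default.xml match the Java-side defaults.
import static org.assertj.core.api.Assertions.assertThat;

import java.time.Duration;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.junit.Test;

public class TestCoreDefaultConsistency {

  /** Fail if the loaded configuration disagrees with the Java default. */
  private static void assertDurationEqual(Configuration conf,
      String key, Duration javaDefault) {
    // getTimeDuration() only uses the fallback when the key is unset,
    // so any core-default.xml value on the classpath wins here.
    long actualMillis = conf.getTimeDuration(key,
        javaDefault.toMillis(), TimeUnit.MILLISECONDS);
    assertThat(actualMillis)
        .describedAs("core-default value of %s", key)
        .isEqualTo(javaDefault.toMillis());
  }

  @Test
  public void testEstablishTimeoutMatchesCoreDefault() {
    // new Configuration() loads core-default.xml from the classpath.
    Configuration conf = new Configuration();
    // Key and duration are illustrative; the real constants live in
    // org.apache.hadoop.fs.s3a.Constants.
    assertDurationEqual(conf,
        "fs.s3a.connection.establish.timeout", Duration.ofSeconds(30));
  }
}
```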

@steveloughran steveloughran merged commit 095229f into apache:trunk Mar 5, 2024
asfgit pushed a commit that referenced this pull request Mar 5, 2024

This is consistent with the value in the hadoop-aws source code

Contributed by Steve Loughran
dongjoon-hyun added a commit to apache/spark that referenced this pull request Mar 25, 2024
…eout` to 30s if missing

### What changes were proposed in this pull request?

This PR aims to handle HADOOP-19097 from the Apache Spark side. We can remove this when Apache Hadoop `3.4.1` is released.
- apache/hadoop#6601

### Why are the changes needed?

Apache Hadoop shows a warning for its default configuration. This default value issue is fixed in Apache Hadoop 3.4.1.
```
24/03/25 14:46:21 WARN ConfigurationHelper: Option fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 ms instead
```

This change suppresses the Apache Hadoop default warning in a way that is consistent with future Hadoop releases.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs.

Manually.

**BUILD**
```
$ dev/make-distribution.sh -Phadoop-cloud
```

**BEFORE**
```
scala> spark.range(10).write.mode("overwrite").orc("s3a://express-1-zone--***--x-s3/orc/")
...
24/03/25 15:50:46 WARN ConfigurationHelper: Option fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 ms instead
```

**AFTER**
```
scala> spark.range(10).write.mode("overwrite").orc("s3a://express-1-zone--***--x-s3/orc/")
...(ConfigurationHelper warning is gone)...
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #45710 from dongjoon-hyun/SPARK-47552.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
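
For anyone applying the same workaround to an existing deployment before picking up Hadoop 3.4.1, here is a hedged per-application sketch. The `spark.hadoop.*` prefix forwards the option into the Hadoop configuration; the class and app names are illustrative:

```java
import org.apache.spark.sql.SparkSession;

public final class S3ATimeoutWorkaround {
  public static void main(String[] args) {
    // Forward the Hadoop option via the spark.hadoop.* prefix.
    // 30000 is milliseconds; Hadoop 3.4+ also accepts suffixed
    // values such as "30s". Remove once on Hadoop 3.4.1+.
    SparkSession spark = SparkSession.builder()
        .appName("s3a-timeout-workaround")
        .config("spark.hadoop.fs.s3a.connection.establish.timeout", "30000")
        .getOrCreate();

    // ... job logic ...

    spark.stop();
  }
}
```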
sweisdb pushed a commit to sweisdb/spark that referenced this pull request Apr 1, 2024
…eout` to 30s if missing
