com.amazonaws.SdkClientException: Unable to load credentials from service endpoint #2521
Comments
Possibly related to other timeout-related issues (e.g. #2365), but those use the EC2 metadata endpoint, which is different. I can't find it documented anywhere whether aws-sdk-java supports the ECS endpoint, but it may be related to that issue.
Hi @cbcoutinho thank you for the detailed report.
It looks like ContainerCredentialsProvider is not in the default list of credential providers of org.apache.hadoop.fs.s3a.AWSCredentialProviderList. Is that the provider that you were expecting to pick up the credentials? I'm sorry, I'm not super familiar with ECS environments.
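For readers hitting the same error: outside of Hadoop's provider list, the v1 Java SDK can talk to the ECS credentials endpoint on its own. A minimal sketch of that check, assuming AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set inside the container (as the local-endpoints setup described in this issue does), could look like this:

```java
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.ContainerCredentialsProvider;

public class EcsCredentialsCheck {
    public static void main(String[] args) {
        // Reads AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and fetches temporary
        // credentials from the container credentials endpoint at that path.
        ContainerCredentialsProvider provider = new ContainerCredentialsProvider();
        AWSCredentials credentials = provider.getCredentials();
        System.out.println("Resolved access key: " + credentials.getAWSAccessKeyId());
    }
}
```

If this resolves credentials while the Hadoop/Spark job does not, the problem is in which providers S3A is configured to try, not in the SDK itself.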
Hi @debora-ito
Thanks for the tip, I had missed the list of default providers in the stack trace. That helped point me in the right direction.
I learned that `hadoop-aws` doesn't include all available providers by default, and that it's possible to dynamically add them at runtime using some configuration properties [0].
I'm going to try to declare the ECS provider at runtime and see if that solves my problem.
This doesn't appear to be a Java SDK problem, but rather a Hadoop-specific one related to its choice of default providers. Feel free to close.
[0]
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Using_Session_Credentials_with_TemporaryAWSCredentialsProvider
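For anyone following along, setting that property programmatically on the Hadoop configuration looks roughly like the sketch below (based on the documentation linked above; the bucket name and key are placeholders):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AEcsCredentialsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell S3A to use the ECS container credentials provider from the v1 SDK
        // instead of its default provider list.
        conf.set("fs.s3a.aws.credentials.provider",
                "com.amazonaws.auth.ContainerCredentialsProvider");
        // "my-bucket" and "some/prefix" are placeholders.
        FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
        System.out.println(fs.exists(new Path("s3a://my-bucket/some/prefix/")));
    }
}
```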
On Tue, Mar 9, 2021, 20:09 Debora N. Ito <[email protected]> wrote:
Hi @cbcoutinho thank you for the detailed report.
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Unable to load credentials from service endpoint
at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:159)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1166)
It looks like ContainerCredentialsProvider is not in the default list of credential providers of org.apache.hadoop.fs.s3a.AWSCredentialProviderList. Is that the provider that you were expecting to pick up the credentials? I'm sorry, I'm not super familiar with ECS environments.
I'm glad I could help!
COMMENT VISIBILITY WARNING: Comments on closed issues are hard for our team to see.
For completeness, this was the additional flag that I needed when invoking spark jobs in my local cluster: --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.ContainerCredentialsProvider
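If someone needs the same thing from code instead of the command line, the programmatic equivalent would be roughly the following sketch (the app name and S3 path are placeholders, and the master URL is expected to come from spark-submit):

```java
import org.apache.spark.sql.SparkSession;

public class EcsCredentialsSparkJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ecs-credentials-example")  // placeholder app name
                // Same setting as the --conf flag above; "spark.hadoop.*" keys are
                // copied into the Hadoop configuration used by S3A.
                .config("spark.hadoop.fs.s3a.aws.credentials.provider",
                        "com.amazonaws.auth.ContainerCredentialsProvider")
                .getOrCreate();

        // "s3a://my-bucket/some/path/" is a placeholder path.
        spark.read().parquet("s3a://my-bucket/some/path/").show();

        spark.stop();
    }
}
```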
I'm trying to set up a small Spark cluster using docker-compose, and vending my credentials to each of the containers via the ECS Task Metadata Endpoint. This is provided by another docker container using the https://github.com/awslabs/amazon-ecs-local-container-endpoints image.
Containers are able to cURL the endpoint (169.254.170.2/creds), and the env vars are respected by other SDKs such as python/boto3, but I can't seem to get the spark containers to reach the endpoint. I've tried using the standard hadoop-aws jars as well as the latest 1.11.x versions of aws-sdk-java to no avail.
Describe the bug
The spark containers that I'm using to query some data on S3 locally are erroring out due to missing credentials. The following spark-shell command works on EMR clusters, but I'm trying to specifically run this locally using docker-compose, and it seems like the aws-sdk-java doesn't respect the ECS metadata endpoint. The endpoint seems to be ignored or working incorrectly for Java - the python SDK (boto3) works as expected.
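A JVM-side counterpart of the cURL check mentioned above (no AWS SDK involved), useful for ruling out basic network reachability from the Spark containers, might look like this sketch, assuming Java 11+ for java.net.http:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CredsEndpointProbe {
    public static void main(String[] args) throws Exception {
        // Same endpoint the containers cURL: the local ECS credentials endpoint
        // served by the amazon-ecs-local-container-endpoints container.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://169.254.170.2/creds"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());  // temporary credentials as JSON
    }
}
```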
Expected Behavior
The credentials available via the ECS task metadata endpoint should be usable by Java applications.
Current Behavior
After the spark.read... job is started, there is a considerable hang of about 10s or more before the entire process fails. I'm not sure if the problem is with the ECS endpoints or if it's related to the timeout itself. The awscli doesn't suffer from the same timeout.
Steps to Reproduce
Start the various docker containers using docker-compose, and then launch a spark-shell from within either the spark-master or spark-worker containers:
Possible Solution
None, yet
Context
Trying to use ECS endpoints in a docker-compose setting
Your Environment