
Check EKS guide for registry validity #8855


Closed
mrsimonemms opened this issue Mar 17, 2022 · 1 comment
Labels
meta: stale (This issue/PR is stale and will be closed soon)
team: delivery (Issue belongs to the self-hosted team)

Comments

@mrsimonemms
Contributor

Bug description

The evidence for this is entirely anecdotal. This is more an investigation ticket than something with a clearly defined outcome.

Request createWorkspace failed with message: 13 INTERNAL: cannot resolve workspace image: hostname required
Unknown Error: { "code": -32603 }

In Discord, we have seen a steady increase in the number of people reporting the above error message when using EKS. This issue is usually the result of a connectivity failure between the workspace and the container registry - most likely either incorrect credentials or a TLS certificate error.
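
A quick way to narrow this down is to exercise the registry endpoint directly. This is only a sketch - the hostname, username and password below are placeholders, not values from the guide:

# Probe the Docker Registry v2 API base endpoint.
# A 200 response means the TLS chain and the credentials are both accepted;
# a handshake failure points at the certificate, a 401 at the secret.
curl -v -u "<username>:<password>" https://<registry-host>/v2/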

The AWS container registry (ECR) deviates from the Docker Registry v2 API spec - the spec says that pushing to a repository that doesn't exist should create it, which ECR does not do - so we use the internal registry with S3 storage behind it.
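
For context, if ECR were used directly, every repository would have to be created before the first push. A rough sketch of what that manual step would look like (the repository name and region are illustrative, not from the guide):

# ECR rejects pushes to repositories that do not yet exist,
# so each one would have to be created up front:
aws ecr create-repository \
  --repository-name workspace-images \
  --region eu-west-1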

When I ask users to comment out containerRegistry.s3storage, the threads are usually abandoned, which implies that this solves the issue.

containerRegistry:
  inCluster: true
  s3storage:
    bucket: bucket-name
    certificate:
      kind: secret
      name: object-storage-gitpod-token
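
For reference, the change I usually suggest amounts to disabling the S3 backing, so that the in-cluster registry presumably falls back to its default storage backend:

containerRegistry:
  inCluster: true
  # s3storage:
  #   bucket: bucket-name
  #   certificate:
  #     kind: secret
  #     name: object-storage-gitpod-token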

Steps to reproduce

This task is to investigate the following:

  1. Is the guide providing the correct secrets?
  2. Is the S3 storage in the Installer correctly implemented?

If the answer to either of those is no, the task then becomes the work to fix those issues.

If both are yes, it's likely a question of documentation in the EKS guide - perhaps we're not highlighting the purpose of these credentials, and users are providing the wrong credentials or the wrong IAM permissions. This process SHOULD be automated, but it may be that there's an undocumented step that's already taken place in the Gitpod AWS account that we need to communicate to users.
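
A concrete first check when triaging these reports is to confirm that the secret referenced in the config above actually contains the credentials the installer expects. A sketch, assuming the Gitpod namespace is gitpod and the key name is accessKeyId - both assumptions to verify against the guide:

# Inspect which keys the object-storage secret actually contains
kubectl -n gitpod get secret object-storage-gitpod-token -o yaml

# Decode one value to confirm it holds the expected access key ID
kubectl -n gitpod get secret object-storage-gitpod-token \
  -o jsonpath='{.data.accessKeyId}' | base64 -d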

Workspace affected

No response

Expected behavior

No response

Example repository

No response

Anything else?

No response

@stale

stale bot commented Jun 19, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the meta: stale label Jun 19, 2022
stale bot closed this as completed Jul 10, 2022