feat: add end to end tests for features in SAM #1557
Conversation
I think it still needs a bit of work on dealing with failed test executions and on how we are going to bubble up that exception so the user knows what to look for.
            outputs[output.get("OutputKey")] = output.get("OutputValue")
        return outputs

    def delete_stack(self):
Raise an exception if delete fails.
CloudFormation doesn't throw an exception for delete_stack: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cloudformation.html#CloudFormation.Client.delete_stack
I will try to add an additional check to throw an exception when the delete fails.
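A minimal sketch of one way to surface that failure, assuming boto3 is used directly; StackDeletionFailedError is a hypothetical project exception, and the real helper is a method on the test class rather than a standalone function:

import boto3
import botocore.exceptions


class StackDeletionFailedError(Exception):
    """Hypothetical exception for stacks that end up in DELETE_FAILED."""


def delete_stack(stack_name, client=None):
    client = client or boto3.client("cloudformation")
    client.delete_stack(StackName=stack_name)
    try:
        # The waiter raises WaiterError if the stack lands in DELETE_FAILED
        # instead of disappearing, even though delete_stack itself succeeded.
        client.get_waiter("stack_delete_complete").wait(StackName=stack_name)
    except botocore.exceptions.WaiterError as error:
        raise StackDeletionFailedError("Stack {} failed to delete".format(stack_name)) from error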
Can we make the folder end-to-end-tests/test instead of end-to-end-tests/tst? I would prefer not to abbreviate, even in folder names.
from enum import Enum

# Contains all the CloudFormation stack resource statuses
class ResourceStatus(Enum):
What's the value of this being an Enum here?
The ResourceStatus values are constants, so I used an Enum to make that explicit.
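For context, a minimal sketch of such an Enum; the member list below is illustrative, not the exact set from this PR:

from enum import Enum


class ResourceStatus(Enum):
    # A few of the statuses CloudFormation reports for stack resources
    CREATE_IN_PROGRESS = "CREATE_IN_PROGRESS"
    CREATE_COMPLETE = "CREATE_COMPLETE"
    CREATE_FAILED = "CREATE_FAILED"
    DELETE_COMPLETE = "DELETE_COMPLETE"
    UPDATE_COMPLETE = "UPDATE_COMPLETE"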
What Python version are we running these tests in? Can we make it so they only run in Python 3? It will make our lives much, much easier to not port more code into Python 3 after Python 2 finally dies.
Initially I was running the tests in Python 2. I have updated the code to run the tests in Python 3.
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      InlineCode: |
Should one of these tests not use inline code so we can see how that is set up? I would assume that InlineCode is not going to be the majority of what we do with these tests.
Good idea! I will update one of the tests without InlineCode
Updated the template.yaml for the http_api_cors integration test.
We need to be very careful about exceptions. We should not catch broad ones (if ever), so things like except Exception need to be scoped; catching broadly can hide actual errors and give the tests a false sense of security.
We should avoid raising ValueError or other generic exceptions as well. If Python ends up throwing one of these itself, we may think it is one error when it is a completely different one. This can be very dangerous for integration tests that we rely on for deployments.
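A minimal sketch of the kind of scoped, project-specific exceptions being suggested here; all class names are illustrative:

class EndToEndTestError(Exception):
    """Base class for errors raised by the end-to-end test helpers."""


class StackCreationFailedError(EndToEndTestError):
    """Raised when CloudFormation fails to create the test stack."""


class ResourceNotFoundError(EndToEndTestError):
    """Raised when an expected resource is missing from the created stack."""

Callers can then write except StackCreationFailedError: and know exactly which failure they are handling, while anything unexpected still propagates and fails the test loudly.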
from samtranslator.model.exceptions import InvalidTemplateException


class BaseTest(unittest.TestCase):
You should directly import instead: from unittest import TestCase
Thank you. I will update in the next revision
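For reference, the suggested form is just the direct import (a sketch; the body of BaseTest is unchanged):

from unittest import TestCase


class BaseTest(TestCase):
    """Shared setup/teardown for the end-to-end tests."""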
            waiter.wait(StackName=self.stack_name)
            return stack_id
        except Exception as e:
            raise Exception("Unable to create stack for template {}. Exception: {}".format(template_path, e))
Why re-raise a generic Exception here? This will hide any specific exception we get from CloudFormation.
Removed the generic exception here, and I'm not catching any other exception, as the error from CloudFormation is self-explanatory.
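A sketch of the resulting shape as a standalone helper; the Capabilities list is an assumption, and the PR's actual helper is a method on the test class:

import boto3


def create_stack(stack_name, template_body):
    client = boto3.client("cloudformation")
    # No broad try/except: a failure surfaces as botocore.exceptions.ClientError
    # (or a WaiterError from the wait below), and its message already names the
    # CloudFormation problem, so the test failure stays self-explanatory.
    response = client.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )
    client.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return response["StackId"]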
            )
            return stack_resources
        except Exception as e:
            raise Exception("Unable to list stack resources {}".format(e))
Same. This will hide any useful information within the message of the exception.
Ideally, you should never catch such a broad exception unless you really really need to.
        if len(stack_resource) == 1:
            return stack_resource[0]
        elif len(stack_resource) == 0:
            raise ValueError("No resources found with resource type {}".format(resource_type.value))
Create more specific errors for these. ValueError can be raised for a bunch of different cases, which could mean the caller thinks it's one issue when it's actually a completely different one. For example, maybe the caller would automatically fall back to getting the resource by logical id if MultipleResourceFoundError was raised; right now, he/she would have to inspect the actual error message instead of writing except MultipleResourceFoundError: do_x().
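A sketch of what that suggestion could look like; NoResourceFoundError, MultipleResourceFoundError, and the helper name are illustrative, not the PR's actual API:

class NoResourceFoundError(Exception):
    pass


class MultipleResourceFoundError(Exception):
    pass


def get_resource_by_type(stack_resources, resource_type):
    matches = [r for r in stack_resources if r["ResourceType"] == resource_type.value]
    if not matches:
        raise NoResourceFoundError(resource_type.value)
    if len(matches) > 1:
        raise MultipleResourceFoundError(resource_type.value)
    return matches[0]

A caller can then catch MultipleResourceFoundError and retry the lookup by logical id, without parsing error messages.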
            # there is only one resource per logical id
            return stack_resource[0]
        except Exception:
            raise ValueError("There is no resource in the stack with given logical id: {}".format(resource_logical_id))
Specific errors to raise.
        # out this suffix here. Tests can continue to access and verify the resource using the LogicalId
        # specified in the template
        if stack_resource.get("LogicalResourceId") is None:
            raise ValueError("LogicalResourceId should not be empty for stack resource {}".format(stack_resource))
specific errors
from .helpers.base_test import BaseTest
from .helpers.resources import Resources
from .helpers.resource_types import ResourceTypes
import urllib3
Any reason we chose urllib3 over requests?
No specific reason. I will update the code to use the requests module.
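A sketch of the test call with requests, assuming the endpoint comes from a stack output; the get_stack_outputs helper and the ApiEndpoint output key are hypothetical names:

import requests


class TestHttpApiCors(BaseTest):
    def test_cors_headers(self):
        # get_stack_outputs() and "ApiEndpoint" are placeholders for however the
        # base test class exposes the deployed API's URL.
        api_endpoint = self.get_stack_outputs()["ApiEndpoint"]
        response = requests.get(api_endpoint, timeout=30)
        self.assertEqual(response.status_code, 200)
        self.assertIn("Access-Control-Allow-Origin", response.headers)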
[pytest]
# Fail if coverage falls below 95%
# NOTE: If debug breakpoints aren't working, comment out the code coverage line below
addopts = --cov samtranslator --cov-report term-missing --cov-fail-under 95
Is the coverage still applied in all the Makefile commands we need it to be?
Yes, right now only make pr needs coverage, and --cov is already present in the Makefile for make pr. Currently, we don't have 95% test coverage for the integ tests, so removing these lines was necessary for make e2e-tests to pass.
Thank you for the suggestion, it was really useful. Removed all the generic Exceptions and replaced them with user-defined exceptions wherever applicable.
Closing this as it is outdated and we have been solving it in another way: #1797 is the initial PR (among others).
Issue #, if available:
Description of changes:
Added basic setup for being able to write end to end tests for SAM
Description of how you validated changes:
Ran the end-to-end tests using make e2e-tests
Checklist:
make pr passes
examples/2016-10-31
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.