
test: add marker for running tests in subprocess #3383


Merged: 1 commit merged into DataDog:1.x from tests/run-marker on Mar 21, 2022

Conversation

@P403n1x87 (Contributor) commented Mar 7, 2022

The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture, allowing actual
Python code to be written and tested instead of literal strings.
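
For illustration, a minimal sketch of what a marked test could look like, inferred from the marker kwargs (`env`, `out`, `status`) read by the conftest helper quoted later in this thread; the test and its environment variable are hypothetical:

```python
import pytest


# Hypothetical example: the function body is extracted and executed in a
# fresh Python interpreter; env/out/status mirror the kwargs that the
# conftest helper reads from the marker.
@pytest.mark.subprocess(env=dict(MY_FLAG="1"), out="hello\n", status=0)
def test_prints_hello():
    import os

    assert os.environ["MY_FLAG"] == "1"
    print("hello")
```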

Checklist

  • Added to the correct milestone.
  • Tests provided or description of manual testing performed is included in the code or PR.
  • Library documentation is updated.
  • Corp site documentation is updated (link to the PR).

@P403n1x87 added the changelog/no-changelog label (A changelog entry is not required for this PR) on Mar 7, 2022
@brettlangdon (Member) commented:

we'd probably want a ddtrace-run variant as well

ability to set/override env variables would be useful
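
Both suggestions line up with the `ddtrace_run` and `env` kwargs handled by the conftest helper quoted further down. A hypothetical invocation, for illustration only:

```python
import pytest


# Hypothetical: per the helper below, ddtrace_run=True prepends the
# ddtrace-run wrapper to the subprocess command line, and env entries
# override the inherited environment. The assertion assumes
# ddtrace.config.service reflects DD_SERVICE.
@pytest.mark.subprocess(ddtrace_run=True, env=dict(DD_SERVICE="my-service"))
def test_service_name():
    import ddtrace

    assert ddtrace.config.service == "my-service"
```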

@P403n1x87 force-pushed the tests/run-marker branch 2 times, most recently from a202ce0 to e1cc496 on March 8, 2022 14:45
@P403n1x87 marked this pull request as ready for review March 8, 2022 14:51
@P403n1x87 requested a review from a team as a code owner March 8, 2022 14:51
@P403n1x87 changed the title from "test: add run marker for running tests in subprocess" to "test: add marker for running tests in subprocess" Mar 8, 2022
@brettlangdon (Member) left a comment:

do you have an example of a failure? what they look like?

other than figuring out where we can put some usage docs for this, this is cool.

@P403n1x87 (Contributor, Author) commented:

> do you have an example of a failure? what they look like?

```
=============================== FAILURES ===============================
_____________________ test_writer_env_configuration _____________________

item = <Function test_writer_env_configuration>, params = None

    def run_function_from_file(item, params=None):
        file, _, func = item.location
        marker = item.get_closest_marker("subprocess")
    
        args = marker.kwargs.get("args", [])
        file_index = 1
        args.insert(0, None)
        args.insert(0, sys.executable)
        if marker.kwargs.get("ddtrace_run", False):
            file_index += 1
            args.insert(0, "ddtrace-run")
    
        env = os.environ.copy()
        env.update(marker.kwargs.get("env", {}))
        if params is not None:
            env.update(params)
    
        expected_status = marker.kwargs.get("status", 0)
        expected_out = marker.kwargs.get("out", "").encode("utf-8")
        expected_err = marker.kwargs.get("err", "").encode("utf-8")
    
        with open(file) as f:
            t = ast.parse(f.read())
            for node in t.body:
                if isinstance(node, ast.FunctionDef) and node.name == func:
                    t.body = node.body
                    break
    
        with NamedTemporaryFile(mode="wb", suffix=".pyc") as fp:
            dump_code_to_file(compile(t, file, "exec"), fp.file)
    
            start = time.time()
            args[file_index] = fp.name
            out, err, status, _ = call_program(*args, env=env)
            end = time.time()
            excinfo = None
    
            if status != expected_status:
                excinfo = AssertionError(
                    "Expected status %s, got %s.\n=== Captured STDERR ===\n%s=== End of captured STDERR ==="
                    % (expected_status, status, err.decode("utf-8"))
                )
            elif out != expected_out:
                excinfo = AssertionError("Expected out %s, got %s" % (expected_out, out))
            elif err != expected_err:
                excinfo = AssertionError("Expected err %s, got %s" % (expected_err, err))
    
            if PY2 and excinfo is not None:
                try:
>                   raise excinfo
E                   AssertionError: Expected status 0, got 1.
E                   === Captured STDERR ===
E                   Traceback (most recent call last):
E                     File "tests/integration/test_integration.py", line 666, in <module>
E                       assert ddtrace.tracer._writer._interval == 5.0
E                   AssertionError
E                   === End of captured STDERR ===

tests/conftest.py:175: AssertionError
```
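
As an aside for readers of the traceback above: the key step in `run_function_from_file` is promoting the marked function's body to module level before compiling. A toy illustration of just that step, with a made-up source string:

```python
import ast

src = "def test_x():\n    print('hi')\n"
tree = ast.parse(src)

# Find the function node and make its statements the module body, as
# run_function_from_file does for the marked test function.
fn = next(n for n in tree.body if isinstance(n, ast.FunctionDef))
tree.body = fn.body

exec(compile(tree, "<test>", "exec"))  # prints: hi
```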

@P403n1x87 force-pushed the tests/run-marker branch 2 times, most recently from 403dddd to c24b15b on March 9, 2022 11:49
@P403n1x87 force-pushed the tests/run-marker branch 2 times, most recently from 0c89660 to 8dfa5dc on March 9, 2022 12:20
@brettlangdon (Member) left a comment:

cool, just a few open questions.

we can handle at any point as a follow-up, but I'd like to see the "provide a function, execute in a subprocess and get out/err/status back" part as a helper function. definitely useful for when we need to do anything fancy with out/err outside of direct comparisons
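
A sketch of the kind of helper being described; the name, signature, and use of `subprocess.run` here are assumptions for illustration, not the PR's actual implementation:

```python
import inspect
import os
import subprocess
import sys
import textwrap


def run_function_in_subprocess(func, env=None):
    """Run a plain (module-level, undecorated) function in a fresh Python
    interpreter and return (stdout, stderr, returncode) for custom checks."""
    # Re-emit the function's source followed by a call to it, so the code
    # executes with full process isolation.
    source = textwrap.dedent(inspect.getsource(func)) + "\n%s()\n" % func.__name__
    # Inherit the parent environment and apply overrides, much as the
    # conftest helper does with os.environ.copy() + update().
    proc = subprocess.run(
        [sys.executable, "-c", source],
        capture_output=True,
        env=dict(os.environ, **(env or {})),
    )
    return proc.stdout, proc.stderr, proc.returncode
```

With out/err/status in hand, a test could then match patterns or parse output rather than relying on the marker's direct equality checks.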

@brettlangdon previously approved these changes Mar 9, 2022
@Kyle-Verhoog (Member) left a comment:

This is super cool and overall I like the idea but am a little concerned about the magic, particularly for debugging if/when something goes wrong and for new users to the library.

I'm not sure it's intuitive to a new user if/what/how the code runs in the subprocess (scope, environment used). Having the code in a string indicates that it's scoped outside the current program with a given environment.

It's really nice that we can lint/format subprocess test code but I think we'll need to `# noqa` quite often since there could be variable scoping and import issues.

Also, by having to parse and compile the code we distance ourselves from running code as it would run in the real world (something might run fine in this but not in reality) which was one of the primary motivations behind subprocess testing.

@P403n1x87 (Contributor, Author) commented:

@Kyle-Verhoog thanks for your 👀 on this and the comments!

> I'm not sure it's intuitive to a new user if/what/how the code runs in the subprocess (scope, environment used). Having the code in a string indicates that it's scoped outside the current program with a given environment.

Perhaps we can improve the documentation? I think in general each test case has some implicit assumptions about scope and environment as they ought to be independent from each other. But I agree this is taking these guarantees a step further.

> It's really nice that we can lint/format subprocess test code but I think we'll need to `# noqa` quite often since there could be variable scoping and import issues.

IMO this actually gives an idea of the kind of issues actual code used by customers would produce. Perhaps if we find that we use a lot of noqa then the API needs some rethinking?

> Also, by having to parse and compile the code we distance ourselves from running code as it would run in the real world (something might run fine in this but not in reality) which was one of the primary motivations behind subprocess testing.

I'm not sure I agree with this. In the end Python does the same thing when it processes a .py file. I believe we are effectively running the same steps the Python interpreter would.
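
For reference, the helper's steps map closely onto what CPython itself does when it runs a .py file; a minimal sketch (the file name is hypothetical):

```python
import ast

# In effect what the interpreter does with a .py file:
# read -> parse -> compile to a code object -> execute at module level.
source = open("script.py").read()
exec(compile(ast.parse(source), "script.py", "exec"))
```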

@Kyle-Verhoog (Member) commented:

> Perhaps we can improve the documentation? I think in general each test case has some implicit assumptions about scope and environment as they ought to be independent from each other. But I agree this is taking these guarantees a step further.

True, I'm just a tad worried about pushing it further for new-comers and contributors.

> IMO this actually gives an idea of the kind of issues actual code used by customers would produce. Perhaps if we find that we use a lot of `noqa` then the API needs some rethinking?

Yeah I think my concern was around shadowing of variables and such which would be detected as linting issues but don't apply because of the code executing in isolation in another process.

> I'm not sure I agree with this. In the end Python does the same thing when it processes a .py file. I believe we are effectively running the same steps the Python interpreter would.

Yeah on second look the parsing and compiling isn't too complex and given how the function code is isolated I think I'm convinced that this isn't an issue 👍

@Kyle-Verhoog (Member) left a comment:

Just my two comments left to address, otherwise lgtm!!

Commit message:

The `run` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.
mergify bot merged commit c437ad2 into DataDog:1.x on Mar 21, 2022
@majorgreys (Contributor) commented:

@Mergifyio backport 1.0

mergify bot pushed a commit that referenced this pull request Apr 1, 2022
The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

## Checklist
- [ ] Added to the correct milestone.
- [ ] Tests provided or description of manual testing performed is included in the code or PR.
- [ ] Library documentation is updated.
- [ ] [Corp site](https://github.com/DataDog/documentation/) documentation is updated (link to the PR).

(cherry picked from commit c437ad2)
mergify bot (Contributor) commented Apr 1, 2022

> backport 1.0

✅ Backports have been created

Kyle-Verhoog pushed a commit that referenced this pull request Apr 1, 2022
The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

## Checklist
- [ ] Added to the correct milestone.
- [ ] Tests provided or description of manual testing performed is included in the code or PR.
- [ ] Library documentation is updated.
- [ ] [Corp site](https://github.com/DataDog/documentation/) documentation is updated (link to the PR).

(cherry picked from commit c437ad2)

Co-authored-by: Gabriele N. Tornetta <[email protected]>
@Kyle-Verhoog (Member) commented:

@Mergifyio backport 0.x

mergify bot pushed a commit that referenced this pull request Jul 7, 2022
The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

## Checklist
- [ ] Added to the correct milestone.
- [ ] Tests provided or description of manual testing performed is included in the code or PR.
- [ ] Library documentation is updated.
- [ ] [Corp site](https://github.com/DataDog/documentation/) documentation is updated (link to the PR).

(cherry picked from commit c437ad2)

# Conflicts:
#	tests/contrib/httpx/test_httpx.py
#	tests/integration/test_integration.py
#	tests/tracer/test_encoders.py
mergify bot (Contributor) commented Jul 7, 2022

> backport 0.x

✅ Backports have been created

Kyle-Verhoog added a commit that referenced this pull request Jul 7, 2022

* test: add marker for running tests in subprocess (#3383)

The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

Co-authored-by: Gabriele N. Tornetta <[email protected]>
Co-authored-by: Kyle Verhoog <[email protected]>
@mabdinur (Contributor) commented Oct 27, 2022

@Mergifyio backport 0.61

mergify bot pushed a commit that referenced this pull request Oct 27, 2022
The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

## Checklist
- [ ] Added to the correct milestone.
- [ ] Tests provided or description of manual testing performed is included in the code or PR.
- [ ] Library documentation is updated.
- [ ] [Corp site](https://github.com/DataDog/documentation/) documentation is updated (link to the PR).

(cherry picked from commit c437ad2)

# Conflicts:
#	tests/contrib/httpx/test_httpx.py
#	tests/integration/test_integration.py
#	tests/tracer/test_encoders.py
mergify bot (Contributor) commented Oct 27, 2022

> backport 0.61

✅ Backports have been created

1 similar comment

Kyle-Verhoog pushed a commit that referenced this pull request Oct 31, 2022

* test: add marker for running tests in subprocess (#3383)

The `subprocess` pytest marker can be used to run arbitrary Python code in a Python
subprocess. This is meant to replace the existing fixture to allow actual
Python code to be written and tested, instead of using literal strings.

## Checklist
- [ ] Added to the correct milestone.
- [ ] Tests provided or description of manual testing performed is included in the code or PR.
- [ ] Library documentation is updated.
- [ ] [Corp site](https://github.com/DataDog/documentation/) documentation is updated (link to the PR).

(cherry picked from commit c437ad2)

# Conflicts:
#	tests/contrib/httpx/test_httpx.py
#	tests/integration/test_integration.py
#	tests/tracer/test_encoders.py

* fix conflicts

* Use pytest marker to generate snapshot, set variants


Co-authored-by: Gabriele N. Tornetta <[email protected]>
Co-authored-by: Munir Abdinur <[email protected]>
Co-authored-by: Munir Abdinur <[email protected]>