
Commit 69a157f

Add initial version of test runner (#5)
1 parent 72181c9 commit 69a157f

Some content is hidden: large commits have some content hidden by default, so two of the changed files below appear without their file names.

42 files changed, +729 -57 lines changed

.dockerignore

Lines changed: 5 additions & 2 deletions

@@ -1,5 +1,8 @@
 .git/
 .github/
-.bin/run-in-docker.sh
-.bin/run-tests-in-docker.sh
+bin/run-in-docker.sh
+bin/run-tests-in-docker.sh
 tests/
+tests/*/bin/
+tests/*/obj/
+tests/*/build.log

.gitignore

Lines changed: 4 additions & 0 deletions

@@ -1 +1,5 @@
 tests/*/results.json
+tests/*/bin/
+tests/*/obj/
+tests/*/build.log
+tests/*/*.original

Dockerfile

Lines changed: 16 additions & 3 deletions

@@ -1,8 +1,21 @@
-FROM alpine:3.10
+FROM mcr.microsoft.com/dotnet/sdk:5.0.400-alpine3.13-amd64 AS build
+WORKDIR /opt/test-runner
 
-# TODO: install packages required to run the tests
-# RUN apk add --no-cache jq coreutils
+# Pre-install packages for offline usage
+RUN dotnet new console --no-restore
+RUN dotnet add package Microsoft.NET.Test.Sdk -v 16.8.3
+RUN dotnet add package xunit -v 2.4.1
+RUN dotnet add package xunit.runner.visualstudio -v 2.4.3
+RUN dotnet add package Exercism.Tests -v 0.1.0-beta1
 
+FROM mcr.microsoft.com/dotnet/sdk:5.0.400-alpine3.13-amd64 AS runtime
 WORKDIR /opt/test-runner
+
+RUN apk add bash jq
+
+ENV DOTNET_NOLOGO=true
+ENV DOTNET_CLI_TELEMETRY_OPTOUT=true
+
+COPY --from=build /root/.nuget/packages/ /root/.nuget/packages/
 COPY . .
 ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
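
The two-stage Dockerfile above warms the NuGet cache in the build stage and copies it into the runtime image so that restoring the test projects works without network access. A quick way to sanity-check that cache locally, sketched under the assumption that the image is tagged exercism/vbnet-test-runner as in the scripts below:

# Build the image and list the pre-installed NuGet package cache
docker build --rm -t exercism/vbnet-test-runner .
docker run --rm --entrypoint sh exercism/vbnet-test-runner -c 'ls /root/.nuget/packages'
# Expect entries such as xunit, xunit.runner.visualstudio,
# microsoft.net.test.sdk and exercism.tests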

README.md

Lines changed: 2 additions & 19 deletions

@@ -1,23 +1,6 @@
-# Exercism Test Runner Template
+# Exercism Visual Basic Test Runner
 
-This repository is a [template repository](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-template-repository) for creating [test runners][test-runners] for [Exercism][exercism] tracks.
-
-## Using the Test Runner Template
-
-1. Ensure that your track has not already implemented a test runner. If there is, there will be a `https://github.com/exercism/<track>-test-runner` repository (i.e. if your track's slug is `python`, the test runner repo would be `https://github.com/exercism/python-test-runner`)
-2. Follow [GitHub's documentation](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-from-a-template) for creating a repository from a template repository
-- Name your new repository based on your language track's slug (i.e. if your track is for Python, your test runner repo name is `python-test-runner`)
-3. Remove this [Exercism Test Runner Template](#exercism-test-runner-template) section from the `README.md` file
-4. Build the test runner, conforming to the [Test Runner interface specification](https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md).
-- Update the files to match your track's needs. At the very least, you'll need to update `bin/run.sh`, `Dockerfile` and the test solutions in the `tests` directory
-- Tip: look for `TODO:` comments to point you towards code that need updating
-- Tip: look for `OPTIONAL:` comments to point you towards code that _could_ be useful
-
-Once you're happy with your test runner, [open an issue on the exercism/automated-tests repo](https://github.com/exercism/automated-tests/issues/new?assignees=&labels=&template=new-test-runner.md&title=%5BNew+Test+Runner%5D+) to request an official test runner repository for your track.
-
-# Exercism TRACK_NAME_HERE Test Runner
-
-The Docker image to automatically run tests on TRACK_NAME_HERE solutions submitted to [Exercism].
+The Docker image to automatically run tests on Visual Basic solutions submitted to [Exercism].
 
 ## Run the test runner
 

bin/run-in-docker.sh

Lines changed: 2 additions & 2 deletions

@@ -30,7 +30,7 @@ output_dir="${3%/}"
 mkdir -p "${output_dir}"
 
 # Build the Docker image
-docker build --rm -t exercism/test-runner .
+docker build --rm -t exercism/vbnet-test-runner .
 
 # Run the Docker image using the settings mimicking the production environment
 docker run \
@@ -40,4 +40,4 @@ docker run \
 --mount type=bind,src="${input_dir}",dst=/solution \
 --mount type=bind,src="${output_dir}",dst=/output \
 --mount type=tmpfs,dst=/tmp \
-exercism/test-runner "${slug}" /solution /output
+exercism/vbnet-test-runner "${slug}" /solution /output
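
For reference, a usage sketch of the script above, assuming the <slug> <input_dir> <output_dir> argument order that bin/run.sh also uses (the solution and output paths are illustrative):

# Run the test runner for the leap exercise against a local solution
./bin/run-in-docker.sh leap ./solution/ ./output/
cat ./output/results.json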

bin/run-tests-in-docker.sh

Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@
 # ./bin/run-tests-in-docker.sh
 
 # Build the Docker image
-docker build --rm -t exercism/test-runner .
+docker build --rm -t exercism/vbnet-test-runner .
 
 # Run the Docker image using the settings mimicking the production environment
 docker run \
@@ -24,4 +24,4 @@ docker run \
 --mount type=tmpfs,dst=/tmp \
 --workdir /opt/test-runner \
 --entrypoint /opt/test-runner/bin/run-tests.sh \
-exercism/test-runner
+exercism/vbnet-test-runner

bin/run-tests.sh

Lines changed: 24 additions & 8 deletions

@@ -19,24 +19,40 @@ for test_dir in tests/*; do
 test_dir_path=$(realpath "${test_dir}")
 results_file_path="${test_dir_path}/results.json"
 expected_results_file_path="${test_dir_path}/expected_results.json"
+expected_results_original_file_path="${expected_results_file_path}.original"
+tmp_results_file_path="/tmp/results.json"
 
 bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"
 
-# OPTIONAL: Normalize the results file
-# If the results.json file contains information that changes between
-# different test runs (e.g. timing information or paths), you should normalize
-# the results file to allow the diff comparison below to work as expected
-# sed -i -E \
-# -e 's/Elapsed time: [0-9]+\.[0-9]+ seconds//g' \
-# -e "s~${test_dir_path}~/solution~g" \
-# "${results_file_path}"
+# Normalize the results file
+sed -i -E \
+-e 's/Duration: [0-9]+ ms//g' \
+-e 's/ \[[0-9]+ ms\]//g' \
+-e "s~${test_dir_path}~/solution~g" \
+"${results_file_path}"
+
+# TODO: this is a temporary fix around the fact that tests are not returned in order
+# and the .message property can thus not be checked
+if [ "${test_dir_name}" == "example-all-fail" ] ||
+[ "${test_dir_name}" == "example-partial-fail" ]; then
+cp "${expected_results_file_path}" "${expected_results_original_file_path}"
+actual_message=$(jq -r '.message' "${results_file_path}")
+jq --arg m "${actual_message}" '.message = $m' "${expected_results_original_file_path}" > "${tmp_results_file_path}" && mv "${tmp_results_file_path}" "${expected_results_file_path}"
+fi
 
 echo "${test_dir_name}: comparing results.json to expected_results.json"
 diff "${results_file_path}" "${expected_results_file_path}"
 
 if [ $? -ne 0 ]; then
 exit_code=1
 fi
+
+# TODO: this is a temporary fix around the fact that tests are not returned in order
+# and the .message property can thus not be checked
+if [ "${test_dir_name}" == "example-all-fail" ] ||
+[ "${test_dir_name}" == "example-partial-fail" ]; then
+mv "${expected_results_original_file_path}" "${expected_results_file_path}"
+fi
 done
 
 exit ${exit_code}
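
Because the per-test messages are not returned in a stable order, the workaround above copies expected_results.json aside, overwrites its .message with the actual message before diffing, and restores the original afterwards. A minimal sketch of the jq substitution in isolation, with hypothetical file contents:

# Hypothetical inputs, only to illustrate the .message replacement:
#   results.json          -> {"message": "2 tests failed"}
#   expected_results.json -> {"message": "PLACEHOLDER"}
actual_message=$(jq -r '.message' results.json)
jq --arg m "${actual_message}" '.message = $m' expected_results.json
# Prints a copy of expected_results.json whose .message is now "2 tests failed"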

bin/run.sh

Lines changed: 29 additions & 17 deletions

@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash
 
 # Synopsis:
 # Run the test runner on a solution.
@@ -24,36 +24,48 @@ fi
 slug="$1"
 input_dir="${2%/}"
 output_dir="${3%/}"
+exercise=$(echo "${slug}" | sed -r 's/(^|-)([a-z])/\U\2/g')
+tests_file="${input_dir}/$(jq -r '.files.test[0]' "${input_dir}/.meta/config.json")"
+tests_file_original="${tests_file}.original"
 results_file="${output_dir}/results.json"
 
 # Create the output directory if it doesn't exist
 mkdir -p "${output_dir}"
 
 echo "${slug}: testing..."
 
+cp "${tests_file}" "${tests_file_original}"
+
+# Unskip tests
+sed -i -E 's/Skip *:= *"Remove this Skip property to run this test"//' "${tests_file}"
+
+pushd "${input_dir}" > /dev/null
+
+dotnet restore > /dev/null
+
 # Run the tests for the provided implementation file and redirect stdout and
 # stderr to capture it
-# TODO: Replace 'RUN_TESTS_COMMAND' with the command to run the tests
-test_output=$(RUN_TESTS_COMMAND 2>&1)
+test_output=$(dotnet test --no-restore 2>&1)
+exit_code=$?
+
+popd > /dev/null
+
+# Restore the original file
+mv -f "${tests_file_original}" "${tests_file}"
 
 # Write the results.json file based on the exit code of the command that was
 # just executed that tested the implementation file
-if [ $? -eq 0 ]; then
+if [ ${exit_code} -eq 0 ]; then
 jq -n '{version: 1, status: "pass"}' > ${results_file}
 else
-# OPTIONAL: Sanitize the output
-# In some cases, the test output might be overly verbose, in which case stripping
-# the unneeded information can be very helpful to the student
-# sanitized_test_output=$(printf "${test_output}" | sed -n '/Test results:/,$p')
-
-# OPTIONAL: Manually add colors to the output to help scanning the output for errors
-# If the test output does not contain colors to help identify failing (or passing)
-# tests, it can be helpful to manually add colors to the output
-# colorized_test_output=$(echo "${test_output}" \
-# | GREP_COLOR='01;31' grep --color=always -E -e '^(ERROR:.*|.*failed)$|$' \
-# | GREP_COLOR='01;32' grep --color=always -E -e '^.*passed$|$')
-
-jq -n --arg output "${test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
+# Sanitize the output
+if grep -q "matched the specified pattern" <<< "${test_output}" ; then
+sanitized_test_output=$(printf "${test_output}" | sed -n -E -e '1,/matched the specified pattern.$/!p')
+else
+sanitized_test_output="${test_output}"
+fi
+
+jq -n --arg output "${sanitized_test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
 fi
 
 echo "${slug}: done"

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+Public Module Leap
+Public Function IsLeapYear(ByVal year As Integer) As Boolean
+Return year Mod 400 = 0 OrElse (year Mod 100 <> 0 AndAlso year Mod 4 = 0)
+End Function
+End Module

Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+{
+"blurb": "Given a year, report if it is a leap year.",
+"authors": [
+"ch020"
+],
+"contributors": [
+"axtens"
+],
+"files": {
+"solution": [
+"Leap.vb"
+],
+"test": [
+"LeapTests.vb"
+],
+"example": [
+".meta/Example.vb"
+]
+},
+"source": "JavaRanch Cattle Drive, exercise 3",
+"source_url": "http://www.javaranch.com/leap.jsp"
+}
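
The .files.test entry in this config is what bin/run.sh queries to find the tests file to unskip. For illustration, run from the exercise directory:

# Resolve the test file the same way bin/run.sh does
jq -r '.files.test[0]' .meta/config.json
# -> LeapTests.vb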
