7 changes: 5 additions & 2 deletions .dockerignore
@@ -1,5 +1,8 @@
.git/
.github/
.bin/run-in-docker.sh
.bin/run-tests-in-docker.sh
bin/run-in-docker.sh
bin/run-tests-in-docker.sh
tests/
tests/*/bin/
tests/*/obj/
tests/*/build.log
4 changes: 4 additions & 0 deletions .gitignore
@@ -1 +1,5 @@
tests/*/results.json
tests/*/bin/
tests/*/obj/
tests/*/build.log
tests/*/*.original
19 changes: 16 additions & 3 deletions Dockerfile
@@ -1,8 +1,21 @@
FROM alpine:3.10
FROM mcr.microsoft.com/dotnet/sdk:5.0.400-alpine3.13-amd64 AS build
WORKDIR /opt/test-runner

# TODO: install packages required to run the tests
# RUN apk add --no-cache jq coreutils
# Pre-install packages for offline usage
RUN dotnet new console --no-restore
RUN dotnet add package Microsoft.NET.Test.Sdk -v 16.8.3
RUN dotnet add package xunit -v 2.4.1
RUN dotnet add package xunit.runner.visualstudio -v 2.4.3
RUN dotnet add package Exercism.Tests -v 0.1.0-beta1

FROM mcr.microsoft.com/dotnet/sdk:5.0.400-alpine3.13-amd64 AS runtime
WORKDIR /opt/test-runner

RUN apk add bash jq

ENV DOTNET_NOLOGO=true
ENV DOTNET_CLI_TELEMETRY_OPTOUT=true

COPY --from=build /root/.nuget/packages/ /root/.nuget/packages/
COPY . .
ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
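The build stage above exists only to populate the NuGet package cache: it scaffolds a throwaway console project, adds the pinned test packages, and the runtime stage then copies /root/.nuget/packages across so `dotnet restore` in bin/run.sh can resolve those versions without network access. A minimal local sanity check might look like this (illustrative commands, not part of this PR):

```bash
# Build the image, then list the pre-seeded NuGet cache inside the runtime stage.
docker build --rm -t exercism/vbnet-test-runner .
docker run --rm --entrypoint ls exercism/vbnet-test-runner /root/.nuget/packages
# Expected to include: microsoft.net.test.sdk, xunit, xunit.runner.visualstudio,
# exercism.tests and their transitive dependencies.
```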
21 changes: 2 additions & 19 deletions README.md
@@ -1,23 +1,6 @@
# Exercism Test Runner Template
# Exercism Visual Basic Test Runner

This repository is a [template repository](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-template-repository) for creating [test runners][test-runners] for [Exercism][exercism] tracks.

## Using the Test Runner Template

1. Ensure that your track has not already implemented a test runner. If there is, there will be a `https://github.com/exercism/<track>-test-runner` repository (i.e. if your track's slug is `python`, the test runner repo would be `https://github.com/exercism/python-test-runner`)
2. Follow [GitHub's documentation](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-from-a-template) for creating a repository from a template repository
- Name your new repository based on your language track's slug (i.e. if your track is for Python, your test runner repo name is `python-test-runner`)
3. Remove this [Exercism Test Runner Template](#exercism-test-runner-template) section from the `README.md` file
4. Build the test runner, conforming to the [Test Runner interface specification](https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md).
- Update the files to match your track's needs. At the very least, you'll need to update `bin/run.sh`, `Dockerfile` and the test solutions in the `tests` directory
- Tip: look for `TODO:` comments to point you towards code that need updating
- Tip: look for `OPTIONAL:` comments to point you towards code that _could_ be useful

Once you're happy with your test runner, [open an issue on the exercism/automated-tests repo](https://github.com/exercism/automated-tests/issues/new?assignees=&labels=&template=new-test-runner.md&title=%5BNew+Test+Runner%5D+) to request an official test runner repository for your track.

# Exercism TRACK_NAME_HERE Test Runner

The Docker image to automatically run tests on TRACK_NAME_HERE solutions submitted to [Exercism].
The Docker image to automatically run tests on Visual Basic solutions submitted to [Exercism].

## Run the test runner

4 changes: 2 additions & 2 deletions bin/run-in-docker.sh
@@ -30,7 +30,7 @@ output_dir="${3%/}"
mkdir -p "${output_dir}"

# Build the Docker image
docker build --rm -t exercism/test-runner .
docker build --rm -t exercism/vbnet-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
@@ -40,4 +40,4 @@ docker run \
--mount type=bind,src="${input_dir}",dst=/solution \
--mount type=bind,src="${output_dir}",dst=/output \
--mount type=tmpfs,dst=/tmp \
exercism/test-runner "${slug}" /solution /output
exercism/vbnet-test-runner "${slug}" /solution /output
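With the image renamed to exercism/vbnet-test-runner, a local run against one of the bundled fixtures could look like this (illustrative; the script's collapsed synopsis defines the exact arguments, which appear to be slug, solution directory, and output directory):

```bash
# Run the leap fixture through the containerised test runner and inspect the result.
./bin/run-in-docker.sh leap tests/example-all-fail/ /tmp/leap-output/
cat /tmp/leap-output/results.json
```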
4 changes: 2 additions & 2 deletions bin/run-tests-in-docker.sh
@@ -13,7 +13,7 @@
# ./bin/run-tests-in-docker.sh

# Build the Docker image
docker build --rm -t exercism/test-runner .
docker build --rm -t exercism/vbnet-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
@@ -24,4 +24,4 @@ docker run \
--mount type=tmpfs,dst=/tmp \
--workdir /opt/test-runner \
--entrypoint /opt/test-runner/bin/run-tests.sh \
exercism/test-runner
exercism/vbnet-test-runner
32 changes: 24 additions & 8 deletions bin/run-tests.sh
@@ -19,24 +19,40 @@ for test_dir in tests/*; do
test_dir_path=$(realpath "${test_dir}")
results_file_path="${test_dir_path}/results.json"
expected_results_file_path="${test_dir_path}/expected_results.json"
expected_results_original_file_path="${expected_results_file_path}.original"
tmp_results_file_path="/tmp/results.json"

bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"

# OPTIONAL: Normalize the results file
# If the results.json file contains information that changes between
# different test runs (e.g. timing information or paths), you should normalize
# the results file to allow the diff comparison below to work as expected
# sed -i -E \
# -e 's/Elapsed time: [0-9]+\.[0-9]+ seconds//g' \
# -e "s~${test_dir_path}~/solution~g" \
# "${results_file_path}"
# Normalize the results file
sed -i -E \
-e 's/Duration: [0-9]+ ms//g' \
-e 's/ \[[0-9]+ ms\]//g' \
-e "s~${test_dir_path}~/solution~g" \
"${results_file_path}"

# TODO: this is a temporary fix around the fact that tests are not returned in order
# and the .message property can thus not be checked
if [ "${test_dir_name}" == "example-all-fail" ] ||
[ "${test_dir_name}" == "example-partial-fail" ]; then
cp "${expected_results_file_path}" "${expected_results_original_file_path}"
actual_message=$(jq -r '.message' "${results_file_path}")
jq --arg m "${actual_message}" '.message = $m' "${expected_results_original_file_path}" > "${tmp_results_file_path}" && mv "${tmp_results_file_path}" "${expected_results_file_path}"
fi

echo "${test_dir_name}: comparing results.json to expected_results.json"
diff "${results_file_path}" "${expected_results_file_path}"

if [ $? -ne 0 ]; then
exit_code=1
fi

# TODO: this is a temporary fix around the fact that tests are not returned in order
# and the .message property can thus not be checked
if [ "${test_dir_name}" == "example-all-fail" ] ||
[ "${test_dir_name}" == "example-partial-fail" ]; then
mv "${expected_results_original_file_path}" "${expected_results_file_path}"
fi
done

exit ${exit_code}
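The sed normalization exists so the diff against expected_results.json stays stable across runs: per-test durations are stripped and the absolute fixture path is rewritten to /solution. Its effect on a typical xUnit output line would be roughly the following (sample input, illustrative only):

```bash
test_dir_path="/opt/test-runner/tests/example-all-fail"
echo "Failed Leap.LeapTests.Year_divisible_by_400_in_leap_year [14 ms] in ${test_dir_path}/LeapTests.vb" \
  | sed -E \
      -e 's/Duration: [0-9]+ ms//g' \
      -e 's/ \[[0-9]+ ms\]//g' \
      -e "s~${test_dir_path}~/solution~g"
# -> "Failed Leap.LeapTests.Year_divisible_by_400_in_leap_year in /solution/LeapTests.vb"
```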
46 changes: 29 additions & 17 deletions bin/run.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env sh
#!/usr/bin/env bash

# Synopsis:
# Run the test runner on a solution.
@@ -24,36 +24,48 @@ fi
slug="$1"
input_dir="${2%/}"
output_dir="${3%/}"
exercise=$(echo "${slug}" | sed -r 's/(^|-)([a-z])/\U\2/g')
tests_file="${input_dir}/$(jq -r '.files.test[0]' "${input_dir}/.meta/config.json")"
tests_file_original="${tests_file}.original"
results_file="${output_dir}/results.json"

# Create the output directory if it doesn't exist
mkdir -p "${output_dir}"

echo "${slug}: testing..."

cp "${tests_file}" "${tests_file_original}"

# Unskip tests
sed -i -E 's/Skip *:= *"Remove this Skip property to run this test"//' "${tests_file}"

pushd "${input_dir}" > /dev/null

dotnet restore > /dev/null

# Run the tests for the provided implementation file and redirect stdout and
# stderr to capture it
# TODO: Replace 'RUN_TESTS_COMMAND' with the command to run the tests
test_output=$(RUN_TESTS_COMMAND 2>&1)
test_output=$(dotnet test --no-restore 2>&1)
exit_code=$?

popd > /dev/null

# Restore the original file
mv -f "${tests_file_original}" "${tests_file}"

# Write the results.json file based on the exit code of the command that was
# just executed that tested the implementation file
if [ $? -eq 0 ]; then
if [ ${exit_code} -eq 0 ]; then
jq -n '{version: 1, status: "pass"}' > ${results_file}
else
# OPTIONAL: Sanitize the output
# In some cases, the test output might be overly verbose, in which case stripping
# the unneeded information can be very helpful to the student
# sanitized_test_output=$(printf "${test_output}" | sed -n '/Test results:/,$p')

# OPTIONAL: Manually add colors to the output to help scanning the output for errors
# If the test output does not contain colors to help identify failing (or passing)
# tests, it can be helpful to manually add colors to the output
# colorized_test_output=$(echo "${test_output}" \
# | GREP_COLOR='01;31' grep --color=always -E -e '^(ERROR:.*|.*failed)$|$' \
# | GREP_COLOR='01;32' grep --color=always -E -e '^.*passed$|$')

jq -n --arg output "${test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
# Sanitize the output
if grep -q "matched the specified pattern" <<< "${test_output}" ; then
sanitized_test_output=$(printf "${test_output}" | sed -n -E -e '1,/matched the specified pattern.$/!p')
else
sanitized_test_output="${test_output}"
fi

jq -n --arg output "${sanitized_test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
fi

echo "${slug}: done"
5 changes: 5 additions & 0 deletions tests/example-all-fail/.meta/Example.vb
@@ -0,0 +1,5 @@
Public Module Leap
Public Function IsLeapYear(ByVal year As Integer) As Boolean
Return year Mod 400 = 0 OrElse (year Mod 100 <> 0 AndAlso year Mod 4 = 0)
End Function
End Module
22 changes: 22 additions & 0 deletions tests/example-all-fail/.meta/config.json
@@ -0,0 +1,22 @@
{
"blurb": "Given a year, report if it is a leap year.",
"authors": [
"ch020"
],
"contributors": [
"axtens"
],
"files": {
"solution": [
"Leap.vb"
],
"test": [
"LeapTests.vb"
],
"example": [
".meta/Example.vb"
]
},
"source": "JavaRanch Cattle Drive, exercise 3",
"source_url": "http://www.javaranch.com/leap.jsp"
}
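bin/run.sh reads this config to locate the test file it needs to un-skip, via `.files.test[0]`; for this fixture that resolves as follows (illustrative):

```bash
jq -r '.files.test[0]' tests/example-all-fail/.meta/config.json
# -> LeapTests.vb
```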
37 changes: 37 additions & 0 deletions tests/example-all-fail/.meta/tests.toml
@@ -0,0 +1,37 @@
# This is an auto-generated file.
#
# Regenerating this file via `configlet sync` will:
# - Recreate every `description` key/value pair
# - Recreate every `reimplements` key/value pair, where they exist in problem-specifications
# - Remove any `include = true` key/value pair (an omitted `include` key implies inclusion)
# - Preserve any other key/value pair
#
# As user-added comments (using the # character) will be removed when this file
# is regenerated, comments can be added via a `comment` key.

[6466b30d-519c-438e-935d-388224ab5223]
description = "year not divisible by 4 in common year"

[ac227e82-ee82-4a09-9eb6-4f84331ffdb0]
description = "year divisible by 2, not divisible by 4 in common year"

[4fe9b84c-8e65-489e-970b-856d60b8b78e]
description = "year divisible by 4, not divisible by 100 in leap year"

[7fc6aed7-e63c-48f5-ae05-5fe182f60a5d]
description = "year divisible by 4 and 5 is still a leap year"

[78a7848f-9667-4192-ae53-87b30c9a02dd]
description = "year divisible by 100, not divisible by 400 in common year"

[9d70f938-537c-40a6-ba19-f50739ce8bac]
description = "year divisible by 100 but not by 3 is still not a leap year"

[42ee56ad-d3e6-48f1-8e3f-c84078d916fc]
description = "year divisible by 400 is leap year"

[57902c77-6fe9-40de-8302-587b5c27121e]
description = "year divisible by 400 but not by 125 is still a leap year"

[c30331f6-f9f6-4881-ad38-8ca8c12520c1]
description = "year divisible by 200, not divisible by 400 in common year"
5 changes: 5 additions & 0 deletions tests/example-all-fail/Leap.vb
@@ -0,0 +1,5 @@
Public Module Leap
Public Function IsLeapYear(ByVal year As Integer) As Boolean
Return year Mod 2 = 1
End Function
End Module
13 changes: 13 additions & 0 deletions tests/example-all-fail/Leap.vbproj
@@ -0,0 +1,13 @@
<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net5.0</TargetFramework>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.8.3" />
<PackageReference Include="xunit" Version="2.4.1" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.3" />
</ItemGroup>

</Project>
48 changes: 48 additions & 0 deletions tests/example-all-fail/LeapTests.vb
@@ -0,0 +1,48 @@
Imports Xunit

Public Class LeapTests
<Fact>
Public Sub Year_not_divisible_by_4_in_common_year()
Assert.[False](Leap.IsLeapYear(2015))
End Sub

<Fact()>
Public Sub Year_divisible_by_2_not_divisible_by_4_in_common_year()
Assert.[False](Leap.IsLeapYear(1970))
End Sub

<Fact()>
Public Sub Year_divisible_by_4_not_divisible_by_100_in_leap_year()
Assert.[True](Leap.IsLeapYear(1996))
End Sub

<Fact()>
Public Sub Year_divisible_by_4_and_5_is_still_a_leap_year()
Assert.[True](Leap.IsLeapYear(1960))
End Sub

<Fact()>
Public Sub Year_divisible_by_100_not_divisible_by_400_in_common_year()
Assert.[False](Leap.IsLeapYear(2100))
End Sub

<Fact()>
Public Sub Year_divisible_by_100_but_not_by_3_is_still_not_a_leap_year()
Assert.[False](Leap.IsLeapYear(1900))
End Sub

<Fact()>
Public Sub Year_divisible_by_400_in_leap_year()
Assert.[True](Leap.IsLeapYear(2000))
End Sub

<Fact()>
Public Sub Year_divisible_by_400_but_not_by_125_is_still_a_leap_year()
Assert.[True](Leap.IsLeapYear(2400))
End Sub

<Fact()>
Public Sub Year_divisible_by_200_not_divisible_by_400_in_common_year()
Assert.[False](Leap.IsLeapYear(1800))
End Sub
End Class
2 changes: 1 addition & 1 deletion tests/example-all-fail/expected_results.json
@@ -1,5 +1,5 @@
{
"version": 1,
"status": "fail",
"message": "TODO: replace with correct output"
"message": "[xUnit.net 00:00:00.65] Leap.LeapTests.Year_not_divisible_by_4_in_common_year [FAIL]\n[xUnit.net 00:00:00.65] Leap.LeapTests.Year_divisible_by_400_in_leap_year [FAIL]\n[xUnit.net 00:00:00.65] Leap.LeapTests.Year_divisible_by_400_but_not_by_125_is_still_a_leap_year [FAIL]\n[xUnit.net 00:00:00.65] Leap.LeapTests.Year_divisible_by_4_and_5_is_still_a_leap_year [FAIL]\n[xUnit.net 00:00:00.65] Leap.LeapTests.Year_divisible_by_4_not_divisible_by_100_in_leap_year [FAIL]\n Failed Leap.LeapTests.Year_not_divisible_by_4_in_common_year [14 ms]\n Error Message:\n Assert.False() Failure\nExpected: False\nActual: True\n Stack Trace:\n at Leap.LeapTests.Year_not_divisible_by_4_in_common_year() in /solution/LeapTests.vb:line 6\n Failed Leap.LeapTests.Year_divisible_by_400_in_leap_year [< 1 ms]\n Error Message:\n Assert.True() Failure\nExpected: True\nActual: False\n Stack Trace:\n at Leap.LeapTests.Year_divisible_by_400_in_leap_year() in /solution/LeapTests.vb:line 36\n Failed Leap.LeapTests.Year_divisible_by_400_but_not_by_125_is_still_a_leap_year [< 1 ms]\n Error Message:\n Assert.True() Failure\nExpected: True\nActual: False\n Stack Trace:\n at Leap.LeapTests.Year_divisible_by_400_but_not_by_125_is_still_a_leap_year() in /solution/LeapTests.vb:line 41\n Failed Leap.LeapTests.Year_divisible_by_4_and_5_is_still_a_leap_year [< 1 ms]\n Error Message:\n Assert.True() Failure\nExpected: True\nActual: False\n Stack Trace:\n at Leap.LeapTests.Year_divisible_by_4_and_5_is_still_a_leap_year() in /solution/LeapTests.vb:line 21\n Failed Leap.LeapTests.Year_divisible_by_4_not_divisible_by_100_in_leap_year [< 1 ms]\n Error Message:\n Assert.True() Failure\nExpected: True\nActual: False\n Stack Trace:\n at Leap.LeapTests.Year_divisible_by_4_not_divisible_by_100_in_leap_year() in /solution/LeapTests.vb:line 16\n\nFailed! - Failed: 5, Passed: 4, Skipped: 0, Total: 9, - /solution/bin/Debug/net5.0/Leap.dll (net5.0)"
}
5 changes: 5 additions & 0 deletions tests/example-empty-file/.meta/Example.vb
@@ -0,0 +1,5 @@
Public Module Leap
Public Function IsLeapYear(ByVal year As Integer) As Boolean
Return year Mod 400 = 0 OrElse (year Mod 100 <> 0 AndAlso year Mod 4 = 0)
End Function
End Module
22 changes: 22 additions & 0 deletions tests/example-empty-file/.meta/config.json
@@ -0,0 +1,22 @@
{
"blurb": "Given a year, report if it is a leap year.",
"authors": [
"ch020"
],
"contributors": [
"axtens"
],
"files": {
"solution": [
"Leap.vb"
],
"test": [
"LeapTests.vb"
],
"example": [
".meta/Example.vb"
]
},
"source": "JavaRanch Cattle Drive, exercise 3",
"source_url": "http://www.javaranch.com/leap.jsp"
}