
Visual observations for internal brain are not normalized properly #614

@fre

Description

I had an issue with my internal brain behaving differently from the external brain in inference mode when passed visual observations (a hand-generated 16x16 texture).

It seems that images received by environment.py are normalized to [0, 1] (pixel values are divided by 255):

```python
s = np.array(image) / 255.0
```

But the values extracted by `CoreBrainInternal.BatchVisualObservations` are not:

```csharp
result[b, textures[b].height - h - 1, w, 0] = currentPixel.r;
```

which leads to unpredictable agent behavior, since the network receives inputs 255× larger than the values it was trained on.
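To see why the 255× scale mismatch matters, here is a minimal Python sketch (illustrative only, not ml-agents code): a saturating activation such as tanh collapses all unnormalized pixel values to ~1, destroying the distinctions the network learned during training.

```python
import numpy as np

pixels = np.array([10.0, 50.0, 128.0, 200.0])

normalized = np.tanh(pixels / 255.0)   # scale the network saw during training
unnormalized = np.tanh(pixels)         # scale the internal brain actually feeds it

# Normalized inputs stay distinguishable from one another...
assert np.all(np.diff(normalized) > 1e-3)
# ...while unnormalized inputs all saturate to ~1.0, so the network
# can no longer tell a dark pixel from a bright one.
assert np.all(unnormalized > 0.9999)
```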

This solves my issue (I've restored the surrounding `if`/`else` for context; `blackAndWhite` is the grayscale flag the method branches on):

```csharp
if (!blackAndWhite)
{
    result[b, textures[b].height - h - 1, w, 0] = currentPixel.r / 255.0f;
    result[b, textures[b].height - h - 1, w, 1] = currentPixel.g / 255.0f;
    result[b, textures[b].height - h - 1, w, 2] = currentPixel.b / 255.0f;
}
else
{
    // 3.0f avoids the integer truncation that a plain "/ 3" would cause
    // on the byte-valued channels.
    result[b, textures[b].height - h - 1, w, 0] =
        (currentPixel.r + currentPixel.g + currentPixel.b) / 3.0f / 255.0f;
}
```
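As a sanity check on the patch above, this Python sketch (variable names are mine, not from the repo) applies the same per-channel and grayscale math to byte-valued pixels and confirms everything lands in [0, 1], matching the external brain's `np.array(image) / 255.0` normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(16, 16, 3))  # stand-in for a 16x16 RGB texture

# RGB branch of the patch: each channel divided by 255.
rgb = texture / 255.0

# Grayscale branch: channel mean, then divided by 255.
gray = texture.sum(axis=2) / 3.0 / 255.0

assert rgb.min() >= 0.0 and rgb.max() <= 1.0
assert gray.min() >= 0.0 and gray.max() <= 1.0
# The RGB branch is exactly environment.py's normalization.
assert np.array_equal(rgb, np.array(texture) / 255.0)
```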

Metadata

Labels: bug (issue describes a potential bug in ml-agents)
