Observations are grayscale?

Using the provided trainDopamine.py file, I notice that the observations provided to the agent by the environment are a grayscale 84x84 image (0-255 values). Since the food and walls have various colors, is there a configuration option to run this with color observations?

You can control this by setting retro=False and greyscale=False when creating the gym environment with AnimalAIEnv; you will then get the color observations.
This is also the default setup during evaluation.
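
For example, a minimal sketch (the import path follows the animalai/animalai/envs/gym/environment.py file linked below; other constructor arguments are left at their defaults):

from animalai.envs.gym.environment import AnimalAIEnv

env = AnimalAIEnv(
    environment_filename='path_to_the_environment',
    retro=False,      # don't let the retro preset force greyscale back on
    greyscale=False,  # keep the full 84x84x3 color observations
)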

It seems the default is not greyscale: https://github.com/beyretb/AnimalAI-Olympics/blob/master/animalai/animalai/envs/gym/environment.py#L35. @beyretb

Correct, greyscale defaults to False, but retro defaults to True, which overrides greyscale to True: https://github.com/beyretb/AnimalAI-Olympics/blob/620685cdb263fce5ba4b97d18d8b637ce01a66a0/animalai/animalai/envs/gym/environment.py#L66
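
In other words, the constructor behaves roughly like this (a paraphrase of the linked line, not the literal source):

if retro:
    greyscale = True   # the retro preset always yields 84x84 greyscale frames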

In the EvalAI test, can we control the test environment?

Hello,

If you are referring to the parameters for running the environment itself, then no. We instantiate the environment in the same way as in the testDocker.py file, meaning:

from animalai.envs.gym.environment import AnimalAIEnv

env = AnimalAIEnv(
    environment_filename='path_to_the_environment',
    seed=0,
    retro=False,
    n_arenas=1,
    worker_id=1,
    docker_training=True,
)

All you have access to are the 84x84x3 pixel observations as well as the speed observations.
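
For concreteness, a rough sketch of a step under this setup (variable names are mine; the tuple structure is as described above):

action = env.action_space.sample()    # a random action in the env's native space
obs, reward, done, info = env.step(action)
visual, speed = obs   # visual: 84x84x3 pixel array, speed: the velocity observation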
Is this what you were referring to?

Thank you, beyretb.
I got it!

So, if I train in retro mode, I need to manually convert observations and flatten actions? Any simpler solution?

In other words, all the wrappers one would apply to the gym environment during training are unusable.

Yeah, all the examples and default parameters confused us.
In the actual env, obs is a tuple (84x84x3 pixel observations, the speed observations) and the action space is MultiDiscrete.
If you are using discrete actions, the ActionFlattener class (contained in environment.py) is useful; a minimal sketch follows.
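
Something like this (ActionFlattener and its lookup_action method come from environment.py; the [3, 3] branch sizes are my assumption about AnimalAI's action space):

from animalai.envs.gym.environment import ActionFlattener

# Two branches of size 3 flatten into a single Discrete(9) space.
flattener = ActionFlattener([3, 3])
print(flattener.action_space)    # Discrete(9)

discrete_action = 4                                       # e.g. the index a DQN picks
multi_action = flattener.lookup_action(discrete_action)   # -> [1, 1]
# Pass multi_action to env.step(...) wherever the env expects MultiDiscrete actions.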

Hi all,

@Gianni_De_Fabritiis which wrappers are you referring to?

The evaluation environment provided contains as much information as one might need. If you need less data (greyscale, lower resolution…), you're free to modify the input you are given, and as @kmatsumoto points out, you have all the functions available in the code we provide to fall back on retro mode. It should be a one-liner, really.
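
For instance, a hedged sketch of such a conversion as a gym observation wrapper (this wrapper is hypothetical, not part of the repo; it assumes the (pixels, speed) tuple observation described above, with pixel values possibly returned as floats in [0, 1]):

import gym
import numpy as np

class RetroStyleObs(gym.ObservationWrapper):
    """Turn the (84x84x3 pixels, speed) tuple into an 84x84 greyscale
    uint8 frame, roughly mimicking retro mode."""

    def __init__(self, env):
        super().__init__(env)
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(84, 84), dtype=np.uint8)

    def observation(self, obs):
        pixels, _speed = obs                     # drop the speed observation
        grey = np.asarray(pixels).mean(axis=-1)  # naive channel average
        if grey.max() <= 1.0:                    # rescale if values are in [0, 1]
            grey = grey * 255.0
        return grey.astype(np.uint8)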

This is a standard setup for such competitions, where we need to prevent users from tampering with the environment.

It is common in openai/baselines to apply many wrappers to the environment to modify it. It's not that it's impossible, just error-prone to have to use one codebase for training and a different one for testing.
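
To illustrate the pattern (WarpFrame and FrameStack are wrappers from baselines.common.atari_wrappers; note they assume a plain image observation, which is exactly what the tuple observation here breaks):

from baselines.common.atari_wrappers import WarpFrame, FrameStack

def wrap_for_training(env):
    # Typical baselines-style preprocessing for an image-observation env.
    env = WarpFrame(env)       # resize to 84x84 and convert to greyscale
    env = FrameStack(env, 4)   # stack the last 4 frames into one observation
    return env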