diff --git a/docs/Learning-Environment-Design-Agents.md b/docs/Learning-Environment-Design-Agents.md
index 91e31b2e51..2f1dcc0808 100644
--- a/docs/Learning-Environment-Design-Agents.md
+++ b/docs/Learning-Environment-Design-Agents.md
@@ -19,6 +19,8 @@
- [RayCast Observation Summary & Best Practices](#raycast-observation-summary--best-practices)
- [Variable Length Observations](#variable-length-observations)
- [Variable Length Observation Summary & Best Practices](#variable-length-observation-summary--best-practices)
+ - [Goal Signal](#goal-signal)
+ - [Goal Signal Summary & Best Practices](#goal-signal-summary--best-practices)
- [Actions and Actuators](#actions-and-actuators)
- [Continuous Actions](#continuous-actions)
- [Discrete Actions](#discrete-actions)
@@ -560,6 +562,36 @@ between -1 and 1.
of an entity to the `BufferSensor`.
- Normalize the entities observations before feeding them into the `BufferSensor`.
+### Goal Signal
+
+It is possible for agents to collect observations that will be treated as a "goal
+signal". A goal signal is used to condition the policy of the agent, meaning that
+if the goal changes, the policy (i.e. the mapping from observations to actions)
+will change as well. Note that this is true of any observation, since all
+observations influence the policy of the Agent to some degree, but specifying a
+goal signal explicitly makes this conditioning more prominent to the agent. This
+feature is useful in settings where an agent must learn to solve different tasks
+that are similar in some respects, because the agent can reuse what it learns on
+one task to generalize better to the others.
+In Unity, you can specify that a `VectorSensor` or
+a `CameraSensor` is a goal by attaching a `VectorSensorComponent` or a
+`CameraSensorComponent` to the Agent and selecting `Goal Signal` as `Observation Type`.
+On the trainer side, there are two different ways to condition the policy. This
+setting is determined by the
+[conditioning_type parameter](Training-Configuration-File.md#common-trainer-configurations).
+If set to `hyper` (default), a [HyperNetwork](https://arxiv.org/pdf/1609.09106.pdf)
+is used to generate some of the weights of the policy, with the goal observations
+as input. Note that a HyperNetwork is computationally expensive, so it is
+recommended to use a smaller number of hidden units in the policy to compensate.
+If set to `none`, the goal signal is treated as a set of regular observations.
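+
+As an illustration, here is a minimal sketch of an agent that writes its current
+task into a `VectorSensorComponent` marked as a goal signal. The class name, the
+`m_CurrentGoal` field and the three-task one-hot encoding are illustrative
+assumptions, not part of the API:
+
+```csharp
+using Unity.MLAgents;
+using Unity.MLAgents.Sensors;
+using UnityEngine;
+
+public class GoalConditionedAgent : Agent
+{
+    // VectorSensorComponent attached to the same GameObject, with its
+    // Observation Type set to Goal Signal in the Inspector.
+    VectorSensorComponent m_GoalSensor;
+
+    // Illustrative index of the task the agent must currently solve.
+    int m_CurrentGoal;
+
+    public override void Initialize()
+    {
+        m_GoalSensor = GetComponent<VectorSensorComponent>();
+    }
+
+    public override void CollectObservations(VectorSensor sensor)
+    {
+        // Regular observations go to the default vector sensor as usual.
+        sensor.AddObservation(transform.localPosition);
+
+        // The goal signal is written to the goal sensor, here as a one-hot
+        // encoding over three hypothetical tasks.
+        m_GoalSensor.GetSensor().AddOneHotObservation(m_CurrentGoal, 3);
+    }
+}
+```
+
+In this sketch, the `Observation Size` of the `VectorSensorComponent` must match
+the size of the goal written in `CollectObservations` (3 here).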
+
+#### Goal Signal Summary & Best Practices
+ - To use the feature, attach a `VectorSensorComponent` or `CameraSensorComponent`
+ to the Agent and set its `Observation Type` to `Goal Signal`.
+ - Set the `conditioning_type` parameter in the training configuration (see the
+ configuration excerpt below).
+ - Reduce the number of hidden units in the network when using the HyperNetwork
+ (`hyper`) conditioning type.
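+
+For reference, a sketch of the relevant excerpt of a trainer configuration file;
+the behavior name `MyBehavior` and the particular `hidden_units` value are
+placeholders:
+
+```yaml
+behaviors:
+  MyBehavior:
+    trainer_type: ppo
+    network_settings:
+      hidden_units: 128          # kept small because `hyper` adds many parameters
+      conditioning_type: hyper   # or `none` to treat goals as regular observations
+```
+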
## Actions and Actuators
diff --git a/docs/Training-Configuration-File.md b/docs/Training-Configuration-File.md
index 4c883ec43f..1c9a49703c 100644
--- a/docs/Training-Configuration-File.md
+++ b/docs/Training-Configuration-File.md
@@ -43,6 +43,7 @@ choice of the trainer (which we review on subsequent sections).
| `network_settings -> num_layers` | (default = `2`) The number of hidden layers in the neural network. Corresponds to how many hidden layers are present after the observation input, or after the CNN encoding of the visual observation. For simple problems, fewer layers are likely to train faster and more efficiently. More layers may be necessary for more complex control problems.
Typical range: `1` - `3` |
| `network_settings -> normalize` | (default = `false`) Whether normalization is applied to the vector observation inputs. This normalization is based on the running average and variance of the vector observation. Normalization can be helpful in cases with complex continuous control problems, but may be harmful with simpler discrete control problems. |
| `network_settings -> vis_encode_type` | (default = `simple`) Encoder type for encoding visual observations.
`simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsoon et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
+| `network_settings -> conditioning_type` | (default = `hyper`) Conditioning type for the policy using goal observations.
`none` treats the goal observations as regular observations, `hyper` (default) uses a HyperNetwork with goal observations as input to generate some of the weights of the policy. Note that when using `hyper`, the number of parameters of the network increases greatly. Therefore, it is recommended to reduce the number of `hidden_units` when using this `conditioning_type`. |
## Trainer-specific Configurations