[Story] Rework Evaluator #2048

@jaheba

Description

The current Evaluator is inflexible and difficult to use.

The set of metrics is fixed, and it is not possible to choose which metrics should be calculated.

#1778 proposes a more flexible approach, where it is easier to define new metrics and to select which metrics should be calculated. Further, it caches intermediate results and can evaluate multiple time series at once, resulting in improved performance.
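As a rough illustration of that direction, here is a minimal sketch of a metric API where metrics are plain callables, the caller selects which ones to compute, and intermediate results are cached and shared between metrics. All names (`EvalContext`, `mse`, `mae`, `evaluate`) are hypothetical and not taken from the actual proposal in #1778:

```python
# Hypothetical sketch of metric selection with cached intermediate results;
# names and signatures are illustrative, not the #1778 design.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, Dict

import numpy as np


@dataclass
class EvalContext:
    """Holds target/forecast arrays and caches intermediate quantities."""

    target: np.ndarray
    forecast: np.ndarray
    _cache: Dict[str, np.ndarray] = field(default_factory=dict)

    def cached(self, name: str, compute: Callable[[], np.ndarray]) -> np.ndarray:
        # Compute each intermediate quantity only once and reuse it across
        # metrics (e.g. both MSE and MAE need the raw error array).
        if name not in self._cache:
            self._cache[name] = compute()
        return self._cache[name]


def mse(ctx: EvalContext) -> float:
    error = ctx.cached("error", lambda: ctx.forecast - ctx.target)
    return float(np.mean(error**2))


def mae(ctx: EvalContext) -> float:
    error = ctx.cached("error", lambda: ctx.forecast - ctx.target)
    return float(np.mean(np.abs(error)))


def evaluate(
    ctx: EvalContext, metrics: Dict[str, Callable[[EvalContext], float]]
) -> Dict[str, float]:
    # The caller selects which metrics to compute, instead of a fixed list.
    return {name: metric(ctx) for name, metric in metrics.items()}


if __name__ == "__main__":
    ctx = EvalContext(
        target=np.array([1.0, 2.0, 3.0]),
        forecast=np.array([1.1, 1.9, 3.2]),
    )
    print(evaluate(ctx, {"MSE": mse, "MAE": mae}))
```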

A second focus should be on usability. #2045 makes the evaluator simpler to use and removes the need for make_evaluation_prediction.
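For context, the current workflow is roughly the two-step pattern below, which #2045 aims to collapse into a simpler call. This is a sketch only: `predictor` and `test_dataset` are assumed to be an already trained predictor and a test dataset, and exact signatures may vary between GluonTS versions.

```python
# Rough sketch of the current two-step workflow that #2045 wants to simplify.
from gluonts.evaluation import Evaluator, make_evaluation_predictions

# `predictor` and `test_dataset` are assumed to exist already
# (a trained Predictor and a GluonTS test dataset).
forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_dataset,
    predictor=predictor,
    num_samples=100,
)

agg_metrics, item_metrics = Evaluator()(
    ts_it, forecast_it, num_series=len(test_dataset)
)

# A simplified entry point might look more like a single call
# (purely illustrative, not the API proposed in #2045):
# agg_metrics, item_metrics = evaluate(predictor, test_dataset)
```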

There is further discussion on how to create test datasets and what their format should be: #2041

Labels: enhancement (New feature or request)