Evaluation Step

The Evaluation step is performed separately for each target variable and contains two tabs: Evaluation and Annotations.

By the time you reach this step, you have already built your predictive models. Like any tool, a predictive model must be tested on data that were not presented to it during training. This is much like quality control on a production line, where finished items are inspected to ensure they meet specifications and standards. To perform this test, you evaluate your models on a data set they have not seen before, which is the role of the validation data set. The aim is to estimate how well your models will perform on future data during the later, and most important, stage of deployment. The ability to predict new data accurately is known as generalization. If your models did not generalize well on the validation data, it is recommended that you investigate the conditions and settings under which they were built and create additional models that better meet your needs.
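The idea of checking generalization on a held-out validation set can be sketched in a few lines of Python. This is an illustrative example only, not part of the tool described above: the synthetic data, the logistic-regression model, and the 70/30 split are all assumptions chosen for demonstration.

```python
# Hypothetical sketch: measuring generalization on a held-out validation set.
# Data, model, and split ratio are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic data standing in for the training table.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out 30% as a validation set the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Generalization check: compare performance on training vs. validation data.
# A large gap (high training accuracy, low validation accuracy) suggests
# the model has overfit and may not generalize to future data.
train_acc = accuracy_score(y_train, model.predict(X_train))
val_acc = accuracy_score(y_val, model.predict(X_val))
print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")
```

If the validation accuracy is substantially lower than the training accuracy, that is the signal, mentioned above, to revisit the settings under which the models were built.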