I’m currently working with IBM Watson OpenScale and I would like some guidance on how to properly configure the payload and feedback datasets when evaluating a model—especially for Drift v2.
Could you please provide a clear explanation or tutorial covering:
The expected structure of the payload CSV
The expected structure of the feedback CSV
Which columns are mandatory (e.g., predicted label, probabilities)
How class probabilities should be formatted (single column vs multiple columns)
The correct mapping between dataset columns and OpenScale configuration parameters
In my case, I was able to successfully monitor the model by:
Sending the feedback dataset as the original dataset (including the target column)
Sending the payload dataset without the target column
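Concretely, the two CSVs I send look roughly like this (the feature names and the "Risk" target column are placeholders from my own model, not anything prescribed by OpenScale):

```python
import pandas as pd

# Feedback dataset: original features plus the ground-truth target column.
feedback = pd.DataFrame({
    "age": [41, 52],
    "income": [38000, 61000],
    "Risk": ["No", "Yes"],  # target / label column
})

# Payload dataset: the same features, but with the target column dropped.
payload = feedback.drop(columns=["Risk"])

feedback.to_csv("feedback.csv", index=False)
payload.to_csv("payload.csv", index=False)
```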
This worked for general monitoring. However, when I tried to evaluate Drift v2, I encountered the following error:
“All the values of Class Probability for No column are Nulls” (AQDD1035E)
This makes me think I might be missing something in how I structure or send the class probabilities in the payload dataset.
Could you clarify:
Whether the payload must explicitly include a separate column for each class probability (e.g., prob_No, prob_Yes)
Whether arrays like [0.97, 0.03] are supported
How exactly OpenScale expects to map probabilities to each class
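For reference, these are the two formats I can imagine sending (both sketched with made-up column names; I don't know which one OpenScale actually expects):

```python
import pandas as pd

# Format A: one probability column per class, plus the predicted label.
fmt_a = pd.DataFrame({
    "age": [41],
    "prob_No": [0.97],
    "prob_Yes": [0.03],
    "predicted_label": ["No"],
})

# Format B: a single column holding the full probability vector.
# In a CSV this list would serialize as the string "[0.97, 0.03]",
# which is part of what I'm unsure about.
fmt_b = pd.DataFrame({
    "age": [41],
    "probability": [[0.97, 0.03]],
    "predicted_label": ["No"],
})
```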
Any detailed example (CSV format or sample schema) would be extremely helpful.
Thanks in advance!