
I want to evaluate a model that forecasts the amount of payments a company will receive from its customers on a given day. I want to assess the model's accuracy by comparing last year's forecasts with last year's actual payments.

The trick is that I'd like to penalize the model more for overshooting (predicting more than the actual) than for undershooting (predicting less than the actual).

Is there a standard way of going about this?


1 Answer


The standard way of doing this (at least in machine learning) is to define a loss function that penalizes overshooting more than undershooting.

For example, if the actual payment is $x$ and the forecasted payment is $\hat{x}$, then standard regression minimizes

$E[(x-\hat{x})^2]$

where the loss function is the quadratic loss function, $(x-\hat{x})^2$.

To penalize overshooting more than undershooting, you can define the piecewise loss function

$L(x,\hat{x}) = \begin{cases} (x-\hat{x})^2 & x > \hat{x} \\ c\,(x-\hat{x})^2 & x \leq \hat{x} \end{cases} \qquad c > 1,$

and have your model/algorithm minimize that.
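To make this concrete, here is a minimal Python sketch of the asymmetric squared loss above, used as an evaluation metric over arrays of actuals and forecasts. The function name `asymmetric_squared_error`, the choice of `c`, and the example numbers are illustrative, not part of the original question.

```python
import numpy as np

def asymmetric_squared_error(actual, forecast, c=2.0):
    """Squared error that penalizes overshooting (forecast > actual)
    c times more heavily than undershooting (forecast < actual), c > 1."""
    err = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    # err > 0  -> undershoot, weight 1;  err <= 0 -> overshoot, weight c
    weights = np.where(err > 0, 1.0, c)
    return weights * err ** 2

# Example: score last year's forecasts against the actual payments received.
actual   = np.array([100.0, 250.0, 80.0])
forecast = np.array([ 90.0, 300.0, 80.0])  # undershoot, overshoot, exact

print(asymmetric_squared_error(actual, forecast, c=3.0).mean())
```

The same function can serve as a custom objective if your fitting procedure accepts an arbitrary loss; the only requirement is that the weight applied to overshoots is larger than the weight applied to undershoots.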

