  • Don't use a "days since machine installed" feature. That builds in survivorship bias; let the Weibull model handle the time since installation. There might also be survivorship bias if "some features can only affect $t_1$ and some can only affect $t_2$". You probably shouldn't include features that "predict" observation times. Also, the model predicts the distribution of survival times among machines as a function of features. Even if, say, the predicted mean failure time is 40 days after installation, there's nothing to prevent a particular machine from functioning at 80 days. Commented Mar 7 at 17:21
  • The goal is to estimate when a machine will fail or has failed. If I'm looking at a machine on day 100 and I know it has not failed at day 80 because of a previous inspection, then a model that predicts any mass (not just the mean, but any >0 probability) for day 40 is clearly not the best we can do. At day 0 (or any day before the day-80 inspection) it's OK to have a predicted mean failure time of, say, 40 (and of course any random machine could still be functioning at day 80). But not on day 100, after observing a lower bound of 80 for the particular machine of interest. Commented Mar 8 at 2:56
  • If I tell the inspection crew on day 100 that they should have inspected this machine at day 40 and are 60 days late, when in reality they checked on it at day 80 and it was fine, that is not a useful exercise. Granted, maybe the survival-analysis framing is not the right way to look at this problem, which is why I posted here for potentially better alternatives to a classic survival setup. Commented Mar 8 at 2:58
  • @Georg some thoughts, enumerated for clarity and discussion: 1) Do you have historic ground truth on a) the failure time and b) the observation time x days later? If not, you cannot learn either time, because it means you have label noise in the times, and then you cannot train reliable models that beat this noise (!!!). 2) Regarding your point (predicted failure at day 40, but the team checked at day 80 and it was fine): as EDM said, your survival model is probabilistic. If you encounter this during inference, why not filter those events and raise no alarm? Commented Mar 8 at 17:58
  • @GeorgM.Goerg "a probabilistic model should condition on the t=80 'not failed' observation, not ignore it": a probabilistic model with installation time as reference can do that. See this page. The problem is that the prediction depends on the shape of the hazard curve. If the hazard decreases with time, the estimated remaining lifetime increases with time; if the hazard increases with time, the estimated remaining lifetime decreases with time; if the hazard is constant (exponential model), so is the estimated remaining lifetime. A Weibull model can have any of those shapes. Commented Mar 8 at 21:15
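The last comment's point about hazard shape can be checked numerically. This is a sketch (not from the thread); the Weibull shape and scale values below are illustrative, and the mean residual life $E[T - t_0 \mid T > t_0]$ is computed by integrating the conditional survival function $S(t_0 + u)/S(t_0)$, which is exactly the "condition on survival to day 80" update discussed above:

```python
import numpy as np
from scipy import integrate

def mean_residual_life(shape, scale, t0):
    """E[T - t0 | T > t0] for a Weibull(shape, scale) lifetime,
    via numerical integration of the conditional survival function."""
    def surv(t):
        # Weibull survival function S(t) = exp(-(t/scale)^shape)
        return np.exp(-((t / scale) ** shape))
    val, _ = integrate.quad(lambda u: surv(t0 + u) / surv(t0), 0, np.inf)
    return val

scale = 50.0  # illustrative scale parameter (days)
for k in (0.5, 1.0, 2.0):
    fresh = mean_residual_life(k, scale, t0=0.0)    # at installation
    aged = mean_residual_life(k, scale, t0=80.0)    # given survival to day 80
    print(f"shape={k}: MRL at t=0 is {fresh:.1f}, at t=80 is {aged:.1f}")

# shape < 1 (decreasing hazard): conditioning on survival to 80 *raises*
#   the remaining-lifetime estimate;
# shape = 1 (constant hazard, exponential): memoryless, estimate unchanged;
# shape > 1 (increasing hazard): conditioning *lowers* the estimate.
```

So the probabilistic model does use the day-80 "not failed" observation; whether the updated forecast moves up or down is entirely a property of the fitted hazard shape.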