Questions tagged [roc]
The Receiver Operating Characteristic (ROC) curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system across different decision thresholds.
925 questions
3 votes
3 answers
417 views
Is every weakly increasing function $[0,1]\rightarrow [0,1]$ an ROC curve?
Let $\xi: [0,1]\rightarrow [0,1]$ be a weakly increasing function with $\xi(0)=0$ and $\xi(1)=1$. Must there be distributions $F$ and $G$ such that $\xi$ is a receiver operating characteristic curve ...
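For context, the standard construction links an ROC curve to the two class-conditional score distributions. A minimal statement, with notation assumed here rather than taken from the post ($F$ for negatives, $G$ for positives, $F^{-1}$ the quantile function):

```latex
% ROC curve induced by class-conditional score CDFs F (negatives) and G (positives):
% sweep the threshold t and plot TPR against FPR.
\mathrm{FPR}(t) = 1 - F(t), \qquad \mathrm{TPR}(t) = 1 - G(t),
\qquad \mathrm{ROC}(u) = 1 - G\!\bigl(F^{-1}(1-u)\bigr), \quad u \in [0,1].
```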
1 vote
1 answer
121 views
Huge steps in AUROC plot
I'm building a model for a binary classification task. Because my dataset is pretty small (~86 samples: 68 in class 0 and 18 in class 1), I'm using nested k-fold cross-validation (5 inner loops and 5-...
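A quick way to see why such a curve looks step-like: with only ~18 positives, the TPR axis can only move in jumps of 1/18. A minimal sketch on synthetic data of roughly the same shape (not the poster's dataset):

```python
# Minimal sketch (synthetic data assumed): with 18 positives, TPR can only
# change in increments of 1/18, which produces large "steps" in the ROC curve.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=86, weights=[68 / 86], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# out-of-fold predicted probabilities, pooled across the 5 folds
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=cv, method="predict_proba")[:, 1]

fpr, tpr, thr = roc_curve(y, proba)
print("AUROC:", roc_auc_score(y, proba))
print("distinct TPR values:", np.unique(tpr))  # multiples of 1 / n_positives
```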
2 votes
1 answer
114 views
How do I account for variability in item responses in my questionnaire raw score?
[Q] How do I account for variability in item responses in a questionnaire? I have a 20-item questionnaire rating fear of falling in 20 activities on a 4-point scale (no fear (1) to very much fear (4)). ...
3 votes
1 answer
223 views
ROC curve threshold/cut-off values
When we try to determine the optimal threshold for a continuous predictor variable, we draw a ROC curve and calculate the AUC value. If AUC < 0.5, this means that the predictor has an inverse ...
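For reference, one common cut-off criterion is Youden's J = TPR − FPR, and a predictor with AUC below 0.5 can simply be sign-flipped. A minimal sketch on synthetic data (sklearn assumed):

```python
# Minimal sketch: pick a cut-off with Youden's J = TPR - FPR using roc_curve.
# Synthetic data assumed, not the poster's predictor.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
score = y + rng.normal(scale=1.0, size=200)   # continuous predictor

auc = roc_auc_score(y, score)
# If AUC < 0.5 the predictor is inversely related to the outcome;
# flipping the sign of the score gives 1 - AUC.
if auc < 0.5:
    score, auc = -score, 1 - auc

fpr, tpr, thresholds = roc_curve(y, score)
best = thresholds[np.argmax(tpr - fpr)]       # Youden-optimal threshold
print(f"AUC = {auc:.3f}, Youden-optimal threshold = {best:.3f}")
```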
0 votes
0 answers
58 views
Test of AI model performance doing combined tumour detection and classification
I am working on a radiology AI project that aims to detect and classify lesions (aka abnormalities) in CT scans across multiple organs. The test set (n=200) is a mix of normal scans (50%) and abnormal ...
0 votes
0 answers
80 views
Is using the TEST set to calculate the optimal threshold for binary classification and then calculating the accuracy on the same test set wrong?
I have a dataset that has been split into two parts, a train set and a test set. After training a model with the training set to classify between class 0 and 1, I used the sklearn roc_curve to calculate the ...
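For context, the usual remedy is to choose the threshold on data that is not the final test set, for example a validation split carved out of the training data. A minimal sklearn sketch on synthetic data (split sizes are assumptions):

```python
# Minimal sketch: pick the threshold on a validation split, report accuracy
# on the untouched test set (synthetic data, not the poster's dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trval, y_trval, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# threshold chosen on the validation split only (Youden's J here)
fpr, tpr, thr = roc_curve(y_val, model.predict_proba(X_val)[:, 1])
best_thr = thr[np.argmax(tpr - fpr)]

# accuracy reported on the test set, which never influenced the threshold
y_pred = (model.predict_proba(X_test)[:, 1] >= best_thr).astype(int)
print("test accuracy:", accuracy_score(y_test, y_pred))
```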
0 votes
0 answers
43 views
Testing the difference between two AUCs [duplicate]
Testing the difference between two AUCs seems to be straightforward. There are some relevant sources, like this one and links therein. I have an extremely simple case of two ROCs, similar to this ...
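When both models are scored on the same cases, one common option (besides DeLong-type tests) is a paired bootstrap of the AUC difference. A minimal sketch with synthetic scores (not the poster's data):

```python
# Paired bootstrap of the difference between two AUCs computed on the same cases.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, size=n)
score_a = y + rng.normal(scale=1.0, size=n)   # model A
score_b = y + rng.normal(scale=1.5, size=n)   # model B (noisier)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    if len(np.unique(y[idx])) < 2:            # need both classes in the resample
        continue
    diffs.append(roc_auc_score(y[idx], score_a[idx]) -
                 roc_auc_score(y[idx], score_b[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for AUC_A - AUC_B: [{lo:.3f}, {hi:.3f}]")
```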
0 votes
0 answers
67 views
Concordance index for competing risks is 100%
I am externally validating a prognostic model which predicts death from cancer and am trying to compute the concordance index (C). I have predictions (risk probabilities) and observed times and events....
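When C comes out at exactly 1, it can help to recompute it by hand on a few rows. Below is a minimal pairwise sketch of Harrell's C for right-censored data; it ignores the competing-risks refinements the poster is using and the variable names are assumptions:

```python
# Minimal pairwise Harrell's C for right-censored data (no competing risks).
# Higher `risk` should mean earlier events; event = 1 means the event was observed.
import numpy as np

def harrell_c(time, event, risk):
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = tied = comparable = 0
    for i in range(len(time)):
        if not event[i]:
            continue                      # i must have an observed event
        mask = time > time[i]             # j outlives i -> comparable pair
        comparable += mask.sum()
        concordant += (risk[i] > risk[mask]).sum()
        tied += (risk[i] == risk[mask]).sum()
    return (concordant + 0.5 * tied) / comparable

print(harrell_c([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.2]))  # -> 1.0
```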
5 votes
1 answer
135 views
Have the authors appropriately applied repeated-measures one-way ANOVA and ROC analysis in this test of therapist adherence?
I'm reading a piece of research about the type of psychotherapy I practice. My intuition is that the use of statistics in the paper is flawed, but I am ideologically motivated to find flaws in this ...
0 votes
0 answers
49 views
ROC curve threshold coincides with mean of the predicted variable
I was fitting a bunch of logistic regression models to some dataset, where the variables to predict were all binary. After I fit the models, I then ran some simple code to use the ROC curve to find ...
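One thing worth checking numerically: with an (unpenalized) intercept, logistic regression's mean fitted probability matches the observed event rate, and a Youden-type cut-off often lands near that prevalence, which could explain the coincidence. A small check on synthetic data (sklearn assumed):

```python
# Small check (synthetic data): mean fitted probability equals the event rate
# up to optimizer tolerance, and the Youden-optimal cut-off often sits nearby.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

X, y = make_classification(n_samples=500, weights=[0.7], random_state=1)
p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

fpr, tpr, thr = roc_curve(y, p)
print("mean fitted probability:", p.mean(), " event rate:", y.mean())
print("Youden threshold:", thr[np.argmax(tpr - fpr)])
```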
0 votes
0 answers
78 views
Select best predictor from multiple ROC Curves in R
Packages: library(ggplot2); library(dplyr); library(caret); library(plotROC); library(pROC); library(ROCR). 3 regression models ...
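The question is posed in R (pROC/ROCR), but the comparison itself boils down to one AUC per candidate predictor against the same outcome. A minimal Python analogue on synthetic data, for illustration only:

```python
# Compare several candidate predictors by AUC against the same binary outcome.
# Synthetic data assumed; the original question does this in R with pROC/ROCR.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
predictors = {
    "x1": y + rng.normal(scale=0.8, size=300),
    "x2": y + rng.normal(scale=1.5, size=300),
    "x3": rng.normal(size=300),                 # uninformative
}
aucs = {name: roc_auc_score(y, x) for name, x in predictors.items()}
print(aucs, "-> best predictor:", max(aucs, key=aucs.get))
```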
1 vote
0 answers
49 views
Interpreting Concordance Probability: Exclusion of Equal Outcomes and Role of Ties
The concordance probability $P(C)$ is defined as the likelihood that the model's predicted probabilities correctly rank the outcomes for two observations. It accounts for three types of pairwise ...
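For reference, one standard convention for combining the pairwise terms (the post's exact definition may differ) is:

```latex
% Concordance probability estimated from pairwise comparisons:
% comparable pairs are those with different outcomes; tied predictions count half.
\hat{P}(C) \;=\; \frac{\#\{\text{concordant pairs}\} \;+\; \tfrac{1}{2}\,\#\{\text{tied predictions}\}}
                      {\#\{\text{comparable pairs}\}}
```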
0 votes
0 answers
55 views
Se, Sp, NPV and PPV questions for repeated measures data
I have a dataset that contains multiple test results (expressed as %) per participant, at various time points post kidney transplant. The dataset also contains the rejection group the participant ...
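For reference, at a fixed cut-off the four quantities reduce to a 2×2 table; the sketch below computes them but deliberately ignores the within-participant correlation, which is the actual difficulty in the post:

```python
# Se, Sp, PPV, NPV from a 2x2 table at a fixed cut-off.
# Treats every measurement as independent, ignoring repeated-measures clustering.
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"Se": tp / (tp + fn), "Sp": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

print(diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```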
3 votes
1 answer
143 views
Drawbacks of stratified test set bootstrapping for metric UQ
I am using test-set (percentile) bootstrapping to quantify the uncertainty of various model performance metrics, such as AUROC, AUPR, etc. To avoid any confusion, the approach is simply: bootstrap ...
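The approach described, in its stratified variant (resampling within each class so every replicate keeps the original prevalence), can be sketched as follows; the synthetic scores are an assumption, not the poster's setup:

```python
# Percentile bootstrap of test-set AUROC, resampling within each class
# so every replicate preserves the class balance ("stratified").
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)
score = y + rng.normal(scale=1.2, size=400)   # synthetic model scores

pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
aucs = []
for _ in range(2000):
    idx = np.concatenate([rng.choice(pos, size=len(pos), replace=True),
                          rng.choice(neg, size=len(neg), replace=True)])
    aucs.append(roc_auc_score(y[idx], score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC = {roc_auc_score(y, score):.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")
```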
0 votes
1 answer
222 views
Time-dependent area under the precision-recall curve
How to compare the time-dependent precision-recall (PR) curve values for two Cox regression models at multiple time points? To compare two time-dependent AUC values, I would ...
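One generic option, absent a dedicated test, is to bootstrap the difference between the two models' time-dependent metrics at each horizon. The sketch below assumes a hypothetical helper td_pr_auc(time, event, risk, t) that returns the time-dependent PR AUC at horizon t; it is a placeholder, not a function from an existing package:

```python
# Bootstrap the per-horizon difference in a time-dependent metric between two
# Cox models. td_pr_auc is a HYPOTHETICAL placeholder supplied by the caller.
import numpy as np

def compare_td_metric(y_time, y_event, risk_a, risk_b, times,
                      td_pr_auc, n_boot=1000, seed=0):
    """All inputs are NumPy arrays; returns {t: (ci_low, ci_high)} for the
    difference metric(model A) - metric(model B) at each horizon t."""
    rng = np.random.default_rng(seed)
    n = len(y_time)
    out = {}
    for t in times:
        diffs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)        # resample subjects
            diffs.append(td_pr_auc(y_time[idx], y_event[idx], risk_a[idx], t) -
                         td_pr_auc(y_time[idx], y_event[idx], risk_b[idx], t))
        out[t] = tuple(np.percentile(diffs, [2.5, 97.5]))  # 95% CI for the difference
    return out
```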