The short version is that I would like to know what the confusion matrices (numbers of true positives, false positives, true negatives, and false negatives) should be to achieve conditional use accuracy equality between two communities, one with 40% real positives and one with 60% real positives.

---

Here is the long version...

I am trying to understand the different fairness metrics described in Understanding Fairness. It contains an interactive pair of pie charts representing two communities' confusion matrices. Blue portions represent real positives (RP); striped portions represent predicted positives. To the right of the charts are different fairness metrics and how well the proportions in the pie charts satisfy them.

Here is the original configuration:

[screenshot: the two communities' pie charts in their original configuration, as described in the text]

The small red and blue circles are handles for adjusting the sizes of regions.

As shown in green, these fairness criteria are achieved:

  • Group fairness
  • Equalized Odds
  • Overall Accuracy Equality

I have been unable to come up with adjustments that satisfy Conditional Use Accuracy Equality. As described in the document, the two communities should have the same:

  • positive predictive value (PPV) or precision, i.e. TP / Predicted Positives, and
  • negative predictive value (NPV), i.e. TN / Predicted Negatives (both are computed in the sketch below)
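For concreteness, here is a minimal sketch of those two quantities in Python (the function name is mine, and the example counts are hypothetical):

```python
from fractions import Fraction

def ppv_npv(tp, fp, tn, fn):
    """Return (PPV, NPV) for one community's confusion matrix.

    Exact Fractions make it easy to check whether two communities'
    rates are truly equal, not merely equal after rounding.
    """
    return Fraction(tp, tp + fp), Fraction(tn, tn + fn)

# Hypothetical example: a perfect classifier on the 40%-positive community.
print(ppv_npv(tp=40, fp=0, tn=60, fn=0))  # (Fraction(1, 1), Fraction(1, 1))
```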

Can Conditional Use Accuracy Equality be obtained in this scenario?

In an attempt to find satisfactory values of true positives and false positives, I set up a system of equations for the two communities (cats and dogs) in Excel:

$$TP_{cat} + FN_{cat} = 40$$ $$TP_{dog} + FN_{dog} = 60$$

Within each species, the predictive values are defined as $$PPV = \frac{TP}{TP + FP}$$ $$NPV = \frac{TN}{TN + FN}$$ and the solver was constrained to make each of them equal across the two species.
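For reproducibility, here is a minimal sketch of the same system in Python with sympy instead of Excel. Assumptions of mine: each community has 100 members (so FN and TN are determined by TP and FP), and the cat values are fixed arbitrarily, since two equations in four unknowns otherwise leave a two-parameter family of solutions:

```python
import sympy as sp

tp_c, fp_c, tp_d, fp_d = sp.symbols('tp_c fp_c tp_d fp_d', positive=True)

# Assume 100 animals per community: 40 real-positive cats, 60 real-positive dogs.
fn_c, tn_c = 40 - tp_c, 60 - fp_c
fn_d, tn_d = 60 - tp_d, 40 - fp_d

ppv_eq = sp.Eq(tp_c / (tp_c + fp_c), tp_d / (tp_d + fp_d))  # equal precision
npv_eq = sp.Eq(tn_c / (tn_c + fn_c), tn_d / (tn_d + fn_d))  # equal NPV

# Fix hypothetical cat values (TP=6, FP=1) and solve for the dog values.
fixed = {tp_c: 6, fp_c: 1}
sol = sp.solve([ppv_eq.subs(fixed), npv_eq.subs(fixed)], [tp_d, fp_d], dict=True)
print(sol)
```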

The Excel solver output these values (which I have rounded to two decimal places):

  • $TP_{cat} = 6.17$
  • $FP_{cat} = 0.44$
  • $TN_{cat} = 59.56$
  • $FN_{cat} = 33.83$
  • $TP_{dog} = 38.85$
  • $FP_{dog} = 2.77$
  • $TN_{dog} = 37.23$
  • $FN_{dog} = 21.15$

Because the website allows only integer values, this is as close as I was able to get:

[screenshot: the adjusted pie charts, as described in the text]

While the NPVs matched (64%), the PPVs differed (100% for cats and 93% for dogs). Rounding $FP_{cat}$ to 1 did not help.
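Rather than rounding a real-valued solution, one could search the integer grid directly. Here is a minimal brute-force sketch under the same 100-animals-per-community assumption; it skips configurations where PPV or NPV has a zero denominator, and may take a few seconds to run:

```python
from fractions import Fraction

def rates(tp, fp, tn, fn):
    """Return (PPV, NPV), or None if either denominator is zero."""
    if tp + fp == 0 or tn + fn == 0:
        return None
    return Fraction(tp, tp + fp), Fraction(tn, tn + fn)

solutions = []
for tp_c in range(41):               # cats: TP + FN = 40
    for fp_c in range(61):           # cats: FP + TN = 60
        r_c = rates(tp_c, fp_c, 60 - fp_c, 40 - tp_c)
        if r_c is None:
            continue
        for tp_d in range(61):       # dogs: TP + FN = 60
            for fp_d in range(41):   # dogs: FP + TN = 40
                r_d = rates(tp_d, fp_d, 40 - fp_d, 60 - tp_d)
                if r_d == r_c:
                    solutions.append((tp_c, fp_c, tp_d, fp_d))

print(len(solutions), "integer configurations with matching PPV and NPV")
for tp_c, fp_c, tp_d, fp_d in solutions[:5]:
    print(f"cats: TP={tp_c} FP={fp_c} TN={60 - fp_c} FN={40 - tp_c}   "
          f"dogs: TP={tp_d} FP={fp_d} TN={40 - fp_d} FN={60 - tp_d}")
```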

The text implies that Conditional Use Accuracy Equality can be achieved. What values accomplish that?

In case it is useful, I made a Google Sheet available.
