Timeline for "Avoiding social discrimination in model building"
Current License: CC BY-SA 4.0
11 events
| when | what | action | by | comment | license |
|---|---|---|---|---|---|
| Jan 22, 2019 at 9:30 | comment | added | Eff | Let us continue this discussion in chat. | |
| Jan 22, 2019 at 9:29 | comment | added | Tim | @Eff No, if the model makes hiring decisions based on gender information, then it is biased. There is a rapidly growing body of literature on algorithmic bias, along with conferences and courses on this topic; I highly recommend you check them out. The model is not good if it only learned to amplify the social bias. It is also not good because if you used it to make actual decisions, you could face lawsuits and heavy fines for discriminatory practices. | |
| Jan 22, 2019 at 9:18 | comment | added | Eff | @Tim You said "What the Amazon story shows is that it is very hard to avoid the bias." The point of my first comment is that it really depends on what you mean by bias. Is it biased that my model predicts that a man is more likely to commit a crime? I wouldn't call that biased. I would call my model good. I don't see it as ethical to pretend that this is not true. This is just an example. The same can be true for hiring, and other issues. | |
| Jan 22, 2019 at 9:12 | comment | added | Tim | @Eff this is an ethical issue. If you have two suspects of a murder, one black and one white, then even if the probability that the black one was the murderer were higher, you should not make the decision based on this information. However, this discussion is off-topic here. And yes, I know the scientific literature, as I took a whole course on the psychology of prejudice and discrimination when I was at university. | |
| Jan 22, 2019 at 9:08 | comment | added | Eff | @Tim The idea that stereotypes have nothing to do with the reality of the situation is unfortunately an assertion that you can only hold if you don't know any of the scientific literature related to stereotype accuracy. | |
| Jan 22, 2019 at 9:06 | comment | added | Eff | @Tim Nope. While there can be some truth to what you're saying, by and large it is not true. I urge you to read the book "Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy" by Lee Jussim. In this major book the author basically reviews the entire body of scientific literature on stereotypes, bias, self-fulfilling prophecies, etc. He shows that the evidence overwhelmingly shows that what you're describing is the minority of what is occurring. | |
| Jan 22, 2019 at 9:01 | comment | added | Tim | @Eff and this is how discrimination happens... Females earn less on average, so let's pay them less! The whole point of not having discriminatory algorithms is that you should not use such information for making decisions, even if on average it seems to work. Moreover, it often works because of the social bias (e.g. we tend to pay more to males, African Americans are more likely to go to jail for exactly the same crime as compared to Caucasian Americans, etc.), so the stereotype is accurate because the stereotype exists, not because of the nature of the stereotyped group. | |
| Jan 22, 2019 at 8:42 | comment | added | Eff | The Amazon case does in no way conclusively show bias. It could simply be the phenomenon known as stereotype accuracy. Sometimes traits actually correlate with demographic variables. Here is an example. You know that person X is young and middle class. How likely are they to commit a violent crime? I now give you another piece of information: their sex. Does this change the likelihood? Of course. Is that bias? Of course not. It's what is known as stereotype accuracy. | |
| Jan 22, 2019 at 2:21 | comment | added | Alexis | @jbowman Resulting in little interpretive consequence, and perpetuation of in-built biases over time. | |
| Jan 19, 2019 at 0:26 | comment | added | jbowman | An alternative to removing everything that helps the model discriminate between (for concreteness) gender might be to train your model with gender, then when predicting (or whatever) run the prediction twice, once with each gender, averaging the results. | |
| Jan 14, 2019 at 8:49 | history | answered | Tim | | CC BY-SA 4.0 |
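jbowman's suggestion in the comments above (train the model with gender included, then score each case once per gender and average the two predictions) can be sketched as follows. This is a minimal illustration, not code from the thread: the `score` function is a hypothetical stand-in for a trained model, and all feature names are invented for the example.

```python
def score(features, gender):
    """Hypothetical trained model: returns a hiring score in [0, 1].
    The gender term stands in for an unwanted effect the model learned."""
    base = 0.1 * features["experience_years"]
    bias_term = 0.1 if gender == "male" else 0.0  # learned gender effect
    return min(1.0, base + bias_term)

def gender_blind_score(features):
    """Score the candidate under both genders and average the results,
    so the final prediction no longer depends on the candidate's gender."""
    return 0.5 * (score(features, "male") + score(features, "female"))

candidate = {"experience_years": 5}
print(gender_blind_score(candidate))  # same value whatever the actual gender
```

Note the trade-off raised by Alexis: averaging removes the direct dependence on gender at prediction time, but any proxies for gender among the other features still carry the learned bias through.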