
In order to build a model of this kind, it is important to first understand some basic statistical aspects of discrimination and process-outcomes in statistical processes that rate objects on the basis of their characteristics. In particular, it requires understanding the relationship between the use of a characteristic for decision-making purposes (i.e., discrimination) and the assessment of process-outcomes with respect to that characteristic. We start by noting the following:

  • Discrimination (in its proper sense) occurs when a variable is used in the decision process, not merely when the outcome is correlated with that variable. Formally, we discriminate with respect to a variable if the decision function in the process (i.e., the rating in this case) is a function of that variable.

  • Disparities in outcome with respect to a particular variable often occur even when there is no discrimination on that variable. This occurs when other characteristics in the decision function are correlated with the excluded variable. In cases where the excluded variable is a demographic variable (e.g., gender, race, age, etc.), correlation with other characteristics is ubiquitous, so disparities in outcome across demographic groups are to be expected (see the simulated sketch after this list).

  • It is possible to try to reduce disparities in outcomes across demographic groups through affirmative action, which is a form of discrimination. If there are disparities in process-outcomes with respect to a variable, it is possible to narrow those disparities by using the variable as a decision-variable (i.e., by discriminating on that variable) in a way that favours groups that are "underrepresented" (i.e., groups with lower proportions of positive outcomes in the decision process).

  • You can't have it both ways --- either you want to avoid discrimination with respect to a particular characteristic, or you want to equalise process-outcomes with respect to that characteristic. If your goal is to "correct" disparities in outcomes with respect to a particular characteristic then don't kid yourself about what you are doing --- you are engaging in discrimination for the purposes of affirmative action.

  • Many of the more complex difficulties you raise occur when you try to model things with various types of "black box" models of the type sometimes found in machine-learning applications, where there is complexity in relation to how the variables are taken into account in the model. In these cases there are some genuine issues that arise in trying to interpret and control the way that variables are used, and so the problem can become non-trivial. Nevertheless, it is useful to bear in mind that using these models is a choice made by the user. If control over the use of variables is important then it is better to use traditional statistical models (e.g., regression, GLMs, etc.) where there is greater clarity as to how variables are used in predictions and decisions.
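
To make the first two points concrete, here is a small simulated sketch (Python with numpy only; all column names and numbers are purely illustrative assumptions, not part of the question). The rating rule below never takes gender as an argument, so by the definition above it does not discriminate on gender, yet the hire rates differ across genders because a predictor it does use is distributed differently between the groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: gender is recorded but never used by the rating
# rule; the predictor 'has_degree' is distributed differently by gender.
gender = rng.choice(["M", "F"], size=n)
p_degree = np.where(gender == "M", 0.40, 0.20)      # assumed proportions
has_degree = rng.random(n) < p_degree
experience = rng.normal(5.0, 2.0, size=n)

def rating(has_degree, experience):
    """Decision function: depends only on has_degree and experience,
    so by construction it cannot discriminate on gender."""
    return 2.0 * has_degree + 0.3 * experience

hired = rating(has_degree, experience) >= 2.5       # some pass threshold

# No discrimination: flipping an applicant's gender cannot change the
# decision, because gender is not an argument of the decision function.
# A disparity in outcomes nevertheless appears:
for g in ["M", "F"]:
    print(g, round(hired[gender == g].mean(), 3))
```

Running this prints a noticeably higher hire rate for the group with the higher degree rate, despite the gender-blind rule --- which is exactly the distinction between discrimination and disparity drawn above.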

Once you understand these basic aspects of statistical decision-making processes, you will be able to formulate what your actual goal is in this case. In particular, you will need to decide whether you want a non-discriminatory process, which is likely to result in disparities of outcome across groups, or a discriminatory process designed to yield equal (or nearly equal) process-outcomes. Ethically, this issue mimics the debate over non-discrimination versus affirmative action.


Let's say I want to build a statistical model to predict some output from personal data, like a five-star ranking to help recruit new people. Let's say I also want to avoid gender discrimination as an ethical constraint. Given two strictly equal profiles apart from gender, the output of the model should be the same.

It is easy to ensure that the ratings given by the model are not affected by a variable you want to exclude (e.g., gender). To do this, all you need to do is to remove this variable as a predictor in the model, so that it is not used in the rating decision. This will ensure that two profiles that are strictly equal, apart from that variable, are treated the same. However, it will not necessarily ensure that the model does not discriminate on the basis of another variable that is correlated with the excluded variable, and it will not generally lead to outcomes that are equal between genders. This is because gender is correlated with many other characteristics that might be used as predictive variables in your model, so we would generally expect outcomes to be unequal even in the absence of discrimination.
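
As a minimal sketch of what "removing the variable as a predictor" looks like in practice (assuming a pandas data frame with hypothetical columns gender, degree, experience and a past outcome hired; the library calls are standard pandas/scikit-learn, but the column names and values are invented for illustration):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data; values and column names are illustrative only.
df = pd.DataFrame({
    "gender":     ["M", "F", "F", "M", "F", "M"],
    "degree":     [1, 0, 1, 0, 1, 1],
    "experience": [4.0, 6.0, 2.5, 7.0, 5.5, 3.0],
    "hired":      [1, 0, 1, 0, 1, 1],
})

# Gender is dropped before fitting, so the fitted rating cannot be a
# function of it; only 'degree' and 'experience' enter the model.
X = df.drop(columns=["gender", "hired"])
y = df["hired"]
model = LogisticRegression().fit(X, y)

# The rating for a new applicant depends only on the retained columns,
# so two applicants identical apart from gender receive the same rating.
new_applicant = pd.DataFrame({"degree": [1], "experience": [4.0]})
print(model.predict_proba(new_applicant)[:, 1])
```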

In regard to this issue, it is useful to distinguish between characteristics that are inherent gender characteristics (e.g., pees standing up) and characteristics that are merely correlated with gender (e.g., has an engineering degree). If you wish to avoid gender discrimination, this would usually entail removing gender as a predictor, and also removing any other characteristic that you consider to be an inherent gender characteristic. For example, if it happened to be the case that job applicants specify whether they pee standing up or sitting down, then that is a characteristic that is not strictly equivalent to gender, but it effectively determines gender, so you would probably remove that characteristic as a predictor in the model.

Again, here we also need to note that certain complex machine-learning models might have a "black box" element where it is more difficult to understand and control the use of variables in prediction and decisions. Using these latter models is a choice, so if you value control highly (e.g., to constrain the use of variables in accordance with some ethical principle) then it is best to eschew these types of models and use traditional statistical models where it is simple to understand and control the use of input variables.
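
To illustrate the contrast with a "black box", here is a sketch of the same kind of model written as a formula-based GLM (statsmodels is used here only as a familiar example; the data and column names are again hypothetical). The formula is the complete list of predictors, so verifying that gender and its inherent proxies are absent reduces to reading one line and the resulting coefficient table:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data, as before.
df = pd.DataFrame({
    "gender":     ["M", "F", "F", "M", "F", "M", "F", "M"],
    "degree":     [1, 0, 1, 0, 1, 1, 0, 0],
    "experience": [4.0, 6.0, 2.5, 7.0, 5.5, 3.0, 1.0, 8.0],
    "hired":      [1, 0, 0, 1, 1, 1, 0, 0],
})

# The formula spells out exactly which variables the rating may depend on;
# gender does not appear, so the fitted rating is not a function of it.
fit = smf.logit("hired ~ degree + experience", data=df).fit()
print(fit.params)   # one coefficient per predictor actually used
```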

  1. Should I use the gender (or any data correlated to it) as an input and try to correct their effect, or avoid to use these data?

Correct what exactly? When you say "correct their effect", I am going to assume you mean that you are considering "correcting" disparities in outcomes that are caused by predictors that are correlated with gender. If that is the case, and you use gender to try to correct an outcome disparity, then you are effectively engaging in affirmative action --- i.e., you are programming your model to discriminate positively on gender, with a view to bringing the outcomes closer together. Whether you want to do this depends on your ethical goal for the model (avoiding discrimination vs. obtaining equal outcomes).
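
If equalising outcomes were the goal, it is worth being explicit about what the mechanics look like. A deliberately bare sketch (the bonus parameter and group labels are purely illustrative assumptions):

```python
def adjusted_rating(base_score: float, gender: str, bonus: float = 0.5) -> float:
    """Affirmative-action style rule: gender is now an argument of the
    decision function, i.e. the rule discriminates on gender by design,
    shifting scores to narrow the outcome gap between groups."""
    return base_score + (bonus if gender == "F" else 0.0)
```

The only point of the sketch is that any such correction makes the rating a function of gender, which is precisely the definition of discrimination given at the start of this answer.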

  2. How do I check the absence of discrimination against gender?

If you are talking about actual discrimination, as opposed to mere disparities in outcome, this is easy to constrain and check. All you need to do is to formulate your model in such a way that it does not use gender (or inherent gender characteristics) as predictors. Computers cannot make decisions on the basis of characteristics that you do not input into their model, so if you have control over this it should be quite simple to check the absence of discrimination. One way you could test this is to take a new test data point (e.g., a new applicant resume) and create two versions that flip the gender of the applicant, then input them into your model to make predictions/decisions --- if the model is operating in a non-discriminatory way then it should make the same prediction/decision for the new data point irrespective of whether you flip the gender.
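
Here is a minimal sketch of that flip test, assuming a fitted model with a standard predict method, a pandas row for the new applicant that still carries a (hypothetical) gender column, and a list of the columns the model was actually trained on:

```python
import pandas as pd

def flip_gender_test(model, applicant: pd.DataFrame, feature_cols, gender_col="gender"):
    """Return True if the model's decision is unchanged when the applicant's
    gender is flipped --- a basic check for direct gender discrimination."""
    flipped = applicant.copy()
    flipped[gender_col] = flipped[gender_col].map({"M": "F", "F": "M"})
    same = model.predict(applicant[feature_cols]) == model.predict(flipped[feature_cols])
    return bool(same.all())

# Example usage (with the hypothetical model and columns from the earlier sketch):
# flip_gender_test(model, new_applicant_with_gender, feature_cols=["degree", "experience"])
```

If you also flip any characteristics you regard as inherent gender characteristics, the same test checks that those proxies are not driving the decision either.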

Things become a bit harder when you use machine-learning models that try to figure out the relevant characteristics themselves, without your input. Even in this case, it should be possible for you to program your model so that it excludes predictors that you specify to be removed (e.g., gender), and it is certainly possible to test for discrimination using test data (for instance, with the flip test described above).

  3. How do I correct my model for data that are statistically discriminant but I don't want to be for ethical reasons?

When you refer to "statistically discriminant" data, I assume that you just mean characteristics that are correlated with gender. If you don't want these other characteristics there then you should simply remove them as predictors in the model. However, you should bear in mind that it is likely that many important characteristics will be correlated with gender. Any binary characteristic will be correlated with gender whenever the proportion of males with that characteristic differs from the proportion of females with that characteristic. (Of course, if those proportions are close you might find that the difference is not "statistically significant".) For more general variables the condition for non-zero correlation is also very weak. Thus, if you remove all characteristics that show evidence of non-zero correlation with gender, you will almost certainly remove a number of important predictors, and you will not have much left.
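
To see why this condition is so weak, write gender as an indicator $G$ (say $G = 1$ for male, with $\Pr(G=1) = p$) and let $X$ be any binary characteristic with within-group proportions $\pi_m = \Pr(X=1 \mid G=1)$ and $\pi_f = \Pr(X=1 \mid G=0)$. Then

$$\operatorname{Cov}(X, G) = \mathbb{E}[XG] - \mathbb{E}[X]\,\mathbb{E}[G] = p \pi_m - p \bigl( p \pi_m + (1-p) \pi_f \bigr) = p(1-p)(\pi_m - \pi_f),$$

which is non-zero whenever $\pi_m \neq \pi_f$ (and $0 < p < 1$). Whether this shows up as "statistically significant" in a sample is then just a question of how large $\pi_m - \pi_f$ is relative to the sample size.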