• Jason2357@lemmy.ca
    4 hours ago

    Usually these models are trained on past data and then applied going forward, so whatever bias was in that past data gets baked in as a predictive signal. Plenty of facial feature characteristics correlate with race, and when the model picks those up because the past data is racially biased (over-policing, lack of opportunity, poverty, etc.), that bias will be in the model. Guaranteed. These models absolutely do not care that correlation != causation. They are correlation machines.
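    To make that concrete, here's a minimal toy sketch (all names and numbers are made up for illustration): the model never sees the protected attribute, only a facial-feature proxy that correlates with it. Behavior is identical across groups, but historical enforcement isn't, and a pure correlation machine picks the proxy up anyway.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hidden protected attribute -- never given to the model
    group = rng.integers(0, 2, n)

    # A facial-feature proxy that merely correlates with group membership
    proxy = group + rng.normal(0.0, 0.5, n)

    # "Ground truth" behavior is identical across both groups...
    behavior = rng.random(n) < 0.05

    # ...but the historical record is biased: group 1 is over-policed,
    # so the same behavior is far more likely to end up in the data
    recorded = behavior & (rng.random(n) < np.where(group == 1, 0.9, 0.3))

    # A correlation-driven model would latch onto the proxy,
    # because it genuinely does predict the (biased) past labels
    r = np.corrcoef(proxy, recorded)[0, 1]
    print(round(r, 3))  # positive, even though the proxy causes nothing
    ```

    The proxy has zero causal effect on behavior, yet it correlates with the biased labels, so any model fit on that history will use it.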