Zarathustra[H]
Recent research published in the journal Science concludes that AI and machine learning programs may need a built-in "stereotype catcher," after showing that various AI systems' relatively new language abilities can become an instrument of unintentional discrimination based on gender, race, age and ethnicity.
The problem is apparently that machine learning algorithms, because they are designed to learn on their own, pick up human biases from their training data and learn those too. I don't think this should be a huge surprise, since Microsoft's adventures with Tay showed it pretty clearly.
The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
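For anyone curious how you'd actually detect this, the researchers measured associations between word vectors, roughly in the spirit of the toy sketch below. To be clear, this is not the paper's actual code: the 3-d vectors here are made-up placeholders, and a real test would load pretrained embeddings like GloVe or word2vec and use full word lists.

```python
import numpy as np

# Toy 3-d "embeddings" used purely as placeholders; a real test would
# load pretrained vectors (e.g. GloVe) trained on web-scale text.
vectors = {
    "flower": np.array([0.9, 0.1, 0.0]),
    "insect": np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant_words, unpleasant_words):
    """Mean similarity to the 'pleasant' attribute set minus mean
    similarity to the 'unpleasant' set, in the spirit of the paper's
    word-embedding association test."""
    pos = np.mean([cosine(vectors[word], vectors[a]) for a in pleasant_words])
    neg = np.mean([cosine(vectors[word], vectors[a]) for a in unpleasant_words])
    return pos - neg

# A positive score means the target word sits closer to the "pleasant"
# words in the embedding space, mirroring an implicit association test.
print(association("flower", ["pleasant"], ["unpleasant"]))  # > 0
print(association("insect", ["pleasant"], ["unpleasant"]))  # < 0
```

Swap in names or occupation words for the targets and the same score exposes the gender and race associations the study found, since the embeddings absorb them from the text they were trained on.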