Discrimination in AI and Algorithms

Mohi Beyki
2 min read · Oct 25, 2020


Algorithms are not inherently discriminatory, but they can behave as if they were. How can we prevent such behavior?

In general, algorithms that learn patterns from data can also learn discriminatory behaviors. Skewed input datasets are usually the cause of this issue.

How can we prevent this? In my opinion, the answer starts with balanced datasets. My research is in deep learning, and in my work I use huge datasets to train my neural networks. I can prevent discriminatory behavior by running a pre-analysis on the input dataset to find out whether it is skewed against a minority group. However, since my research is in the healthcare sector, my datasets are anonymized, which makes this hard to determine.
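When group labels are available, such a pre-analysis can be as simple as comparing group frequencies. Here is a minimal Python sketch; the group names and the 90/8/2 split are made up for illustration:

```python
from collections import Counter

def group_balance(labels):
    """Report the share of each demographic group in a dataset.

    `labels` holds one group identifier per training example. A heavily
    skewed distribution is a warning sign that a model trained on this
    data may underperform on minority groups.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical dataset where one group dominates.
labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
for group, share in sorted(group_balance(labels).items()):
    print(f"{group}: {share:.1%}")
# group_a comes out at 90.0%: strongly skewed, so consider
# re-sampling or re-weighting before training.
```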

Another solution might be to build a large dataset by gathering data from multiple sources. In general, using multiple sources is good practice: different researchers use different techniques and reach different audiences, and researchers in different parts of the world sample different populations in their experiments. Therefore, using multiple sources should lead to more diverse input data, as the sketch below illustrates.
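As a rough idea of what pooling sources could look like in practice, here is a hypothetical pandas sketch; the file names and the `group` column are assumptions, not a real dataset:

```python
import pandas as pd

# Hypothetical sources: three CSV files collected by different research
# groups, each sharing the same schema, including a `group` column.
sources = ["site_a.csv", "site_b.csv", "site_c.csv"]

# Pool the sources into one dataset, tagging each row with its origin
# so per-source skew can still be inspected later.
frames = [pd.read_csv(path).assign(source=path) for path in sources]
combined = pd.concat(frames, ignore_index=True)

# Re-check the group distribution of the pooled data; pooling only
# helps if the sources actually cover different populations.
print(combined["group"].value_counts(normalize=True))
print(combined.groupby("source")["group"].value_counts(normalize=True))
```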

Lastly, I believe we need to analyze our results to make sure they are not discriminatory. I'll give a somewhat imperfect example of this. Take Chummy Maps: if the user input comes from a large and diverse group of people, then I think it is safe to report those neighborhoods as "unsafe." If there are doubts about the data, the results can be double-checked with the police department. Additionally, I think user feedback can help a lot with these automated systems. Checking all neighborhoods manually is costly and practically impossible, but a "report an issue" button would let users report false positives, as sketched below.
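One way such a feedback loop could look in code: a minimal Python sketch that only labels a neighborhood "unsafe" when many distinct users report it, and suspends the label once enough users dispute it. All thresholds and names here are hypothetical, not part of any real Chummy Maps API:

```python
def should_label_unsafe(report_user_ids, false_positive_flags,
                        min_reports=50, min_distinct_users=30,
                        max_flags=5):
    """Decide whether to publicly label a neighborhood 'unsafe'.

    The label requires many reports from many *distinct* users, and is
    suspended once enough users flag it via a 'report an issue' button.
    """
    if false_positive_flags >= max_flags:
        return False  # enough users disputed the label; hold for manual review
    if len(report_user_ids) < min_reports:
        return False  # too few reports to be confident
    if len(set(report_user_ids)) < min_distinct_users:
        return False  # reports come from too few people to be representative
    return True

# Hypothetical usage: 60 reports from 45 distinct users, 1 dispute.
reports = [f"user_{i % 45}" for i in range(60)]
print(should_label_unsafe(reports, false_positive_flags=1))  # True
```

The design choice here mirrors the argument above: a single noisy signal (raw report volume) is not trusted on its own; it is cross-checked against reporter diversity and user feedback before the system makes a public claim.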

