I don't understand why people are surprised by this. AIs are written by people with bias, tested by people with bias, and trained on data sets that contain biases. Why are we shocked that the AIs end up biased? Seems a foregone conclusion to me until we can figure out a better way for them to work.
@mike Probably because we still want to believe that Data is pure and unbiased, so if we just feed the machine enough of it, it will magically become unbiased.
@clerical I would love it if the data was pure and unbiased, but because these AIs are created the way they are, they're probably going to show MORE bias, not less. A human will at least pretend that they're not being biased (or make an effort to not be overt about it), and AI doesn't know it's "supposed" to do that. Without AIs and data sets specifically curated to eliminate those biases, they're obviously going to be there. As far as I'm aware, no one is curating their data sets that way.
@mike We have been discussing this with recruitment. If you Big Data your way to a database of successful recruitment, you will quickly run into some pretty heavy bias baked into the historical record.
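The recruitment point is easy to demonstrate. Here's a toy sketch (all numbers and group labels are hypothetical, not from any real data set): a model that simply learns hire rates from historical decisions reproduces whatever bias those decisions contained, even when the groups are equally qualified.

```python
# Toy sketch with made-up numbers: a model fit to historical hires
# inherits whatever bias is in those hires.
from collections import defaultdict

# Historical outcomes: (group, qualified, hired). Both groups are equally
# qualified here, but past recruiters hired group "A" far more often.
history = ([("A", True, True)] * 80 + [("A", True, False)] * 20
         + [("B", True, True)] * 20 + [("B", True, False)] * 80)

def hire_rate(records):
    """Per-group hire rate learned straight from the data."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, _qualified, was_hired in records:
        total[group] += 1
        hired[group] += was_hired  # bool counts as 0/1
    return {g: hired[g] / total[g] for g in total}

model = hire_rate(history)
print(model)  # equal qualifications, unequal scores: A at 0.8, B at 0.2
```

Nothing in the pipeline "decided" to discriminate; the disparity is just the historical record echoed back as a prediction.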