This is a problem with our existing data sets. Humans create the data sets these AIs are trained on, which allows even unconscious biases to carry over into the AIs. We need to diversify the populations represented in these data sets, but that's easier said than done, apparently.
"Google’s Hate Speech-Detecting AI is Biased Against Black People"
@firstname.lastname@example.org This is an absurdist and possibly racist interpretation of the data.
The more sensible interpretations would be either that black people are more hateful (which I doubt) or simply that black people use words like "nigger" more frequently than any other group.
The proper solution to this manufactured problem is to throw concepts like hate speech into the trash, along with the ideologies that produced them.
@matrix07012 It isn't a manufactured problem. The internet is filled with horrible speech. You can throw out the concept, but you can't throw out all of the hate that is spewed online.
@email@example.com By "manufactured problem" I was referring to the problem the article describes. Because hate speech is subjective, even a human can't moderate it objectively, let alone an AI, which doesn't understand context, sarcasm, etc. at all.
But yes, the moral panic of today is about hate speech and Nazis. The fact that speech you find disgusting exists on the internet isn't proof of a problem, and even if it were, the block button exists, as does the power button on your computer.
@zeh Because it's not just something that affects big corporations. Bias in data sets affects small initiatives even more than it does these big corporations. For example, Mycroft AI.
I can't say for certain, because I have no source backing this up, but I'd imagine that smaller projects like Mycroft probably have MORE of an issue with bias than larger corporations, simply because those large corporations have access to more people.