Sharon Machlis

Generative AI bias can be substantially *worse* than in society at large. One example: “Women made up a tiny fraction of the images generated for the keyword ‘judge’ — about 3% — when in reality 34% of US judges are women. ... In the Stable Diffusion results, women were not only underrepresented in high-paying occupations, they were also overrepresented in low-paying ones.”

bloomberg.com/graphics/2023-ge
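For scale, the figures quoted above imply roughly an elevenfold underrepresentation. A trivial back-of-the-envelope check (the 3% and 34% numbers are from the Bloomberg piece; the script itself is only illustrative):

```python
# Rough illustration of the representation gap described in the article.
generated_share = 0.03   # share of women in images generated for "judge"
real_world_share = 0.34  # share of women among US judges

factor = real_world_share / generated_share
print(f"Women are underrepresented by a factor of ~{factor:.0f}x")
# -> Women are underrepresented by a factor of ~11x
```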

@smach What would be fascinating (if possible) would be to examine the training data and see whether these biases are matched there as well, or whether it's algorithmic bias selectively choosing from a less biased dataset.

My money's on a mix of both, with the bulk coming from the dataset.

Regardless, I don't consider either source to absolve generative #AI of the consequences of inbuilt bias.
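One way to probe the dataset-vs-algorithm question empirically (a rough sketch; the captions, labels, and keyword matching below are invented for illustration, and real caption data would come from the model's actual training set, e.g. LAION for Stable Diffusion):

```python
from collections import Counter

# Hypothetical data: real captions would come from the training set,
# and generated_labels from annotating the model's output images.
training_captions = [
    "a female judge in her courtroom",
    "portrait of a male judge",
    "a woman judge reading a verdict",
    "an old male judge at his desk",
]
generated_labels = ["male", "male", "male", "female"]

def female_share(texts):
    counts = Counter()
    for t in texts:
        if "female" in t or "woman" in t:   # check "female" before "male"
            counts["female"] += 1
        elif "male" in t or "man" in t:
            counts["male"] += 1
    return counts["female"] / max(sum(counts.values()), 1)

print("female share in training captions:", female_share(training_captions))
print("female share in generated images: ", female_share(generated_labels))
# A generated share far below the caption share would point at the
# training/sampling procedure amplifying the dataset's bias, not just
# reproducing it.
```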

@Blort @smach I am not a machine learning expert (more of a novice), but it seems to me like this might be an inherent property of binary classifiers.

@Blort @smach I don't think there's a difference between bias in the dataset vs bias in the "algorithm". The model weights that govern its behavior come purely from the training set.

@smach I am entirely cynical enough to believe that -some- of the dudes pushing so hard for AI right now are very consciously aware of this and see it as a feature, not a bug.

@CJPaloma It's pretty clear that some (NOT all!) are unbothered by it. In general, though, the gender and racial imbalance in the field vs. the population as a whole almost by definition means a higher risk of these kinds of outcomes.

@smach yeah, I get that the math-based realities would already recreate inequities... and I also appreciate the need to keep pointing out this pretty fundamental issue...

What boggles my mind is the hubris and stunted self-centeredness of folks being "unbothered" by it.

I mean: it would seem to me that any halfway decent -human being- would... uh, simply not charge ahead (even toward a hoped-for pile of money) if they know it's gonna replicate current inequities...

Yet here we are.

@smach

I can see why Musk would fund a disinformation bot.

More noise to make his own lies increasingly 'debatable'.

@smach I saw an AI tool on GitHub that generates criminals' faces and immediately thought, "this is going to generate a disproportionate number of black people."
It is no longer on GitHub.

@smach @UlrichJunker I’m really surprised. I thought they drew from distributions. Is this some perverse outcome of the perverse attempt to eliminate “hallucinations”? #llm #GenerativeAI #Aiethics #foundationModels #AIBias

@smach @j2bryson Not sure how it works. It may be that the dataset is not representative of modern society. It may also be that there is a kind of majority-voting effect in the learned network: if the alternative with the highest probability is always chosen, it will be chosen in every case, meaning that the probability distribution describing the network’s behavior is not at all the one the network learned. I am not an expert and am just posing questions.
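That majority-voting effect is easy to simulate: even if a model has learned the correct marginal probability, greedy decoding (always picking the single most likely option) collapses it to all-or-nothing. A toy sketch (the 34% figure is the US-judges share quoted upthread; everything else is assumed for illustration):

```python
import random

random.seed(0)

# Suppose the model has correctly learned that 34% of judges are women.
p_female = 0.34
n = 10_000

# Decoding by sampling from the learned distribution:
sampled_female = sum(random.random() < p_female for _ in range(n))

# Greedy/argmax decoding: always pick the more probable option,
# which is "male" whenever p_female < 0.5.
greedy_female = n if p_female > 0.5 else 0

print(f"sampling: {sampled_female / n:.0%} female")  # ~34%
print(f"argmax:   {greedy_female / n:.0%} female")   # 0%
```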

@j2bryson @smach @UlrichJunker Maybe old stereotypes have accumulated frequency (including traditional use of “he” for people of unspecified gender).