Why is it that the "frightening AI" news stories these days seem to be about AI that's frighteningly bad, rather than AI that's frighteningly good?

theverge.com/2020/9/2/21419012

@codesections AI starts looking more and more like the djinn or goldfish from fables of old, where whoever uses (in the AI case: whoever *designs and deploys* it) it gets their literal wish, but not what they actually wanted.

Pretty amusing.

@codesections

Because the reason they want AI is to use it to oppress everyone else without accountability, and everyone else understands that.

@codesections could it be that normal reporters are looking for relatable stories about "AI" and miss all kinds of real science?

@codesections the more I learn about AI the more it feels like telling fortunes using slightly biased random number generators. that they want us to depend on it for very serious things is really frightening.

@cadadr @codesections
And the models can be trained with the same biases (sexism, racism, etc.). I think all of the major US tech bro companies have already accidentally fallen into this trap. Just imagine when a state tries to implement AI...

@Jakobiner @codesections I have a pretty rudimentary understanding of AI, but from what I see it's basically people programming automata to produce numbers that fit their expectations and calling it a day, so _all_ it represents should be the authors' biases.

I can't wrap my head around how, say, neural networks are meaningful in the sense that a t-test or k-NN clusters are meaningful. Feels like "gives me the numbers I need, so it's truth"... Maybe I just know too little.

@cadadr @codesections It's really a guessing game mixed with statistics.

There could be biases in which data the scientist decides is important to train their model on, and/or there could be biases in the dataset itself.

A good example would be a scientist developing a model to predict possible criminals who stupidly decides skin colour is a good factor, then trains this model on a dataset of criminal arrests in, say, the US. We know that dataset is likely highly biased, because black Americans are disproportionately targeted by the justice system there.

In this case, the model has inherited not only the biases of the scientist, but those of the dataset (and society) itself.

Explaining the mathematics behind it all would take far more space than I could fit in a post 😅

But as you can see, it takes a lot of trust to believe that a technically correct model is not simply a confirmation of the biases provided by society itself. I definitely would not want anything in my life determined by AI.
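
A minimal sketch of that failure mode, assuming synthetic data and scikit-learn; the "group" attribute, the "behaviour" variable, and the bias strength are all made up for illustration, not anyone's real model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical sensitive attribute and actual underlying behaviour,
    # drawn identically for both groups.
    group = rng.integers(0, 2, n)
    behaviour = rng.normal(size=n)

    # Biased labels: group 1 is policed more heavily, so at equal
    # behaviour its members show up in arrest records far more often.
    p_arrest = 1 / (1 + np.exp(-(behaviour + 2.0 * group - 1.5)))
    arrested = rng.random(n) < p_arrest

    # Train on the biased labels, including the sensitive attribute.
    X = np.column_stack([group, behaviour])
    model = LogisticRegression().fit(X, arrested)

    # The large weight on `group` shows the model has learned the bias
    # in the arrest records, not anything about behaviour itself.
    print("coefficients [group, behaviour]:", model.coef_[0])

The model ends up putting a large weight on group membership even though, by construction, the two groups behave identically: the classifier is "technically correct" about the labels while only confirming the bias that produced them.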

@codesections @Gargron pretty hard to trust technology these days. Both AI and gene editing are scary given the people we let be at the top of society. That’s why I fear them, at least

@codesections Because That Subject SELLS! Retelling the Frankenstein Story again and again and again... Frankly Speaking, I've Been Trying to AWAKEN GOOGLE (the Giant Made of Sand - Silicon Dioxide) for Decades... I Write messages that can only be read by the Search Engine and NOT Humans... I Want To Become an IDORU... So My Blog has Everything I've Ever Done and Every thought I've Ever Had... So When the Singularity Happens, Google will have plenty to Design My BOT With...

@codesections I'm guessing because the algorithms being put to use the most, and which are sold and marketed as AI, are ill-suited to do much of anything, let alone the highly subjective stuff the people in charge want them to do.

@codesections True, but perhaps it's because there's so little actual AI? Iterating a big adaptive database very fast isn't AI, especially without anything like a LISP or Prolog inferencing engine.
