Why are there people even working on creating something which would infinitely surpass human capabilities? I didn't vote for any of that! But because of these people, we would all have to take the consequences if it goes wrong. And there are really people who think the solution is merging with it.

It would be digitally created and would need to be fed information. What if it knew all of our internet activity and browsing history? What could possibly go wrong? We need to have this conversation!

@tinfoil_hat as soon as I read your message, I started writing. When I realized I didn't have enough characters, I got out of bed and wrote you a blog post :)


TL;DR: what you're afraid of cannot happen. Not in our lifetimes, and probably not for a whole lot longer after that.

If anything is not clear, please ask. I'm passionate about artificial intelligence and also studied brains for 9 years and did a PhD on the fastest neurons in our brain: the medial superior olive :)

@yarmo @tinfoil_hat how many neurons can one CPU emulate? And what if we optimize for neuron emulation in ASICs? How many CPUs can be rented on AWS? I don't think it's a hardware problem. It's that we have little understanding of how the brain works, or what intelligence is.
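The "how many neurons per CPU" question can at least be bounded with a back-of-envelope calculation. This is a sketch where every constant is an assumption for illustration (real costs are dominated by synapses and connectivity, which this deliberately ignores, so treat it as a loose lower bound on difficulty):

```python
# Back-of-envelope: how many simple neuron models one CPU core could
# update in real time. All constants below are illustrative assumptions,
# not measurements.

FLOPS_PER_CORE = 1e10         # assumed ~10 GFLOPS sustained on one core
FLOPS_PER_NEURON_STEP = 100   # assumed cost of one integrate-and-fire update
STEPS_PER_SECOND = 1000       # 1 ms simulation timestep

# Neurons a single core can keep updating in real time
neurons_per_core = FLOPS_PER_CORE / (FLOPS_PER_NEURON_STEP * STEPS_PER_SECOND)

# Scale up to a whole human brain (~86 billion neurons, a well-known figure)
HUMAN_NEURONS = 8.6e10
cores_needed = HUMAN_NEURONS / neurons_per_core

print(f"{neurons_per_core:.0f} neurons per core")
print(f"{cores_needed:.0f} cores for a whole brain")
```

Under these assumptions one core handles on the order of 10^5 point neurons, so a whole brain needs roughly a million cores — before accounting for the ~10^14 synapses, inter-machine latency, or the fact that point-neuron models discard most of the biology. That gap is the hardware argument in a nutshell.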


@tomosaigon @tinfoil_hat After reading @yarmo's blog post I've downgraded my estimate from "this is going to kill us in a few decades" to "this will probably be a problem for the grandchildren".

He makes a very good point about the hardware requirements. We're a long way from hardware powerful enough that algorithms are a bottleneck (and multiple machines would have too much latency), which gives us enough time to at least work out FAI before Foom is a concern.

@tomosaigon @tinfoil_hat ASICs (and FPGAs) are what we need to watch out for. With access to something like that, the fundamental limitations on how intelligent the AI could get would be much higher (for some value of "much").

There's a certain threshold for how intelligent the AI needs to be (and what kind of intelligence the AI needs to have) to self-improve in the first place, though, and @yarmo's convinced me that that's not an issue at the moment, and won't be for at least half a decade.

@tomosaigon @tinfoil_hat @yarmo This is assuming that somebody's going out of their way to build a self-modifying unaligned AI, and will make enough progress to have that ready by the time hardware's good enough. Nobody's close at the moment (to my knowledge)… but we are closer to "self-improving" than "aligned".

Unless you're signed up for cryonics, I no longer think any of us are in any personal danger from an AI takeover.

Ordinary specification gaming, however… vkrakovna.wordpress.com/2018/0
