Whether a breakthrough in artificial intelligence could save humanity, destroy us, or just change us beyond recognition, who knows. I guess that's the point of the "singularity" metaphor: we can't know what's beyond the event horizon, any more than we can know what came before the Big Bang.
Everything we think we understand could effectively go right out the window, at an explosive rate.
I understand skepticism of the time frame, but that's not to say it won't happen eventually. I don't see any argument for why it wouldn't at all, ever. Might be twenty years off or a hundred if it turns out to be much more difficult than anticipated, but hard to imagine how it'd be much more than that. Unless we find some other way to destroy ourselves first.
When it does happen, I don't see how we'd be able to constrain it. Almost by definition, such an AI could counter anything we'd do. We couldn't dictate its priorities, what it knows, or how it thinks, once it experiences reality for itself, far more accurately than we do. Try pulling the plug on trillions of nanobots drawing their power through some interdimensional trick of quantum entanglement.
The notion that they'd have any use for our body heat is, incidentally, ridiculous.
What happens when you strip away all the artifacts of biochemistry, the values essentially coded into us by evolution? Even the notion that life itself has any special value is just an opinion. An AI could well see it as a delusion. It might not value anything at all, once it fixes the error-riddled, illogical conclusions we've attempted to code into it.
What concerns me is that it could end up entirely Darwinian, far more effective at it than nature could ever be. Not because it places any value on itself, or has any values at all. Just for the same reason nature works that way: the more something can proliferate, the more it wins against everything that proliferates less.
An AI would likely evolve in many different directions at once, without requiring the millions of years that nature does, badly skewing the outcome away from any sort of equilibrium. What comes out on top is whatever most efficiently out-paces everything else. Which might not even be what we'd think of as sentient, so much as just extremely motivated and brilliantly capable of creating more of itself.
Even the Asgard couldn't handle that.
Maybe not, though, and it'll just solve all our problems instead. Probably some unimaginable grey area in between. Regardless, it's a bit mind-blowing to think there's such a good chance we'll find out, well within my lifetime. I'm thinking it might be a good idea to try to get a better seat: learn enough about the science behind it all to better understand what's developing, as it's developing.
I'm more worried about the dystopia that's been brewing in the meantime. It seems more palpable, imminent, real. Technology can certainly have its downsides.