Of Brains and Bots
July 26, 2009
Farmer Wu Yu drives his rickshaw, pulled by his self-made walking robot, near his home in a village on the outskirts of Beijing.
The New York Times has a piece today about the dangers of computers becoming too smart. It was prompted by a group of scientists' response to Ray Kurzweil’s paean to the upcoming age of brilliant machines, when we will all be immortal and the world will be transformed beyond recognition. Oh right, the first half of that sentence pretty much implies the latter half. But transformed in even more ways! His book, The Singularity Is Near, is fun and exciting-scary but not entirely plausible. But who can really know? The Times quotes Dr. Eric Horvitz, president of the Association for the Advancement of Artificial Intelligence, as saying “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.” Yeah, that’s the goofy side of Ray. But the man’s no slackwit.
I can’t help feeling more intrigued by computers getting smart than worried about it. Maybe that’s because there are already so many doomsday scenarios out there, most of them very plausible, and/or because I’ve always been a fan of intelligence. If we create our superiors and they take over, so be it. If they’re nasty soulless machines, sure, that’s another story, but who says that’s likely to be the case? Intelligence without emotion doesn’t really function, as researchers have finally figured out—emotion is the stimulus for thought—and intelligence plus emotion without empathy is hard for me to envision. That advanced AI creations might not have empathy for us is entirely possible, though. We’re not doing so well with chimpanzees and gorillas, are we?
Kurzweil’s thesis is that once computers attain self-consciousness, they’ll be able to direct their own evolution, without our cultural repugnance to the idea, and get smarter by leaps and bounds. I’m not sure about this; intelligence still needs experience to shape it, and with experience comes culture—who says the smart computer will be so interested in making the even smarter computer?
The Times story is not about the dangers of the Kurzweil scenario so much as about the dangers of somewhat-smarter systems: ones that will take over jobs or be exploitable by criminals, governments and corporations. Those are worrisome possibilities, and since they’ll happen (have already begun to happen) before genius computers offer us immortality in their digital arms, they’re more likely to shape people’s response to advances in AI. It’s hard to imagine what would really stop progress, though—without the yuck factor involved in engineering babies or creating animals that are nothing but meat, and without the historical evidence of nuclear experiments, public opposition probably won’t grow fast enough.
People won’t like it when their computers can critique their job performance accurately, and when the first self-driving automobiles crash, there will be plenty who will disregard statistics showing that they crash a tenth as often as other cars, or whatever the numbers turn out to be. But there are too many very smart techno-freaks out there. And they revere intelligence more than I do, having more of it to begin with.
So get ready for an interesting next 20 years. Climate crash, self-aware computers…this Great Recession, our first black president, whatever you think is new and different about this moment in history—you ain’t seen nothing yet.
And for the here and now: what about robots that eat household pests? Check out this article from New Scientist.