The Panasonic Professor
Posted by goatchurch at 1:51 PM
Rodney Brooks, Panasonic Professor of Robotics at MIT, opened his talk at the Singularity Summit:
"I haven't been involved in the Singularity Institute before, so I thought I'd check and see exactly what is meant by 'The Singularity'. The 'Singularity' as defined by the Singularity Institute is:

The technological creation of a smarter than human intelligence.

and the questions are whether this will lead to opportunities and risks."

He then made some points about what the age of flight might have looked like to the Parisians in 1783 on seeing the first hot air balloon drift over their city.
Now predicting the future is sometimes hard.
"As I was looking through quotes about the future, I realized we didn't know how good we had it when Dan Quayle was Vice President."

The rest of the talk got into the history and development of robotics over the past 25 years at the MIT labs. Happily for me he covered the spectacular failure of anyone to crack the obvious problem of human vision.
The future's here, maybe, just not everyone's got it. And every research lab in the world now uses this slogan: "The best way to predict the future is to invent it." -- and we're the ones who are going to invent it.
I think Arthur C. Clarke had it right when he said, "When it comes to technology, most people over-estimate it in the short term, but under-estimate it in the long-term."
The way we think about the future is often through Hollywood. But Hollywood has a very specific way of talking about the future... [which is to] take the world exactly as it is, and then we add one thing...
My point is that when an Artificial General Intelligence appears, the world is going to be a very different place than it is today. So it's not today's world and add in this really super-intelligent being; it's the world that's going to change over time. And I think, by the way, we will be long gone, but in a positive way.
So the world is going to be different before we have these General Intelligences. Notice I said "when", not "if".
Towards the end of the talk he ran through a series of scenarios about how artificial intelligences could emerge from things like home robots looking after us, or brain implants developed onwards from the cochlear implants which already exist.
In these proposed scenes he makes the point that we will be different from the way we are now by the time the Singularity happens. It might be that there is no us and them at the precise turning point, so we might not notice.
In the context of the historical observation (not necessarily proven) that war-making is what really drives technology along, I picked up on the final question at the end of the speech. After talking about the company he founded, iRobot, which has supplied products to all sectors of society including the military, there was the following exchange:
Professor Panasonic: There were reports that the PackBots had been equipped with machine guns. That's not true. None of the PackBots have had machine guns. The Talon from Foster-Miller has had a weapon on it, all with a safety circuit and a human in the loop. I think it's an interesting question. When we want to allow robots to have independent targeting authority, I think now is the time to act. There are a bunch of ethics conferences coming up in the next year. I think it's time to put this into the Geneva Conventions -- some governments do go along with the Geneva Conventions -- and I think it's time to think about that.

Question: You said "some governments" follow the Geneva Conventions, but apparently not the one you've done some work for. Is it a good idea to be developing AI in robots for the US Government? In my mind that could lead to some of the worst nightmare scenarios.

Professor Panasonic: That's in a sense nothing to do with AI. That's a question which has faced scientists since the time of Da Vinci, who was completely funded by doing military work for his patrons. So that's an issue which scientists have had to deal with for hundreds of years independent of the AI question. And I think it's the responsibility of scientists to worry about controls and how things are used, and I think that the Geneva Conventions have been a good way of doing it. We've seen very little biological weaponry appear because it was banned by the Geneva Conventions. I think it has been successful. There are perturbations. Governments do change. Governments can change. People can change the Governments. And I think it's going to be an ongoing question for a long time. But I don't think it's AI specific. I think I'm finished, sorry.

Well... What a sterling demonstration of back-pedalling, as well as the principle that it's impossible to get someone to understand something when their income depends on their not understanding it.
The real question is what's the difference between robots wandering around with weapons, and a minefield? Since the problem with dumb minefields is that they are a gift that keeps on killing, maybe it's the high-tech nature of robots that makes them better. We know how good software is, don't we, boys and girls, especially when it gets contracted out to the private sector. You don't think our lives count more than our votes, do you? The guys who gave us the Enforcement Droid Series 209 in RoboCop back in 1987 probably didn't go far enough, given what may be coming down the pipeline.
I expect my SF writers to go to places where Professors of Robotics fear to tread. The people at those ethics conferences he speaks of are going to be writing Mundane-SF. Probably without sufficient readability or characterization. But it's all there for the picking if anyone bothers to find it on the internet for free.
Anyways, that's just a little side issue of mine. It's always good to have a bit of passion about something, such as a healthy hatred for things likely to kill you and your fellow human beings.
What was really important was the Professor's statement that when (his "when", my "if") the Singularity happens, it will already be quite a different world from the one we have now. My observation of attempts to write about the Singularity is that it's a speculation too far. It leaps over so much that you tend to see stories about a world that's exactly the same as today's, but with this one thing different.
If Mundane-SF is about anything, it's about filling in the gigantic gaps relating to the interesting, likely, and possible scenarios in future history as it may occur. Conventional SF has shown itself wholly unwilling to go there. At its core, Mundane-SF is about originality. The truth is stranger than fiction, so why not make a little bit of room for the truth to come in and visit. Show it around. Get used to it. Don't scare it away with all your bright lights and noisy trope-ic nonsense.
Stories which feature thinking computers are almost always unoriginal and generally unenlightened by the scads of changes that have got to happen to us before the Singularity. Whether the Singularity will ever happen is debatable, but many more ought to be able to see that it's not producing satisfying SF. It's like going straight for the candy instead of saving your appetite for a proper meal.
This trope has got to go.
P.S. Picture at top is of three Goliath tracked mines, manufactured in 1942 to carry up to 100kg of high explosives each. Imagine how much more advanced we can make these things 65 years later.