I’ve been rereading David Brin’s first Uplift series — as astonishingly self-consistent a vision of galactic life as any science fiction writer has ever offered, and quite appropriate to the Christmas season. In Brin’s imaginary universe, a mysterious and long-gone race known as the Progenitors set in place a unity of life across five galaxies, largely by focusing moral valuations around the development and protection of sentience.
In the books, the first contact between humans and these galactic races is presented as having happened after humans have brought dolphins and chimps to full sentience. That not only makes humanity a wolfling culture, meaning one which evolved intelligence without outside help, but one with the prestige of already having two “clients” of its own. That combination engenders both fear and envy among the galactics, many of whom don’t believe wolflings possible and deeply resent humanity for having the sheer effrontery to compound the crime of existence by “uplifting” dolphins and chimps without galactic help.
It is those emotions set against the relationships, real and implied, between humans and members of other cultures that fuel the story lines in the books.
The books are wonderful, but they don’t tell you anything about aliens because, of course, the emotions and relationships are, like the political structures and expediences, all as human as the author. Nothing else is, I believe, possible for sane authors since a human writing about the sound a tree makes falling in the forest can only reflect that sound as heard, or imagined, through human ears. We’re pretty closely related to dolphins, but I very much doubt we’re capable of imagining how a dolphin would understand that event, because the things that make life beautiful to it are not the same as the things that make life beautiful to us.
It is shared evolution, not language, that unites humanity: We are what we see, our behaviors, responses and feelings co-evolved with our perceptions of the world outside ourselves. Thus, most people experience roughly the same emotional reaction to the sight or idea of a sheltered valley, but we have no idea what evolutionary response the same stimulus would trigger in a dolphin. More to the point of this column, we have no idea what it should mean to an artificial intelligence, supposing we were able to build one.
Tracy Kidder’s book, The Soul of a New Machine, isn’t about the soul of the machine at all, but about the commitment of the engineers developing it. Implicitly, however, there are assumptions of both value and transfer in the book: value in the sense that the human commitment, emotions and drives are assumed to be worthwhile, and transfer in the sense that the effect of these factors among the developers is presented as adding value to the machine.
No Working Definition of Intelligence
You don’t see consideration of anything remotely like that in the writings credited as fundamental among the artificial intelligence community. In fact, look closely at the literature in that field and it appears that none of the basic problems affecting artificial intelligence have been addressed by anyone recognized as important among the reigning experts. There isn’t, for example, a working definition of intelligence that can be used to unambiguously differentiate what is, and is not, intelligent. Apparently, they’ll know it when they see it — and meanwhile there’s no need for them to know what they’re working toward in order to work toward it.
In fact, after reviewing the literature, it’s not hard to believe that the longevity of the Turing Test (conversation indistinguishable from a human) as a working definition of intelligence reflects the general liberal tendency to believe that nearly all of the people they don’t know are subnormal. And it’s correspondingly hard not to wonder if the liberal repudiation of religion isn’t partially motivated by religion’s insistence on the fundamental equality of man.
The galactic cultures in Brin’s series see themselves as doing the work of God: transforming potential intelligence produced through natural evolution into true sapience through the uplift process. In their system of belief, Darwinian evolution cannot make this step by itself; thus the occasional appearance of a wolfling species like humans, or the Progenitors themselves, reveals the hidden hand of God.
Brin admits machine intelligences into his Uplift universe, but doesn’t give them the obligations that come with sapience. There is a deep environmentalist position here in that making the jump from intelligence to sapience is presented as requiring a moment when the intelligence looks outside itself to see beauty in its own fundamental unity with the environment in which it evolved. Thus, Brin offers a position that accepts both evolution and creation: to his galactics, evolution can take an organism to pre-sentience, but it takes external intervention to put a soul into that new machine — an event he puts 50,000 years in our own past.
It is this event that powers the religious message in the Brin books, and it is the complete absence of any consideration of such an event that dooms the present coterie of AI researchers to continued irrelevance — whether or not we’ve already evolved computers to the point that uplift is actually possible.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.
Artificial intelligence is really an open-ended field which has never had a direction. While we’ve managed to improve daily life by making machines smarter, we’ve yet to understand what intelligence really is. Because of this basic inability to grasp the problem we’re trying to solve, we make very little progress.
I think the first thing we must accept is that there can be no intelligence without free will. We can’t turn a machine on, ask it a question, get an intelligent answer, and turn it off. Without a life of experiences to base its decisions on, it will not be able to make valid decisions at all. Thoughts aren’t just shut off – they keep running all the time.
As part of this the machine will have to be given the ability to create a unique identity for itself. You can’t expect an intelligent machine to be as inhuman as your coffee maker.
Machines will need artificial emotions before they can have artificial intelligence. Intelligence is not merely working out complex logic problems. Our thoughts are far removed from formal logic and are as often as not irrational. We make all our decisions based on emotional needs. Knowledge, outside input, and physical needs are the root of these emotional needs, but they are again far removed from the decision-making process.
If the machine can learn, feel, and understand ambiguous concepts then it is intelligent. How intelligent it can be will depend on the power of its hardware and on how efficient its programming is.
If you only define something as intelligent if it can speak, then many humans may not count. If you only define it as intelligent if it knows pi to the 10th place, then Google counts while many humans don’t. What sets us apart is our ability to feel. If you can feel then you can think. Animals can feel and think. They may not be as intelligent as humans, but they are intelligent. We should be able to create animal-level intelligences with today’s technology. With CPU power climbing ever higher, it’s not unlikely that we could eventually see human-level, or beyond, artificial intelligence.
We have some silly idea that we can make a computer that is as intelligent as a human but which has no needs or emotions. That seems highly unrealistic to me, as that just isn’t the way intelligence has been shown to work. Look at autistic people. They have a lot of their emotional circuitry broken, so while their brains can achieve massive calculations, they aren’t as intelligent as those of us with working emotions.
Don’t think I need one… "…Thus, Brin offers a position that accepts both evolution and creation: to his galactics, evolution can take an organism to pre-sentience, but it takes external intervention to put a soul into that new machine…". Maybe I am just severely biased from the constant idiocy I find on newsgroups, but this usually implies "God did it!". For those of us that don’t assume that the default must be such, the constant name-calling and the attempts to claim that ‘we’ are not moral or intelligent without belief are as bad as arguments that AI will never happen without some sort of ‘special’ thing called a soul being glued on.
Maybe the general lack of family this year, or anything else particularly Christmas-like, made me overreact a bit, but I can’t help but be a tad confused when someone makes a statement like the above and tries to make it the key point during a season that is primarily associated with sitting in a room unwrapping presents while feeling superior to other people for holding the opinion that we are spiritually better, never mind machines. When you start throwing in comments about ‘liberal tendencies’, which is all too often nothing more than non-liberals saying, "Our prejudices are normal, unlike those other people’s", it doesn’t help matters. The term is used wrongly by the people that subscribe to it and overused as a label for anyone who disagrees with something they think everyone else finds self-evident.
I am sorry for any misinterpretation I may have made, and for having seriously overreacted. However, you ironically managed to push all the wrong buttons with me, given recent experiences with both the so-called liberals and the opinions of those that label themselves as the authority on morals and intelligence. I sometimes feel, while willingly admitting I am probably no better in some cases, that the Ozy and Millie cartoon with Millie wearing an "I’m with stupid" T-shirt while holding a world globe is a bit too accurate. lol
Anyway… Hope your Christmas was more Christmassy than mine, and happy new year.
Human brains are machines. Complicated, of course, but there’s no magic involved. Any argument made against machines ever being capable of anything can be applied with equal validity to our own brains.
I am, of course, ignoring any religious nonsense. If you are of the opinion that some god has infused something mystical into humans, then of course you don’t see that biological brains aren’t anything special. That’s fine, you run along and have a good life while I continue complaining. As you leave, I might want to ask what the purpose of that brain organ is if your mysterious addition is what really runs the show, and why physical damage to that organ so easily interferes with the magical functions so thoughtfully provided, but never mind – you just go along now.
For the rest of us: those who deny AI need to remember that reflexivity is not a law of the universe. "A hammer is a tool; a hammer cannot saw wood; therefore no tool can saw wood" is instantly seen as an idiotic syllogism, but somehow equally stupid statements about the limitations of machines with regard to intelligence are happily swallowed whole. Artificial intelligence may be unreachable with an x86 processor, but x86 processors do not define the limitations of machines.
If machines cannot be intelligent, then we aren’t machines. That’s about as simply as it can be said. So, I’d like to ask those who insist that machine intelligence has limits one question:
If our brains aren’t machines, then what are they?
And merry Xmas to you too! (By the way, is there a remedial reading course in your area you could take?)
Odd that I never read such concepts into Brin’s work… Then again, I am not into name-calling, and I find it ironic that believers immediately label anyone that disagrees with them as ‘liberal’, which once meant, "Interested in the betterment of all people". Of course, it has been hijacked by nuts that believe the universe revolves around their opinions, even when those opinions are totally logically insane. The only difference between them and the ones that have tried (and often succeeded) in hijacking the conservative agenda is what nonsense they insist ‘must’ be true, despite increasingly persistent evidence that they are all wrong.
Personally, I am neither a liberal nor a conservative in the modern definition. If I were in some other parts of the world, I might be considered liberal, but only by the original definition. Ironically, my stand on religion would probably get me shoved into the liberal category as defined in the States by fools like you. The problems I have with it: hypocrisy, refusal to accept when it is wrong about something, spreading fear instead of hope, war from intolerance, attempts to destroy knowledge that it finds inconvenient, distorting that knowledge through ignorance and deceit when it can’t destroy it… I could go on.
As for the specific things you talk about… The definition of intelligence and sapience started out so simple. Something had to be self-aware, aware of its own emotions, aware of its own mortality, etc. Then research into animal behaviour made it a bit murkier. Later, more research made it even murkier than that. Now newer research into the way the brain works seems to indicate that even basic function is completely inconsistent with perception. Instead of one voice thinking one thing, recent research implies that our heads contain a myriad of voices, all trying to get our attention, and one ‘collection point’ of sorts that spends the day screaming, "Shut up, I am me!" Our ‘sapience’ appears to be little more than the fact that we are a little better at keeping all those voices shut away in the proper corners than other species. AI research, contrary to your claims, tries to follow brain research fairly closely. Yes, there are fools involved who decide their solution is the best and refuse to give up on it, even when it hits a dead end, but that is true in every field. I am sure that the first person to make bicycles with two equal-sized tires had to contend with some twit who insisted it was a stupid idea and everyone should stick with the ones where the front tire was 2-3 times bigger and you needed a ladder to mount it.
However, you may be right in one respect. A lot of academia has been overrun in recent years by so-called postmodernists, who delight in rearranging their prejudices in convoluted and incomprehensible prose and calling it ‘thinking’. And a fair number of AI researchers are now less concerned with the goal of human-like behaviour than with practical behaviour. However, for those of us who don’t attribute everything we don’t understand to the supernatural, and who find the argument that some gestalt event is needed to achieve anything of value, including the transition to sapience, to be gibberish, such opinion as you post here is no different from the protestations of ancient priests who once insisted that demons caused disease, that the sun revolved around the earth, and that we had to be the center of the universe. After all, why would God make it any other way? Too bad they got it wrong; it was such a lovely idea.
Unfortunately for you and others who get all fuzzy about reality at times like this, you miss the point that all those people you falsely label don’t so much disagree as know that we don’t understand anywhere near enough to make such a judgement. But then that is what believers do best: judge everything by a standard they can’t clearly define, can’t prove, can’t (by definition) duplicate, or in any other way validate, other than to say, "<Favorite Deity Here> did it." Exactly what you ironically accuse the supposed liberal AI researchers of doing.