Monday, January 14, 2008

Artificial Intelligence (AI)

As many of you know, I studied [Artificial Intelligence] (AI) at uni (please insert the easy jokes here) and, although my working life has taken me down a path away from everything I learned there, I have not lost interest in the subject.

The other day, in one of those brain meanderings that carry you away from the misery of daily life, I came up with a theory that is probably neither new nor especially revealing.

AI, as a subject, has for some time been branching into specialised fields, such as Natural Language (as a basis for developing a form of communication, whether for programming purposes, control, etc.), Machine Vision (object detection algorithms, tracking, etc.), Genetic Algorithms (originally conceived as the method for developing "learning" in an artificial intelligence), Logic and the Scientific Method (as a form of analysis of human thought) and a very long etcetera of side subjects.

All who know about AI know about the famous [Turing Test], a method devised by the mathematician [Alan Turing] (who is considered the father of AI even before the existence of computers as we know them today). The test states, in very few words, that a computer can be considered intelligent when, in a blind verbal exchange, the examiner is not capable of telling the human subject from the machine subject. "Blind verbal exchange" basically means through keyboard and screen, without knowing who or what is on the other side.

Throughout the history of AI there have been several programs that purported to be intelligent. The most famous one is probably [Eliza]. I personally had a copy, and it took me all of 10 minutes to figure out how it worked, yet thousands of people confessed the most outrageous intimacies to this little piece of software. This program is one of the reasons I chose my academic career. It was modelled on a psychoanalyst and replied to you with questions generated from the sentences you typed into it, plus some "canned" replies.
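To give an idea of how little machinery that takes, here is a toy sketch in the spirit of Eliza (not Weizenbaum's actual program; the patterns are invented for illustration):

import java.util.Scanner;

// A toy Eliza-style responder: a few keyword patterns plus canned fallbacks.
public class ToyEliza {
    static String reply(String input) {
        String s = input.toLowerCase();
        if (s.contains("i feel"))
            return "Why do you feel" + s.substring(s.indexOf("i feel") + 6) + "?";
        if (s.contains("mother") || s.contains("father"))
            return "Tell me more about your family.";
        if (s.contains("because"))
            return "Is that the real reason?";
        return "Please go on.";  // canned reply when nothing matches
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine())
            System.out.println(reply(in.nextLine()));
    }
}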

The problem was that Eliza didn't really learn anything. That's where my theory starts.

In a nutshell, the program must learn to ask itself questions and learn from the replies it obtains. That is, for a program to be really intelligent it must learn, store the conceptual information it obtains, extrapolate knowledge from that data, and be capable of finding the doubts that knowledge raises. It must also be able to recognise when it receives an answer to one of those doubts.
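A minimal sketch of that loop, assuming a deliberately naive representation (one definition string per concept), might look like this:

import java.util.*;

// A crude "curious" store: every unknown word in a definition becomes a doubt,
// and answering a doubt feeds back into the same learning step.
public class CuriousStore {
    Map<String, String> facts = new HashMap<>();   // concept -> what we know about it
    Deque<String> doubts = new ArrayDeque<>();     // concepts we still have questions about

    void learn(String concept, String definition) {
        facts.put(concept, definition);
        // Extrapolate: any word in the definition we don't know yet raises a doubt.
        for (String word : definition.split("\\s+"))
            if (!facts.containsKey(word) && !doubts.contains(word))
                doubts.add(word);
    }

    String nextQuestion() {
        return doubts.isEmpty() ? null : "What is '" + doubts.peek() + "'?";
    }

    void answer(String definition) {
        if (!doubts.isEmpty())
            learn(doubts.poll(), definition);  // resolved, but it may raise new doubts
    }

    public static void main(String[] args) {
        CuriousStore c = new CuriousStore();
        c.learn("rash", "irritation of the skin");
        System.out.println(c.nextQuestion());  // What is 'irritation'?
        c.answer("a state of mild discomfort");
        System.out.println(c.nextQuestion());  // What is 'of'? -- words are not concepts
    }
}

The toy immediately runs into the problem the next paragraph describes: it is chasing words, not the concepts behind them.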

This learning, naturally, cannot be limited to words, but must also extend to the concepts those words communicate. That is where the problem lies, since no one has been able to come up with a way to encode those concepts in a coherent structure capable of containing the chaos of reality. All attempts made so far have ended up as poor subsets, incapable of describing objects in more than a limited fashion, of describing actions as anything more than a series of steps, and so on.

However, we do have programs that are capable of calculating the trajectories of moving objects taking into account all (or most) of the variables existing in the real world, such as wind, air density, objects in the way, etc.
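That kind of calculation really is routine by now. A crude version (simple Euler integration with a quadratic drag term; all the constants are made up) fits in a few lines:

// A projectile with gravity, air drag and wind, stepped with Euler integration.
public class Trajectory {
    public static void main(String[] args) {
        double x = 0, y = 0;            // position (m)
        double vx = 40, vy = 40;        // velocity (m/s)
        double g = 9.81;                // gravity (m/s^2)
        double k = 0.05;                // lumped drag constant (made up)
        double wind = -3.0;             // horizontal wind speed (m/s)
        double dt = 0.01;               // time step (s)

        while (y >= 0) {
            // Drag opposes the velocity relative to the air, not the ground.
            double rvx = vx - wind;
            double speed = Math.sqrt(rvx * rvx + vy * vy);
            vx += -k * speed * rvx * dt;
            vy += (-g - k * speed * vy) * dt;
            x += vx * dt;
            y += vy * dt;
        }
        System.out.printf("Lands at about x = %.1f m%n", x);
    }
}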

Will we be able to synthesise concepts such as "irritation", explaining all their implications? And, of course, similar ones such as "itching" and "rash", and the relations between these concepts? How about more complex concepts such as feelings? How do you synthesise happiness, sadness, love, loneliness, hate, anger? And how do you synthesise the necessary explanations of how to go from one state to the other?
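Even a naive attempt makes the difficulty obvious. A toy semantic network (the relations below are my own guesses, chosen only to show the structure) captures the shape of those links but none of their content:

import java.util.*;

// A toy semantic network: concepts joined by labelled relations.
public class ConceptNet {
    static Map<String, List<String>> net = new HashMap<>();

    static void relate(String from, String relation, String to) {
        net.computeIfAbsent(from, k -> new ArrayList<>()).add(relation + " --> " + to);
    }

    public static void main(String[] args) {
        relate("rash", "causes", "itching");
        relate("itching", "is a kind of", "irritation");
        relate("irritation", "can lead to", "anger");
        // We can walk these links all day; nothing here says what irritation feels like.
        for (Map.Entry<String, List<String>> e : net.entrySet())
            for (String link : e.getValue())
                System.out.println(e.getKey() + " " + link);
    }
}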

Is it just a question of stored memory space and processing power? The technical capabilities in that respect are astronomically different from when I started my degree... and that was "only" 14 years ago.

... and still, research in this interesting field is meagre and badly funded. We instead develop data-crunching programs with prettier, more intuitive interfaces that are, nevertheless, the same stuff we've been doing forever...



5 comments:

Cobalt said...

"cannot be limited to words, but must also extend to the concepts those words communicate"

I agree with this, but the problem is: how can we "express" concepts? Expressing them in words seems like the natural way, but then we are back to square one.

Finding a way to express the concepts mathematically will be challenging, but that seems like the way to go.
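For instance (a toy example, nothing more): give each concept a vector of graded, made-up features and compare them with cosine similarity:

// Concepts as made-up feature vectors: {skin-related, unpleasant, physical, emotional}
public class ConceptVectors {
    static double[] itching = {1.0, 0.6, 1.0, 0.1};
    static double[] rash    = {1.0, 0.7, 1.0, 0.1};
    static double[] anger   = {0.0, 0.8, 0.2, 1.0};

    // Cosine similarity: 1.0 means the same direction, 0.0 means unrelated.
    static double similarity(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        System.out.printf("itching vs rash:  %.2f%n", similarity(itching, rash));
        System.out.printf("itching vs anger: %.2f%n", similarity(itching, anger));
    }
}

Choosing the features is, of course, the whole problem all over again.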

Mark said...

I actually think it's only a question of granularity. Humans understand concepts by relating them to other concepts for comparison. Gathering experience is only a matter of adding data from perception, thus adding to our knowledge of the world.

AI will thus be true when a system can be created that is able to gather its own experience and store it as comprehensible sensory data (though not necessarily comprehensible to us).

There have been several attempts at expressing concepts mathematically (hence the subjects of Natural Language, Machine Vision, ...). I personally think that's not the way to go. We are trying it at much too high a level, when we should start as we humans do: as babies, with just pure instinct and a couple of "rules".

Thank you for your response, Cobalt. Sorry it took me so long to reply.

Cobalt said...

I think many of the concepts you've been describing are hard, if not impossible, to code with current programming capabilities. The problem is that we continue to treat programs like mindless robots, telling them WHAT to do:

import java.util.Scanner;

// The usual imperative style: every step is spelled out for the machine.
Scanner in = new Scanner(System.in);
int i = in.nextInt();        // read the next integer
System.out.println(i);
if (i > 10) {
    method1();               // placeholder actions
} else {
    method2();
}

If we could stop programming instructions for what to do and how to do it, and start programming conceptually (the why), that would be a step forward for learning and decision-making programs such as AIs. I'm not sure there is a language that supports that kind of functionality though...
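The closest sketch I can make of what I mean (the "engine" here is imaginary, just forward chaining over declared rules): we state facts, rules and a goal, and let the program work out the steps itself:

import java.util.*;

// A toy forward-chaining engine: declare rules and a goal, not a sequence of steps.
public class WhyNotWhat {
    static class Rule {
        String produces; String[] needs;
        Rule(String produces, String... needs) { this.produces = produces; this.needs = needs; }
    }

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(Arrays.asList("water", "flour", "electricity"));
        List<Rule> rules = Arrays.asList(
            new Rule("dough", "water", "flour"),
            new Rule("oven", "electricity"),
            new Rule("bread", "dough", "oven"));
        String goal = "bread";

        // Keep applying any rule whose needs are met until the goal appears
        // or nothing changes; the order of the steps is the engine's business.
        boolean changed = true;
        while (!facts.contains(goal) && changed) {
            changed = false;
            for (Rule r : rules)
                if (!facts.contains(r.produces) && facts.containsAll(Arrays.asList(r.needs))) {
                    facts.add(r.produces);
                    System.out.println("derived: " + r.produces);
                    changed = true;
                }
        }
        System.out.println(facts.contains(goal) ? "goal reached" : "stuck");
    }
}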

Mark said...

Too true. They say that kids who constantly ask "why" are the ones who show the greatest promise of intelligence.

But maybe the programming language is not actually the problem; after all, we could always invent a new language. The thing to ask is how we would want that language to work. It may be that a new paradigm in program processing is needed. We have rule-based, event-based, sequential... how would we need the new programming language to work?

Cobalt said...

I'm not too sure about the structure myself, but I'd think it'd be largely condition-based (more than just if...else... something more...). I've been planning on implementing something similar for one of my other projects, OSCEAN (Open Source Content Environment and Abstract Network), which is an open source programming environment where you can code the structure, rules etc. of a programming language and then program in it... It's in the works now.