bit-tech.net

The story of artificial intelligence


PingCrosby 19th March 2012, 09:46 Quote
My Miele washing machine has more intelligence than some people I know
greigaitken 19th March 2012, 09:55 Quote
I finished my degree in AI and CS many years ago. Almost always the main bottleneck in creating useful programs was coding all the rules, limitations and options.
It would be ideal to have a programming environment where this is already done and the user can drag and drop common rules like Lego.
E.g. to find the fastest route to the top of a building, you shouldn't have to code gravity limitations, walls etc. every time.
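For illustration, a minimal sketch of what such composable, ready-made rules might look like: a generic breadth-first pathfinder that accepts "rule" predicates (walls, gravity) the way the post imagines snapping Lego blocks together. The grid format, names and rule definitions are all hypothetical.

Code:
from collections import deque

def no_walls(grid):
    # Rule: a move is legal only if the target cell is not a wall ('#').
    return lambda cur, nxt: grid[nxt[1]][nxt[0]] != '#'

def gravity(grid):
    # Rule: moving up is only allowed from or onto a ladder cell ('H').
    return lambda cur, nxt: (nxt[1] >= cur[1]
                             or grid[cur[1]][cur[0]] == 'H'
                             or grid[nxt[1]][nxt[0]] == 'H')

def find_path(grid, start, goal, rules):
    # Breadth-first search over grid cells, filtered by the composed rules.
    width, height = len(grid[0]), len(grid)
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        cur, path = queue.popleft()
        if cur == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in seen or not all(rule(cur, nxt) for rule in rules):
                continue
            seen.add(nxt)
            queue.append((nxt, path + [nxt]))
    return None

# '.' = floor, '#' = wall, 'H' = ladder; rows listed top (roof) to bottom (ground).
building = ["....",
            "##H#",
            "..H.",
            "...."]
# Compose the ready-made rules like Lego and search from the ground floor to the roof.
print(find_path(building, start=(0, 3), goal=(3, 0),
                rules=[no_walls(building), gravity(building)]))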
fodder 19th March 2012, 10:48 Quote
@PingCrosby - Lol. So true.

@ greigaitken - But, doesn't that negate the meaning of AI? 'Intelligence' implies the ability to integrate current knowledge to solve problems and increase that knowledge. If you just have a 'list reader' then it's not really intelligent, just a large storage of data with an efficient recovery mechanism. (Sounds like Google).

Sentient beings learn by doing (mostly). I think it is this ability of our brains to be plastic that makes us intelligent. I.e. learning to walk is a constant alteration of neuronal connections until our reflexes and central nervous system balance the movements required with our weight and gravity. No one tells us the value of gravity, or the weight of the limb and its fulcrum.
BentAnat 19th March 2012, 11:29 Quote
@fodder:
Essentially, it's not all that fundamentally different.
We learn the use of our legs as well. Theoretically, given a few base rules, an AI is able to build HUGE repositories of "learned data", which it could use to come up with new solutions.
Processing that data is what computers struggle to do.
Just the basics of walking took years to get right, as there are so many factors to consider.
Bauul 19th March 2012, 11:54 Quote
I don't think we'll get close to mimicking true human intelligence in a computer whilst not really understanding how human intelligence works.

Compared to a computer, we're unbelievably strange and illogical. Many of our decisions are made unconsciously by a section of the brain we have limited control over. Our memories fluctuate and change, and our opinions endlessly shift based on stimulus that we receive in an enormously varied number of ways.

The fact our brains are biological, that they are built of cells that live, die, change and change again means our intelligence is in a constant state of flux. Add to that endless chemical influences from internal and external sources, and the result is so endlessly illogical and random it'd be impossible for a non-biological entity to ever replicate it truly.

I'm not saying you won't ever get intelligent machines, they just won't have the same intelligence as humans.
BLC 19th March 2012, 11:59 Quote
A while ago, the Skeptics' Guide to the Universe podcast did an interview with Michael Vassar, the president of the Singularity Institute for Artificial Intelligence (linky: http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=218). It was pretty interesting stuff.

Interesting read; it's not something that I would have expected to find on bit-tech... Professor Sharkey may be right, however, at least for the time being. It's important to remember that no matter how much computing power you have or whatever algorithms you program, computers are fundamentally just big adding machines, albeit machines that can add up at inhuman speed. We really don't know how the human brain functions on such a fundamental level; until we know that, it may well be impossible to design a machine that can truly replicate emergent and "intelligent" behaviour. We've come a long way, but we're nowhere near that goal yet.

Incidentally, on the subject of Alan Turing, you should all go and visit Bletchley Park at some point. My girlfriend and I visited it around two weeks ago, and it was fascinating; if you do go, make sure to take one of the guided tours - our tour guide worked there during the war and knows so much about the entire site and all the work that went on there. There's quite a bit about Alan Turing there (including Gordon Brown's official apology letter), as well as the (fully working) rebuilt bombe, the Colossus and a huge collection of Enigma machines...

Plus you can also visit the National Museum of Computing and marvel at the four-foot diameter hard drive platters from the '60s/'70s which store only 4MB per side; or get all nostalgic about all those old computers that you grew up with, now consigned to museum exhibits.
RedFlames 19th March 2012, 12:04 Quote
completely off topic and train of thought de-railing but...

anyone know where i can get a bigger/full-sized version of this:

http://images.bit-tech.net/content_images/2012/03/aisearch/honda-asimo-614x250.jpg


... carry on

[i'll read this and comment appropriately when i'm a little less dead...]
the_kille4 19th March 2012, 14:17 Quote
Quote:
Originally Posted by RedFlames
completely off topic and train of thought de-railing but...

anyone know where i can get a bigger/full-sized version of this:

http://images.bit-tech.net/content_images/2012/03/aisearch/honda-asimo-614x250.jpg

would this one do?
http://actuallyidontknowanythingaboutcars.files.wordpress.com/2011/11/291082_10150399764264169_32074004168_8065767_897175744_o.jpg
BentAnat 19th March 2012, 14:46 Quote
Quote:
Originally Posted by Bauul
I don't think we'll get close to mimicking true human intelligence in a computer whilst not really understanding how human intelligence works.

Compared to a computer, we're unbelievably strange and illogical. Many of our decisions are made unconsciously by a section of the brain we have limited control over. Our memories fluctuate and change, and our opinions endlessly shift based on stimulus that we receive in an enormously varied number of ways.

The fact our brains are biological, that they are built of cells that live, die, change and change again means our intelligence is in a constant state of flux. Add to that endless chemical influences from internal and external sources, and the result is so endlessly illogical and random it'd be impossible for a non-biological entity to ever replicate it truly.

I'm not saying you won't ever get intelligent machines, they just won't have the same intelligence as humans.

Agree 100% with this.
There are a few things that are IMHO not simulatable: Emotion and Intuition to name but two.
Computers can pretty much by definition make "cold" decisions quicker and more logically than us (humans), but they lack a human factor that is so very important. Just the speed at which a human brain can draw a conclusion about something based on nothing but what is described as "gut feeling", without stepping through all the logical paths, is VERY hard, if not impossible, to simulate with typical logic-based decisions.
In pure rationality, though, a computer (especially a self-taught system) would outdo us... /2c
XXAOSICXX 19th March 2012, 18:53 Quote
Ahem.. "Most recently, the construction of the Large Hadron Collider hastened rumours that an artificially created black hole could destroy they entire planet"

I think you mean THE entire planet ;)
edzieba 19th March 2012, 19:46 Quote
Quote:
But in 2005, fiction became a reality when Japanese scientists created Asimo (an acronym which stands for Advanced Step in Innovative MObility), a plastic-shelled humanoid standing some four feet tall and capable of recognising objects, faces, hand gestures and speech.
Unless you count "has legs" as a defining characteristic of AI, this happened in the late '90s with Kismet, or even earlier depending on your definition of AI. Which is one of the biggest problems in AI research: the definition of AI, and its ever-changing nature.

In reality, we have LOTS of AI. We use it all the time. Let me introduce the concept of 'hard AI' and 'soft AI'.

Hard AI is an artificial consciousness. This is immediately made a woolly definition by the choice between an artificial human consciousness (i.e. the only sort of consciousness we have encountered) or any general sort of consciousness, the definition of which has thus far been hard to pin down.

Soft AI is a learning system. Oh, wait, we have those SLNN things, and they're certainly not really AI so that can't be it.
Soft AI is a system that can learn and produce emergent behaviour that is not programmed in or trained. Oh, wait, the Seven Dwarves and other swarm robots can do that, and they're certainly not really AI, so that can't be it.
Soft AI is a system that can recognise visual stimuli and classify them as known objects. Oh, wait, Google Goggles can do that, and that's certainly not really AI so that can't be it.
Soft AI is a system that can recognise and respond to natural spoken language. Oh, wait, Siri can do that, and that's certainly not really AI so that can't be it.

And so on. The goalposts of 'soft AI' move continuously as the next frontier is reached, achieved, and becomes mundane.


And once again, I must take issue with Sharkey's odd view on machine emotions and machine empathy (specifically, the lack thereof) and of the non-computational nature of the mind.
I'll address the latter first: If the mind is not computational (i.e. cannot be simulated) this implicitly requires you to accept Cartesian Dualism; that there must be an intangible something, a 'soul' or the like, that makes up the mind. This is silly.
On machine emotions: we have emotional machines. A machine can be 'afraid' or 'angry' or similar. We encounter these often: game AI. These aggregate states are machine emotions (e.g. when 'scared' an AI will attempt to hide from a player's sight, where otherwise it might wander looking for 'food', unless hiding would put it in danger, in which case it would overcome its 'fear' enough to move into the player's sight and flee, before hiding again).
But what about empathy? We can make machines that recognise facial expressions (you can even download OpenCV and play with it yourself). We can make AIs that 'feel'. Input the detected emotion as an influence on the machine's aggregate state (its emotions) and voila! You have an empathic machine.
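For illustration, a minimal sketch of that aggregate-state idea: a game agent whose 'fear' is a value updated from perceived threat and from a detected player emotion (as a facial-expression classifier might supply), and whose behaviour is chosen from that state. The class, thresholds and numbers are illustrative assumptions, not any particular game engine's API.

Code:
from dataclasses import dataclass

@dataclass
class Agent:
    fear: float = 0.0     # 0 = calm, 1 = terrified
    hunger: float = 0.5   # drives wandering for 'food' when fear is low

    def perceive(self, threat_level: float, player_emotion: str) -> None:
        # Update the aggregate emotional state from what the agent senses.
        self.fear = min(1.0, self.fear * 0.9 + threat_level)
        # 'Empathy': a detected player emotion nudges the agent's own state.
        if player_emotion == "angry":
            self.fear = min(1.0, self.fear + 0.2)
        elif player_emotion == "calm":
            self.fear = max(0.0, self.fear - 0.1)

    def decide(self, hiding_spot_is_safe: bool) -> str:
        # Pick a behaviour from the current emotional state.
        if self.fear > 0.6:
            # Overcome 'fear' enough to flee if the hiding spot is compromised.
            return "hide" if hiding_spot_is_safe else "flee past player, then hide"
        return "wander looking for food" if self.hunger > 0.3 else "idle"

npc = Agent()
npc.perceive(threat_level=0.7, player_emotion="angry")
print(npc.fear, npc.decide(hiding_spot_is_safe=False))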


I leave you with a final thought on the 'sentient machine apocalypse': a conscious AI would have a creator, and thus a parent/parents. When we grow up and become independent of our parents, we do not go on a murderous rampage. Why would we? And if not, why would an AI? We would have taught it all it knows, so unless we teach it that Destroy All Humans is good, then it's unlikely to think that way.
thehippoz 19th March 2012, 20:03 Quote
oh man the old ai debate.. of course it could only be thought of by people who have never programmed anything.. programming and the hardware behind it are extensions of the programmer.. it cannot learn or do anything outside that box it's given

to put it bluntly- when trying to create an ai out of non-living parts.. you've already lost.. course people with their heads in the clouds will never accept this.. there must be a way for a machine to think and write its own code..

think of it like this.. when you write a program- and you're trying to simulate the thought process.. you're missing the one thing we have, that it can never have.. a soul if you will

sure you can have it process input and react to that input.. but to have it actually do something inspired outside of its programming isn't going to happen.. you could have it act spontaneously with a random number generator here and there but it isn't thinking for itself- true or false, on and off, if then- that's what you're dealing with

oh well I'll leave this here.. think you can see a reflection of the guy who built this thing in the visor

Image Removed

How many times, TheHippoz? You know that stuff like that isn't OK, so why do you do it?
Bakes 20th March 2012, 01:26 Quote
The fact Noel Sharkey was quoted makes this article awesome.

Anyone else remember Robot Wars?

Quote:
Originally Posted by thehippoz
oh man the old ai debate.. of course it could only be thought of by people who have never programmed anything.. programming and the hardware behind it are extensions of the programmer.. it cannot learn or do anything outside that box it's given

to put it bluntly- when trying to create an ai out of non-living parts.. you've already lost.. course people with their heads in the clouds will never accept this.. there must be a way for a machine to think and write its own code..

think of it like this.. when you write a program- and you're trying to simulate the thought process.. you're missing the one thing we have, that it can never have.. a soul if you will

sure you can have it process input and react to that input.. but to have it actually do something inspired outside of its programming isn't going to happen.. you could have it act spontaneously with a random number generator here and there but it isn't thinking for itself- true or false, on and off, if then- that's what you're dealing with

oh well I'll leave this here.. think you can see a reflection of the guy who built this thing in the visor

image removed - specofdust

Just because something has been created, doesn't mean it can't be intelligent. Computer programs can write their own code, can improve according to what works and what doesn't, if you will. It would seem pretty silly to base your argument around 'BUT ROBOTS DON'T HAVE SOULS'.
Krazeh 20th March 2012, 01:43 Quote
Quote:
Originally Posted by Bakes
Just because something has been created, doesn't mean it can't be intelligent. Computer programs can write their own code, can improve according to what works and what doesn't, if you will. It would seem pretty silly to base your argument around 'BUT ROBOTS DON'T HAVE SOULS'.

Having seen thehippoz's input into topics of a similar nature in SD I think it can safely be said that he doesn't really have much, if any, of a clue when it comes to these sorts of things. You're best just letting him have his say and moving on without acknowledging it tbh.
thehippoz 20th March 2012, 03:43 Quote
bah had this conversation before a long time back.. I came to the conclusion everyone in the field has finally come to- only this was years before

http://forums.bit-tech.net/showpost.php?p=2597022&postcount=20

just because you read it in a science fiction novel or watch movies doesn't make it true.. it would be cool to tell you the truth- but really not much of a chance imo.. now mixing the two is interesting but considering we only use a small portion of our brain as it is.. probably better time spent figuring out how to access all of it instead of going that route..

you guys can believe otherwise.. :p we need people like that anyway- it's like the chupacabras keeps your brain turning when you're out in the field late at night.. then you see it get in the car with some dood
edzieba 20th March 2012, 06:02 Quote
Quote:
Originally Posted by thehippoz
oh man the old ai debate.. of course it could only be thought of by people who have never programmed anything.. programming and the hardware behind it are extensions of the programmer.. it cannot learn or do anything outside that box it's given

to put it bluntly- when trying to create an ai out of non-living parts.. you've already lost.. course people with their heads in the clouds will never accept this.. there must be a way for a machine to think and write its own code..
To put it bluntly: you are completely wrong. Not only is it possible for programs to write their own code (polymorphic viruses and other malware demonstrate this), but code certainly can 'learn or do anything outside that box it's given'. Take Boids, for example: you program an agent with three simple rules (avoid crowding, align with neighbours, aim for the neighbours' average position). These are pretty simple behaviours, but the behaviour that emerges is complex: flocking.
That's not to mention things like SLNNs and learning classifiers. These do bugger all when programmed; you have to train them with input data to get them to classify correctly. Different training data will result in different outputs from the same code.
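For illustration, a minimal sketch of the three Boids rules (separation, alignment, cohesion); nothing in the code mentions flocking, yet flocking is what emerges when the update is applied repeatedly. The radii and weights are arbitrary choices.

Code:
import random

def step(boids, radius=6.0, sep_radius=1.5, sep_w=0.05, ali_w=0.05, coh_w=0.01):
    # One update of every boid; each boid is a tuple (x, y, vx, vy).
    new = []
    for x, y, vx, vy in boids:
        near = [b for b in boids
                if b[:2] != (x, y) and (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2]
        if near:
            n = len(near)
            cx, cy = sum(b[0] for b in near) / n, sum(b[1] for b in near) / n
            avx, avy = sum(b[2] for b in near) / n, sum(b[3] for b in near) / n
            # Cohesion: steer toward the neighbours' average position.
            vx, vy = vx + coh_w * (cx - x), vy + coh_w * (cy - y)
            # Alignment: steer toward the neighbours' average velocity.
            vx, vy = vx + ali_w * (avx - vx), vy + ali_w * (avy - vy)
            # Separation: steer away from any neighbour that is too close.
            for bx, by, _, _ in near:
                if (bx - x) ** 2 + (by - y) ** 2 < sep_radius ** 2:
                    vx, vy = vx + sep_w * (x - bx), vy + sep_w * (y - by)
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(0)
flock = [(random.uniform(0, 20), random.uniform(0, 20),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(200):
    flock = step(flock)
print(flock[0])  # after enough steps, nearby boids end up with similar headings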
theevilelephant 20th March 2012, 11:46 Quote
Do we have some all-singing, all-dancing AI tech that is equivalent to a person?

Nope.

Are we close to that?

Nope.

Do we have some very impressive techniques that could be perceived as intelligent?

Yup, plenty.

One of the reasons AI is so complex is that we don't have a definition of what we mean by intelligence, which makes it an even harder goal to achieve. Certainly we have things like the Turing test, but that is by no means a definitive test of intelligence in a computer system.
fodder 20th March 2012, 13:47 Quote
Just what I was about to write on, having cogitated over the reply to my last post.

Maybe what we are trying to think of as AI is self-awareness. Once something is self-aware, it becomes more self-guided in its decisions about the world it inhabits. You could program conditions into an AI to cause itself harm, but if it was self-aware, would it then follow that programming or decide to follow another string of code instead? (I am not a programmer, so that's my take on it).

In terms of the '1s and 0s' being all a computer can do, we are the same. A neurone fires or it doesn't. It may take several other neurones firing at the same time at the synapse to get that neurone to fire, but it is still 'on' or 'off'. Put inhibitory neurones into the equation and you start to get a system that looks analogue despite being 1s and 0s at the operating level.
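For illustration, a minimal sketch of that point: a single on/off unit that combines excitatory (positive) and inhibitory (negative) inputs against a firing threshold. The weights and threshold are made-up values.

Code:
def neuron(inputs, weights, threshold):
    # Fires (returns 1) only if the weighted sum of 0/1 inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three excitatory synapses (+1) and one inhibitory synapse (-2):
weights, threshold = [1, 1, 1, -2], 2

print(neuron([1, 1, 0, 0], weights, threshold))  # 1: two excitatory inputs fire
print(neuron([1, 1, 0, 1], weights, threshold))  # 0: the inhibitory input vetoes it
print(neuron([1, 1, 1, 1], weights, threshold))  # 0: still below threshold (3 - 2 = 1)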
Tribble 20th March 2012, 16:00 Quote
Quote:
Originally Posted by PingCrosby
My Miele washing machine has more intelligence than some people I know


Ah yes, the Miele washing machine is a bit of engineering genius; well, mine is :D;)

Quote:
Originally Posted by thehippoz

but considering we only use a small portion of our brain as it is.. probably better time spent figuring out how to access all of it instead of going that route..

http://www.neatorama.com/2008/09/05/10-most-fascinating-savants-in-the-world/
S1W1 20th March 2012, 17:48 Quote
Very interesting article BT.

I'm shocked that you managed to write the whole thing without mentioning Skynet once :D
Peter Kinnon 20th March 2012, 23:49 Quote
@fodder

" Maybe, what we are trying to think of as AI, is self awareness. Once something is self aware, it becomes more self guided in it's decisions about the world it inhabits."

You have expressed this perfectly! This is the exact nature of consciousness, self awareness that we find in ourselves and, to a far lesser degree in other species. In fact, it is an evolutionary necessity.
Simply the navigational facility which enables an organism to interact optimally with its environment.
In, say, a bacterium it is minuscule; in a worm, tiny. In ourselves, because of our incomparable level of interaction with the "external" world, it is humongous.

So, even today, computer systems that have sensors feeding back information about the external world and about themselves can be said to have some very small degree of self-awareness, though probably rather less than a bacterium. At the bottom end this could even just about include such a device as a thermostat.

However, with respect to AI in general, it is, in my view, not likely to arise to any significant degree from any computer lab but, quite imminently, from the process of self-assembly that can already be seen as a work in progress in what we at present call the internet.

Consider this:

There are at present an estimated 2 Billion internet users.
There are an estimated 13 Billion neurons in the human brain.

On this basis of approximation, the internet is within an order of magnitude of the brain.

That is a simplification, of course. For example:

Not all users have their own computer. So perhaps we could reduce that, say, tenfold.
The number of switching units (transistors, if you wish) contained in all the computers connected to the internet, which are more analogous to individual neurons, is many orders of magnitude greater than 2 billion.

Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but can adopt multiple states.

Without even crunching the numbers, we see that we must take seriously the possibility that the internet may well be comparable to a human brain in processing power. And, of course, the degree of interconnection and cross-linking is also growing rapidly.
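Crunching the numbers anyway, purely as an illustration of the post's own back-of-envelope figures (the per-machine transistor count below is an assumed round number, not a measurement):

Code:
import math

internet_users = 2e9      # the post's estimate of internet users
brain_neurons = 13e9      # the post's figure for neurons in the human brain

ratio = brain_neurons / internet_users
print(f"brain/internet user ratio: {ratio:.1f}x, "
      f"about {math.log10(ratio):.1f} orders of magnitude")   # 6.5x, ~0.8

# Reduce users tenfold for shared machines, then credit each remaining machine
# with an assumed billion transistors (a round number, not a measurement):
machines = internet_users / 10
transistors = machines * 1e9
print(f"transistors vs neurons: {transistors / brain_neurons:.1e}x")  # ~1.5e7x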

From a quite different evolutionary perspective we can also see that there is a very good case to be made for this entity to become a new, and predominant, phase of the on-going evolutionary "life" process that is traceable back to the formation of the chemical elements in stars and supernovae.

This broad evolutionary model is outlined, very informally, in "The Goldilocks Effect: What Has Serendipity Ever Done For Us?", a free download in e-book formats from the "Unusual Perspectives" website.
djDEATH 21st March 2012, 14:21 Quote
Well, this is a fascinating story, and one that hits me quite close to the heart.

I am currently living in rural Africa, running an internet cafe and teaching centre, and coming to terms with developing attitudes towards technology, and how things slip and slide in and out of public acceptance. In the UK (where I'm from) we have grown up and become accustomed to having "machines" that help us in many ways, and although this isn't AI, nowadays I can tap a few keys on my phone and arrive home later to find that the new series of a TV show that aired in America just a few hours before has been ripped, uploaded, then downloaded, unzipped and placed onto my media centre, and all I have to do is sit in front of the screen and watch it. Just ten years ago this wasn't a "normal" or even feasible endeavour, even for geeks.

Our world is a fabricated one. Even those of us who live "rurally" in Europe are only a few kilometres at most from a monstrosity of engineering that we call a city or town. This is an entirely fabricated environment, largely automated already, and the machine that drives it is already intelligently making decisions for us all the time, giving us more time to play, waste money and forget what we're really here for.

The pursuit of happiness drives our wanton need for entertainment and consumerism, yet our own happiness is directly affected by those desires; forgetting where the money comes from, accessibility to technology has made us ever more lazy and incapable of realising our own potential.

Put yourselves in my shoes: I live in an area where people have not grown up with landlines or universally available TV and radio, and have suddenly been given mobile phones. Whereas we had decades to build a culture of telephone use and practice, your average African has a cheap Nokia and can now magically speak to anyone, wherever they are. This changes their attitude towards the technology, and gives them an opportunity that we have taken for granted and missed completely. The mobile phone to my friends is their main point of contact, but also their email account, their friend list, their Facebook access and, more recently, their wallet and credit card, all rolled into one. Most don't have a computer at home or at work (if they are lucky enough to have a job at all). We as Westerners have shrugged off the need for such massive integration because we are "scared" of it, or the powers that be have decided that they won't make enough money out of us to risk implementing its adoption. Your average iPhone user has no idea how powerful their "toy" is, yet I know people who get far more done with a basic internet-capable 2G handset than your average office type who just wants the newest thing in town, buys an Android or iPhone or BlackBerry or iPad, and believes they are living in the future.

We may not have robots yet, but the technology has already taken over our lives, and even when there is little water or sanitation, the farthest reaches of the planet are no longer far reaches, but a single cell-tower away from everything else, and when the button gets pushed, it will affect everyone on the planet equally and indiscriminately.

AI is something that is fairy tale, that is Hollywood, that is fictional, but within that vision of a future ridden with robots lies a truth we are already heading towards: technology drives us, not the other way around. Perhaps the machine we should be more scared of is our own imagination; or, for those who believe that arguing over the 50p tax cut is going to save the planet, it will ultimately become our own demise as normal people all over the world realise that the power resides within them, and an educated man with an internet connection should be feared far more than the politicians or warmongers of the past.

The game of the future is ours, not the machines', but what I will predict is that it won't be London or Washington holding the cards....
djDEATH 21st March 2012, 14:26 Quote
Quote:
Originally Posted by S1W1
Very interesting article BT.

I'm shocked that you managed to write the whole thing without mentioning Skynet once :D

or Morpheus and Trinity...
Nexxo 22nd March 2012, 07:56 Quote
Quote:
Originally Posted by thehippoz
oh man the old ai debate.. of course it could only be thought of by people who have never programmed anything.. programming and the hardware behind it are extensions of the programmer.. it cannot learn or do anything outside that box it's given

to put it bluntly- when trying to create an ai out of non-living parts.. you've already lost.. course people with their heads in the clouds will never accept this.. there must be a way for a machine to think and write its own code..

think of it like this.. when you write a program- and you're trying to simulate the thought process.. you're missing the one thing we have, that it can never have.. a soul if you will

sure you can have it process input and react to that input.. but to have it actually do something inspired outside of its programming isn't going to happen.. you could have it act spontaneously with a random number generator here and there but it isn't thinking for itself- true or false, on and off, if then- that's what you're dealing with

oh well I'll leave this here.. think you can see a reflection of the guy who built this thing in the visor

Image Removed

How many times, TheHippoz? You know that stuff like that isn't OK, so why do you do it?

Compelling arguments, from someone who in past discussion has shown:
- He does not understand the difficulties in defining "intelligence";
- He does not understand brain function (not that old chestnut of "we only use a small part of our brain" again);
- He does not understand how computers and code work;
- He does not understand evolution, chaos theory, nonlinear dynamics, emergence etc.;
- He bases his entire argument on the belief that intelligence is the result of a God-given soul and therefore only humans are intelligent. Oh, and you can't clone humans for that reason either.

To emphasise the cogency of his argument he then finishes with an image or video involving monkeys, gay sex and/or penises.

OK, thehippoz, hush now; grown ups are talking. And the next time you post any inappropriate image you'll enjoy a week's suspension, incrementing each time you repeat the offence.
ratibawa 23rd March 2012, 17:19 Quote
That's a very nice post and a very nice discussion related to "The story of artificial intelligence". I am looking forward to your next post.