View Full Version : Computer Programs going amok
marcuslee
20-11-2009, 23:04
Several months back (that's how behind I am) Kent referred to computer programs programming themselves, like in Terminator.
Here's the thread: http://community.thinbasic.com/index.php?topic=2596.0
My question for discussion is: what is the likelihood that a computer program could be designed to make significant changes to itself, to learn better ways to program itself?
Mark
P.S. - Our son, Scott, is now 6 1/2 months old.
Mark, it has been 6.5 months already, wow!
Well, it is already happening. I watched a show about robots on Scientific American Frontiers. They are learning by watching, as children do. In fact, the researchers used real moms with their kids and videotaped their interactions while they taught simple things to their babies. Then they played the videos to the robot, and it learned from watching.
Petr told me that only viewers in the US can watch these videos :(
http://www.hulu.com/watch/23328/scientific-american-frontiers-robot-pals?c=News-and-Information#s-p1-so-i0
marcuslee
21-11-2009, 05:04
I watched the first part of the episode you posted from Hulu. Then I had to get off. Anyway, it was pretty cool how the robots had been taught to learn new things. Once again, science goes to nature to learn something new.
The question still remains: what is the likelihood that a program would decide on its own to turn evil? Whether that is possible in the future, I have no idea. But right now, I think that even the smartest program couldn't do bad unless it was given the ability to learn bad. Plus, if they did go to the dark side, I don't think it would take John Connor and the Resistance to take them down. At least not in my lifetime.
What do y'all think?
Mark
Charles Pegge
21-11-2009, 17:58
Spike Milligan's Daleks:
http://www.youtube.com/watch?v=C0n88tZQc4Q
Petr Schreiber
21-11-2009, 18:09
:lol:
"A: Have you done your homework?
B: I have destroyed it"
This approach seems good, I will try it at the university.
Well, expect the unexpected from the future. Robotics is moving forward fast.
I do not think the machines would turn evil (not unless firmware 2.0), but their clumsiness could pose some danger.
Michael Hartlef
21-11-2009, 18:35
I can imagine that there will be self-modifying and self-reprogramming technology in the future, if it is not already available.
Being evil is something that depends on the situation. For example, shooting a person can be appropriate in one situation (stopping an intruder) and in another something you consider evil (shooting someone who wants to shut the robot down). For this, a computer would need the ability to have feelings and to react to the environment not only through programmed logic. It has to have the ability to react irrationally, just like humans. It has to have needs which it wants to fulfill, no matter how. If scientists are able to develop something like this, then yes, it could happen.
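As a toy illustration of what "self-reprogramming" can mean today (my own sketch, not anything from the show or this thread; all names here are made up): a loop that rewrites the source code of a small function, re-executes it, and keeps a rewrite only when it scores better. This is the seed idea behind genetic programming, and it shows how far such programs are from "deciding" anything on their own, since the goal is still fixed by the human who wrote the scoring function.

```python
import random

# The "program" is a source-code template the loop keeps rewriting.
# Goal (chosen by the human): approximate f(x) = 2*x + 1.
source = "def f(x): return {a} * x + {b}"
params = {"a": 0.0, "b": 0.0}

def compile_program(p):
    # Turn the current source text into a callable function.
    namespace = {}
    exec(source.format(**p), namespace)
    return namespace["f"]

def score(fn):
    # Lower is better: squared error against the target on a few points.
    return sum((fn(x) - (2 * x + 1)) ** 2 for x in range(-5, 6))

random.seed(0)
best = score(compile_program(params))
for _ in range(5000):
    # Propose a small random rewrite of the program's constants.
    candidate = {k: v + random.uniform(-0.1, 0.1) for k, v in params.items()}
    s = score(compile_program(candidate))
    if s < best:  # keep the rewrite only if it actually improved
        params, best = candidate, s

print(round(params["a"], 1), round(params["b"], 1))  # roughly 2 and 1
```

The program does end up "improving its own code", but only along the axis the scoring function measures; it has no notion of good or evil outside that.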
Charles Pegge
21-11-2009, 20:19
P.W. Singer: Military robots and the future of war
http://www.youtube.com/watch?v=M1pr683SYFk
(16 mins illustrated lecture)
MouseTrap
21-11-2009, 21:48
This reminds me of 'The Singularity', first postulated in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The danger in the idea is that a super-intelligent computer would be like a double black box: humans would be powerless to understand how it functions and also unable to trust how the machine views humanity.
Last year I attended the 'Singularity Summit' in California. I left with the impression that the barriers to the singularity (or even human-level intelligence in machines) are just too massive.
It's a nice thought experiment and leads to some stimulating discussion, but the way modern computers operate and the state of modern A.I. tell me that its chance of happening is just below that of the zombie apocalypse.
marcuslee
21-11-2009, 22:37
It's a nice thought experiment and leads to some stimulating discussion, but the way modern computers operate and the state of modern A.I. tell me that its chance of happening is just below that of the zombie apocalypse.
Yeah, I'm not too worried about it. If it were ever to happen, it wouldn't be in our lifetime. If a machine revolution were to take place, machines would have to have the ability to replicate themselves. This might be possible even now, but only with human instruction and supervision.
For this, a computer would need the ability to have feelings and to react to the environment not only through programmed logic. It has to have the ability to react irrationally, just like humans. It has to have needs which it wants to fulfill, no matter how. If scientists are able to develop something like this, then yes, it could happen.
And since a human wouldn't likely program it to develop those feelings against humans, it would probably be a nasty side effect of some feature of the program meant for something else. But I think feelings go well beyond the passing of messages from one place to another. I think they are something that we can't teach a machine to have. And if we do somehow teach them that in the future, hopefully safeguards are put into place so machines don't become our enemies. Don't you love sci-fi?
Mark
Petr Schreiber
23-11-2009, 00:05
Charles,
thank you very much for the second presentation, the one about the war robots. I finally found time to watch it from start to finish. It was really worth watching, and very serious questions and points were raised. Thanks.
Charles Pegge
24-11-2009, 14:00
The Human Factor:
http://www.youtube.com/watch?v=eSsQI_7vc8w&feature=related
CGI Animation / Reconstruction of final episode: The Evil of the Daleks ~1967