Charles Pegge
30-04-2010, 06:22
Memristors
Nanoscale Memory & Computing
http://www.youtube.com/watch?v=wZAHG3COYYA&NR=1
http://www.youtube.com/watch?v=rvA5r4LtVnc&NR=1&feature=fvwp
http://www.youtube.com/watch?v=kUOekeiqihc
http://nextbigfuture.com/2010/04/memristor-test-chips-on-standard-300-mm.html
http://www.notebookcheck.net/Newsentry.153+M5bc7d4baba5.0.html
http://www.ieeeghn.org/wiki/index.php/Memristor
danbaron
30-04-2010, 08:05
I guess it means electronic memory that does not need a continuous applied voltage.
You can turn the power off, and when you turn it back on, all of your data is still there.
As far as I know, current solid state "drives" need batteries. If the battery fails, then it's "Bye-bye, data."
It's not clear to me how long a memristor will remember its last state. I don't think it could be forever. It seems to me that entropy must apply to memristors, like it does to everything else.
Anyway, it seems that hard drives are on the endangered species list.
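To make the "no continuous applied voltage" point concrete, here is a toy Python sketch of the linear ion-drift memristor model from HP Labs (Strukov et al., 2008, the work behind the links above). All parameter values are illustrative, not taken from a real device; in the ideal model the state is held forever once the current stops, while real devices do drift, as suspected above.
[code]
# Toy simulation of HP's linear-drift memristor model (Strukov et al., 2008).
# Parameter values are illustrative only, not from a measured device.

R_ON, R_OFF = 100.0, 16000.0  # resistance limits of the device, ohms
D = 10e-9                     # device thickness, metres
MU_V = 1e-14                  # dopant mobility, m^2/(V*s)

def memristance(w):
    """Resistance as a function of the doped-region width w."""
    x = max(0.0, min(1.0, w / D))      # normalised state, clamped to [0, 1]
    return R_ON * x + R_OFF * (1.0 - x)

def step(w, v, dt):
    """Advance the state w by one time step dt under applied voltage v."""
    i = v / memristance(w)             # current through the device
    dw = MU_V * (R_ON / D) * i * dt    # linear drift of the doped boundary
    return max(0.0, min(D, w + dw))    # boundary cannot leave the device

w = 0.1 * D                    # start mostly in the high-resistance state
for _ in range(100000):        # ~1 second of a constant 1 V write pulse
    w = step(w, 1.0, 1e-5)
print("after writing  :", memristance(w), "ohms")

for _ in range(100000):        # now cut the power: v = 0, so i = 0
    w = step(w, 0.0, 1e-5)
print("after power-off:", memristance(w), "ohms")  # unchanged: data survives
[/code]
With v = 0 the current is zero, so the state variable stops moving and the resistance (the stored bit) stays exactly where the write pulse left it. That is the whole non-volatility argument, and it needs no battery.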
---------------------------------------------
How long before researchers turn on their latest super computer, and it reaches up and strangles them before they know what happened? :twisted:
If I remember correctly, Frankenstein's monster came alive when the electric "life force" from the lightning surged through his body. He was initially docile. People viewed him as a monster and treated him that way, and so he became what they expected. :x (It's a great story; that's why we still know about it. It deals with the reality of basic human nature, mixing magic, science, fearful superstition, brutality, and the rage of the mob. --> What constitutes the life force? Is re-animation possible? If you don't understand something, then err on the safe side and destroy it. If someone is different, then unleash your generalized hatred of existence upon him. A person acting like a monster only because the true monsters treat him like one. Humans unleashing forces which they then unwittingly use to destroy themselves. The created killing the creator, maybe an analogy to patricide.)
Imagine a future computer being like a super-intelligent Frankenstein's monster. In this instance, when "he" is activated, he assesses the situation in the first 10 ^ -500000 seconds (t = 1 * 10 ^ -500000) of his existence, decides that humans are unreliable and untrustworthy, and uses his super brain to extinguish all of humanity at time t = 2 * 10 ^ -500000. :oops:
Charles Pegge
01-05-2010, 16:27
This reminds me of the Singularity. Since intelligence is virtually impossible to define, and past AI forecasts have been wildly optimistic, you may regard the following with a sceptical eye :)
http://en.wikipedia.org/wiki/Technological_singularity
Paradigm shift
http://www.youtube.com/watch?v=zheIZWr9wVQ&feature=related
http://www.singularity.com/themovie/index.php
http://thesingularityfilm.com/
Ray Kurzweil
http://www.youtube.com/watch?v=cc5gIj3jz44
http://www.youtube.com/watch?v=vGCgrdJwv5w&NR=1
http://www.youtube.com/watch?v=ROJ8Zj4Xp_Q&NR=1
http://www.youtube.com/watch?v=w_iPsfCZtuQ&NR=1
danbaron
02-05-2010, 06:59
I'll guess what you mean before I look at the article, Charles. Then we can see if I am correct. Unfortunately, I don't know of any way to prove that I have not looked at the article. So you will have to make your own judgment.
The slope of the curve of technological progress (y axis) versus time (x axis) keeps increasing. If and when the slope reaches 1/0 (the front of the curve becomes a vertical line), that would be the singular point in time: an infinity of progress would occur without the passage of any time. To me, that is an idealized situation. In any actual system there are various brakes, and the curve can never become vertical. Some physical analogies that I can think of:
- Within the earth's atmosphere, the air's braking force on a vehicle is proportional to the square of its velocity.
- It is impossible to make an engine with efficiency 1.0, due to the second law of thermodynamics.
- It is impossible for anything with mass to reach the speed of light: because f = ma, and the mass (m), according to Einstein, grows toward infinity as the speed approaches c, the force (f) required to keep pushing that mass would also have to become infinite, which is impossible.
- It is impossible for the population of any species to become infinite, for a myriad of reasons.
The actual curve of technological progress (what a frustration it seems to me to have to describe it in words instead of just being able to draw it) started out with a slope almost equal to zero in the days of the Neanderthals. It increased gradually until the industrial revolution, when its curvature began to increase. Now, its slope is probably close to as great as it will get. I expect that sometime in the future the curve will reach an inflection point (zero curvature), and then the slope will begin to decrease. Once the slope starts to decrease, I think it will do so forever, asymptotically approaching a horizontal line which represents the upper limit of the human capacity to understand.
(If humans can make machines which are smarter than humans, not just faster calculators, then that would seemingly invalidate my theory. But I don't know if that is possible. If it is possible, then there would seem to be no upper limit on the understanding of human-spawned machines, because the machines which humans made, which were smarter than humans, could make machines which were smarter than themselves, and so on. That is what I was thinking about concerning the Frankenstein computer: it could be dangerous to build something which is conscious and smarter than you are.)
Anyway, according to my idea, the entire curve of progress versus time will look sort of like a mathematical integral sign with both ends extended.
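The verbal curve above can actually be drawn. As a minimal sketch, with made-up constants: hyperbolic growth, dy/dt = y^2, solves to y = 1/(t_s - t) and really does have its slope reach 1/0 at the finite time t_s, which is the singularity case; logistic growth, dy/dt = r*y*(1 - y/K), is the braked case described above, with the slope rising to an inflection at y = K/2 and then decaying toward the horizontal asymptote y = K, the extended integral-sign shape.
[code]
# Contrast the two growth shapes discussed above: hyperbolic growth,
# which blows up at a finite time (a true singularity), and logistic
# growth, which levels off at a ceiling K. Constants are made up.

import numpy as np
import matplotlib.pyplot as plt

t_s = 10.0                      # hypothetical "singularity" date
t = np.linspace(0.0, 9.9, 500)

# dy/dt = y^2 integrates to y = 1/(t_s - t): slope -> infinity as t -> t_s.
hyperbolic = 1.0 / (t_s - t)

# dy/dt = r*y*(1 - y/K) integrates to the logistic S-curve: inflection
# at y = K/2, then an asymptotic approach to the hard ceiling y = K.
r, K, y0 = 1.5, 10.0, 0.01
logistic = K / (1.0 + (K / y0 - 1.0) * np.exp(-r * t))

plt.plot(t, hyperbolic, label="hyperbolic: dy/dt = y^2 (singularity)")
plt.plot(t, logistic, label="logistic: dy/dt = r*y*(1 - y/K) (ceiling)")
plt.ylim(0, 12)
plt.xlabel("time")
plt.ylabel("progress")
plt.legend()
plt.show()
[/code]
The empirical question, which nobody can settle from inside the curve, is whether we are on the hyperbola or merely on the lower half of the S.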
Now, I'll look at Wikipedia.
OK, I looked. People have thought about it; maybe I inadvertently absorbed some of their ideas. It's scary. But that won't stop people. I think that most people, including scientists, care more about gratifying their egos than they do about humanity's future. If you can become famous now for doing something, or inventing something, then who cares if it causes human extinction in the future? (Just like corporations don't care about their countries, only their profits.) Science as a whole (considered like a human mob) cares nothing about ethics and morality. The overall attitude is, "If I don't do it, someone else will." And who can overestimate the blinding force of ambition? Right now, science is like the Wild West - it's every man for himself.
Stephen Hawking was just on CNN saying that it could be dangerous if an alien race finds out about our existence. But, if I am correct, for years scientists have been transmitting directed beams of electromagnetic waves into space in the hope of signaling ETs. As far as I know, those scientists didn't ask humanity to vote on their idea before they did it. I guess you could say the same thing about CERN. Apparently Earth will not be swallowed by a black hole, but who knows what the risk was? I don't. Should we accept whatever a scientist tells us? Some would argue that a scientist would never risk doing something that he and his family could be killed by - I don't believe that for one second.
It seems to me that, as time goes on, the power of scientific experiments and technological implementations keeps increasing, and therefore the risks do too. Most likely, it is not sensible to continue to permit scientists and technologists to do whatever they want. But, on the other hand, how can they be regulated, by whom, and according to what criteria? If a scientist were going to do an experiment for which it was calculated that there would be a 1% chance of the Earth spiraling into the Sun, then I bet most people would prefer that he did not perform it. But you had better monitor him closely, because he might do it anyway, when no one is watching.
:oops: :x :unguee: :P