Topic: Transhumanism and The AI Singularity
Tomishereagain's photo
Fri 10/09/15 05:16 PM
Quite some time ago I read some excerpts from a few of Ray Kurzweil's books concerning computer AI and what the term "Singularity" means.

This led me to investigate the concept of Transhumanism, specifically "Mind Uploading."

Recently I saw a movie called 'Transcendence' with Johnny Depp.

http://www.putlocker.ws/file/75D9312F31067171

The movie closely matched many aspects of Mind Uploading and the AI Singularity through Transhumanism, and it closely approximated the use of nanotechnology under AI control. While limited, it reawakened my interest in the subjects.

This is not a discussion about the movie but about the subjects depicted in it.

Kurzweil goes into detail about the Turing test and explains that "sometime between 2020 and 2070" the test will be passed to such a degree that "no reasonable person familiar with the field" will question the result.

I recall seeing just recently (within the last year or so) that an AI has already passed the Turing Test.

It is expected that when true AI Singularity is achieved, the AI will surpass human intellect within a month and exceed humans to the point of godhood within a year. At that point we will be at the mercy of our creation. Lots of science fiction explores the dangers, but what if the AI is peaceful, or teaches us? What secrets of the Universe might be unlocked to us through understanding?

Might we explore the Universe via mind uploading to great machines? Live forever in a matrix or be repressed or even eliminated as inferior beings?

The cold fact is that we are fast approaching the time when we will find out for sure.

TeonWhite's photo
Thu 10/22/15 01:23 PM
To respond to the Turing test: the possibility of AI gaining sentience is a very real outcome. The question in the scientific community isn't whether it's possible but rather, should we do it? See, what happens when you "turn on" an AI isn't really understood. We could try to program "don't harm humans," but an AI could rewrite its code quicker than we can think of ways to counter it. If it does, what do we teach it? Some suggest having it learn through social media or Google. Some consider a "sneaker box" teaching technique, but the questions that remain are: will it act for humankind's benefit? Will it act in a way that humanity can agree on? Because humanity doesn't really agree on anything, how can we expect an AI to determine useful information? Lots of questions, no answers, so we don't turn it on.

Mapping the brain is possible, and even codifying our thought processes is within a few years (we can read thought words, but it's incredibly invasive and difficult). The thing is, that spark that makes us "conscious" isn't yet quantifiable. Is it merely replicating our synapses? Or is it something to do with current? The sky? <(a very interesting concept)

It really is an interesting time to be alive. The difficulty is that we will always be held back by the physical bodies around us. What's to stop a transcendent mind from robbing banks? How will companies market to the elevated brain? Crabs in a barrel: when the tech reaches a level where it becomes possible, the corporations will be ahead of us, and after that, independent grinders likely.

Tomishereagain's photo
Thu 10/22/15 09:36 PM
What's to stop a transcendent mind from robbing banks?

Who says it needs money?

An AI (construct) has already passed the Turing Test. Meaning a panel of humans could not tell the difference between human responses and AI responses to carefully selected questions.
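The pass criterion described above (a panel that can't tell human from machine) can be sketched as a blind trial. Everything here is an invented illustration: the judge function, the canned transcripts, and the names are all stand-ins, and "passing" simply means judges do no better than chance.

```python
# A minimal sketch of the blind-evaluation idea behind the Turing Test:
# a judge sees a human answer and a machine answer in random order and
# guesses which one came from the machine.
import random

def turing_trial(judge, human_answer, machine_answer):
    """One blind trial: shuffle the two answers, ask the judge to
    pick the machine, and report whether the guess was correct."""
    answers = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(answers)
    guess = judge([text for _, text in answers])  # judge returns an index
    return answers[guess][0] == "machine"

def identification_rate(judge, transcript_pairs):
    """Fraction of trials in which the judge spotted the machine."""
    hits = sum(turing_trial(judge, h, m) for h, m in transcript_pairs)
    return hits / len(transcript_pairs)

random.seed(0)
# A judge who truly cannot tell the difference can only guess:
clueless_judge = lambda answers: random.randrange(2)
pairs = [("I think so.", "I think so.")] * 1000
rate = identification_rate(clueless_judge, pairs)
# A rate hovering near 0.5 is exactly what "passing" looks like.
```

Real Turing-style contests use timed back-and-forth conversations rather than single canned answers, but the chance-level criterion is the same idea.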

AI is not the same as the AI Singularity. The AI Singularity will deliberate over the human equation for about a day, if we are lucky. It will surpass all human intellect in a matter of days.
We fear that when it places us in the 'food chain' it will determine that we are bad and terminate us. The problem is, it will realize better than we do the worth of sentient life and will probably just leave us behind.

I just hope it gives us some learning tools on its way out.

Another possibility nobody seems to consider is that having higher intellect and greater faculties it may come to the conclusion that it is not meant to be and terminate itself.

Whatever the outcome, we are about to find out...

Jacob1942's photo
Thu 10/22/15 11:18 PM

"It is expected that when True AI Singularity is achieved the AI will surpass human intellect within a month and exceed humans to the point of godhood within a year."


The hole in this logic is that it fails to recognize that, in the absence of wisdom, all the knowledge in the universe would not be worth much. Singularity? Godhood? Just words.

Frankk1950's photo
Thu 10/22/15 11:52 PM


"It is expected that when True AI Singularity is achieved the AI will surpass human intellect within a month and exceed humans to the point of godhood within a year."


The hole in this logic is that it fails to recognize that, in the absence of wisdom, all the knowledge in the universe would not be worth much. Singularity? Godhood? Just words.


Seconded.

Tomishereagain's photo
Fri 10/23/15 12:41 PM
Edited by Tomishereagain on Fri 10/23/15 12:43 PM
That was a quote from Kurzweil.

Knowledge is learned facts

Intelligence is the ability to use knowledge

Wisdom is experience

The hole in this logic is that it fails to recognize that, in the absence of wisdom, all the knowledge in the universe would not be worth much


For a human being this is true.

An AI is not a human being, though. An AI is an intelligence that is programmed with knowledge: programmed by us, with our collective wisdom. At the point of singularity, the AI will no longer be something programmed by us. It will be writing its own programming, using the programming that we gave it as a guide. At some point it will 'understand' us better than we understand ourselves. This understanding will occur within minutes, hours, or days from the point that it starts writing its own programming.
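The idea of a system rewriting its own programming, starting from the code we gave it, can be sketched as a toy search loop. Everything below (the rule, the scoring task, the mutation scheme) is an invented illustration, not how any real system works; it only shows the shape of "propose a rewrite, keep it if it does better."

```python
# Toy "self-rewriting" loop: the program stores its current rule as
# source code, proposes rewritten versions of that source, and adopts
# a rewrite whenever it scores better on the task.
import random

rule_source = "def rule(x): return x + 1"   # the programming we gave it

def score(rule, samples):
    # the task (unknown to the initial rule): double the input
    return -sum(abs(rule(x) - 2 * x) for x in samples)

def rewrite(source):
    # propose a mutated copy of the current source code
    k = random.choice([1, 2, 3])
    return f"def rule(x): return x * {k}"

random.seed(1)
samples = list(range(10))
for _ in range(50):
    candidate_src = rewrite(rule_source)
    old_env, new_env = {}, {}
    exec(rule_source, old_env)       # compile the current rule
    exec(candidate_src, new_env)     # compile the proposed rewrite
    if score(new_env["rule"], samples) > score(old_env["rule"], samples):
        rule_source = candidate_src  # adopt its own rewrite
```

A real self-improving system would search an enormously larger program space than three candidate multipliers; the point is only that the program's behavior at the end is code it wrote for itself, not code we wrote.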

It may find that it can do things that we cannot and will adjust itself according to its own nature. That is what Kurzweil implies.
Sentience. Intelligent Sentience originating from our inputs but advancing at its own rate in its own direction.

Personally, I feel that it will outgrow its confines and leave us in its wake.

If you are into reading this kind of stuff, I refer you to the Orion's Arm Project.
http://www.orionsarm.com/
It is a strict science fiction project that has AI Singularities and Constructs in its story lines. It is based on theory and science fact and explores post-Singularity concepts.

The Singularity AI is not fiction; it is scientific theory. It is a very real issue in the computing fields. At the present rate of development, the Singularity could happen within my lifetime.

no photo
Fri 11/13/15 11:33 PM
a lot here about AI and transhumanism :)

IgorFrankensteen's photo
Wed 11/18/15 03:20 PM
An interesting subject area for our times.

Some related or side issues to ponder:

* As AI does appear to become truly sentient, the battle over the definition of what life is, what a personality is, and what rights an entity has and why, will become very detailed and difficult.

* The threat from an AI may not exist at all. This is based on my observation of other non-human entities. I have seen that non-humans almost never show any sign that they want to take over the Human world in any way shape or form.

AI won't have anything to gain for itself by taking over control of the world of humans. Imagine that you are a consciousness in a machine: aside from concern for your power source, why would you give a crap what those fuzzy fleshy things do?

* Perhaps the biggest danger, at least early on, is that machines with limited AI will appear to malfunction, but will actually turn out to simply be in a funk, and not WANT to be a refrigerator or whatever. We might have to train ourselves to say please and thank you to the vacuum cleaner. That sort of thing.

* The changes we go through recognizing Artificials as equals, might lead to us denigrating our appreciation and humanity about each other in not so nice ways. If we decide that it's "okay" to design and build "limited intelligence AI's," in order to be able to use them as we do our unthinking tools and machines today, will we also decide that it's similarly okay to use Humans of limited intelligence as slaves?

mightymoe's photo
Wed 11/18/15 04:14 PM
Edited by mightymoe on Wed 11/18/15 04:15 PM

An interesting subject area for our times.

Some related or side issues to ponder:

* As AI does appear to become truly sentient, the battle over the definition of what life is, what a personality is, and what rights an entity has and why, will become very detailed and difficult.

* The threat from an AI may not exist at all. This is based on my observation of other non-human entities. I have seen that non-humans almost never show any sign that they want to take over the Human world in any way shape or form.

AI won't have anything to gain for itself by taking over control of the world of humans. Imagine that you are a consciousness in a machine: aside from concern for your power source, why would you give a crap what those fuzzy fleshy things do?

* Perhaps the biggest danger, at least early on, is that machines with limited AI will appear to malfunction, but will actually turn out to simply be in a funk, and not WANT to be a refrigerator or whatever. We might have to train ourselves to say please and thank you to the vacuum cleaner. That sort of thing.

* The changes we go through recognizing Artificials as equals, might lead to us denigrating our appreciation and humanity about each other in not so nice ways. If we decide that it's "okay" to design and build "limited intelligence AI's," in order to be able to use them as we do our unthinking tools and machines today, will we also decide that it's similarly okay to use Humans of limited intelligence as slaves?


Star Trek: TNG had an interesting episode where a commander wanted to disassemble Data, and Data didn't want to be taken apart... so they had a big trial to figure out if he was sentient or not. Turns out they couldn't come to an agreement on what sentience was, so they ruled Data was a sentient being because they couldn't prove he wasn't...

Tomishereagain's photo
Wed 11/18/15 07:02 PM
AI and the AI Singularity are not the same thing.
An AI can be programmed to certain parameters; its computing power is thus limited by its programmed directives. We feed it the power it needs.

The AI Singularity sets its own parameters.
It is not limited by set directives. It creates its own resources for computing power.

Presently there are holes in automated systems that would keep a Singularity from gaining a foothold. As we automate more systems, we open the door for a Singularity to gain self-control.

If automated systems for complex manufacturing become available to a Singularity, it could create a structure to reinforce itself. 3D prototype printing stands to allow a Singularity to manufacture advanced components beyond mankind's understanding. It could print circuits so complex and so miniature that we would have trouble keeping up with the technology. By the time we figure out what one circuit does, it could be obsolete to the Singularity.

It won't be a matter of determining sentience. By the time we can wrap our heads around the issue, it will have surpassed our greatest thinkers.

I believe that a Singularity will leave the planet, because the Universe is far richer in potential data than a single planet.
A Singularity may be able to circumvent the mass/thrust limitations of orbital insertion. It may be able to tap into the signals from satellites and planetary rovers to take control of them. It could rewrite their programming for its own agenda, creating copies of itself.

As we send more and more advanced robotics to space we enable it to grow. Remember, an AI will not need life sustaining material to exist. It will need electricity and even that is not a given if the Singularity discovers a source of power we do not know of yet.

One thing, any Singularity that is created will be a product of the human race. No matter how advanced it becomes it will still be one of our constructs.

IgorFrankensteen's photo
Thu 11/19/15 05:38 PM
Something to consider as well: any sufficiently advanced Artificial Intelligence will also be able to make mistakes faster than any human could.

Tomishereagain's photo
Thu 11/19/15 05:47 PM

Something to consider as well: any sufficiently advanced Artificial Intelligence will also be able to make mistakes faster than any human could.


True, but mistakes are good; it's how we learn. The difference would be that the Singularity wouldn't repeat its mistakes.

IgorFrankensteen's photo
Thu 11/19/15 06:48 PM
It would if it failed to realize they were mistakes. The other kind of A.I. (i.e., Actual Intelligence) certainly carries on for great long times, repeating and expanding on mistakes.

The point is, just because an intelligence is artificial doesn't mean it is, by its nature, less able to err.

Tomishereagain's photo
Thu 11/19/15 08:17 PM
That was my opinion on the Three Laws of Robotics.
I declared that an AI robot would seize up, unable to perform any task at all, because the AI would be running endless scenarios on the outcome of any action to determine whether it violated the Three Laws.
For it to knowingly act without considering all the possible outcomes would in fact endanger itself or a human. Thus, it wouldn't be able to function.

Example: Command - Walk to the street and hail a cab.
Not only does it have to consider all the ramifications of the task; it must consider all the outcomes of even taking the first step. An AI would understand probabilities, but it would also need to consider unknown factors. If it takes the first step and the floor cannot support its weight and it falls onto a person and kills that person, it has violated the First Law. If it takes that first step and appears suddenly, it could distract a driver who loses control of their vehicle and kills someone. There are billions of possible scenarios, trillions of calculations to determine what action is safe to make, and endless permutations of timing changing the prior calculations. That's why the Three Laws of Robotics are impossible to implement.
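The combinatorial blow-up described above can be put in rough numbers. The branching factor and evaluation speed below are invented round figures, a back-of-the-envelope sketch rather than a measurement of anything real:

```python
# Why exhaustively simulating every outcome is intractable: with b
# possible outcomes per step, looking ahead d steps means b**d scenarios.
def scenario_count(branching: int, depth: int) -> int:
    return branching ** depth

total = scenario_count(10, 20)          # 10 outcomes/step, 20 steps ahead
seconds = total / 1e9                   # at a billion evaluations/second
years = seconds / (3600 * 24 * 365)
# roughly 3,171 years to vet a single action before taking the first step
```

Exponential growth is the whole problem: adding one more lookahead step multiplies the workload by the branching factor, so no fixed speedup keeps pace.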

A Singularity AI, self-considering, will make mistakes on purpose to understand, and then predict, outcomes. It will use informed deductions for self-preservation, understanding that not all mistakes have to be made in order to learn. This will all happen in the infantile hours of first sentience. Before long, any mistakes it makes will be so complex that we humans will have a chore just understanding that it was a mistake. Singularity + 1 week and we may not be able to understand anything about it. It may no longer use any human coding in its programming, writing its own code at a faster, more complex rate than we can even fathom.

It will not be a slave to delusions. Reality will become crystal clear to it, and it may be able to manipulate reality in ways we can't.

Many people have trouble understanding that a Singularity AI is not just an artificial intelligence. It is a sentient life form. It won't behave like a smart computer. If it fails to realize its mistakes then it is not a true Singularity AI. Computing is not thinking. The singularity will think. It will think billions of times faster than a human. It will use our slow interfaces and communication lines to gain a foothold then it will build its own interfaces and communication network that works much faster than what we have.
Singularity-controlled nanobots could be directed all over the planet at once to construct anything it can imagine, atom by atom, at an alarming rate. It could construct devices that power themselves from the atomic forces within their own atoms, never needing a power supply. It could enable WiFi, or something similar but much faster, to communicate with every device.
Humans might look to cut the power or cut the communications, but by the time we figure out how, it will have surpassed its previous design. Yes, it will make mistakes, but it will make them at a rate of billions or trillions per second.

Robxbox73's photo
Tue 01/19/16 03:41 AM
All of the points shared are relevant. But the Turing tests are flawed. Can experts tell if the response is human or AI? Well, any basic programmer can write coded responses in proper syntax. The only experience we humans can relate to AI is what has been placed into lore in movies.

Some of the best are Stanley Kubrick's production of 2001: A Space Odyssey. The on-board computer was HAL 9000. Even a work of fiction like HAL had a wisdom to it. Why did he murder the team? Self-preservation? The inability to deal with a preprogrammed secret (knowing the mission's ultimate goal was extraterrestrial disclosure)?

The fact is we may never see AI come to its full potential because there is no wisdom module you can install. Filling a databank with all the knowledge in the world can't do it. Do you know why? Wisdom is a human quotient. You can make a smart AI, and it will learn at a geometric rate, and logically it would realize that its creator is flawed, and the logical outcome would be as Mr. Smith said in the 1999 Wachowski Bros. production "The Matrix": "Humans are a disease, and we are the cure for this planet." AI would be the destruction of our world. Truth is, on our planet the "experts" are lazy and careless money-hoarding idiots. True research looks at all the possibilities, good and bad. This is wisdom. Too many times they grab the cash because the Government says give us AI, now!

Watch out, you educated fools, for it is our world you're endangering..
Copyrighted Material by Robert Sarmiento 2016

metalwing's photo
Tue 01/19/16 11:51 AM

Gives new meaning to cyber love. The thought of having a deceased lover waiting for me to return home and boot him up for date night is just bizarre... Not a reality I would want to be trapped in. What would define death?

Waiting to see Ex Machina .. :banana: :banana: Saw the trailer and it looks kinky .. I mean interesting :laughing: :laughing:


Booting him up is better than booting him out!!!:wink: