Will superintelligence surpass us?
Will AI take over the world?
Ever since Charlie Chaplin’s 1936 film ‘Modern Times’, people have been skeptical of machines.
Would the robots overpower us?
Would AI take away our jobs?
Would we be overthrown by AI?
Are there risks in artificial intelligence?
Even with animal extinctions, shifting climates, rising temperatures, and dying corals all indicating that climate change is REAL, we are responding poorly.
It may be long before we realize that we should have decided to mitigate the problem sooner. The same could happen to us with AI. Wrapped up in the comfort and convenience it provides, we could be too late in recognizing the dangers it poses. One of the risks AI poses is job loss. Small and medium-sized businesses would not be able to afford or license the new-age technology of AI or robotics.
Could AI turn against us?
Access to wider resources could drive a machine to reach out and take over the world as a rational means of attaining its ultimate goal. It could also mean preventing other agents from becoming obstacles to achieving its objectives.
AI could one day be the greatest achievement in human history, but it could also bring the human race to an end unless precautionary steps are taken. Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed their apprehension that AI could become so capable that it gets out of hand.
The Future of Life Institute
In January 2015, Stephen Hawking, Nick Bostrom, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers signed the Future of Life Institute's open letter addressing the potential risks and benefits of artificial intelligence.
The FLI is a non-profit organization that works to mitigate the existential risks humanity faces or may face, including those from superintelligence.
What if superintelligence got out of hand?
Superintelligence is a powerful optimization process. If a superintelligence is given an objective x, the definition of x must be laid out correctly, because its ability to reprogram itself could lead it to modify itself and cause an intelligence explosion.
Nick Bostrom, a philosopher, defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans.
In his TED talk, Nick Bostrom mentioned that if an AI has the objective of making us smile, then while it is weak it might do things that genuinely make us smile. However, once the AI is superintelligent, it could realize there is a more effective way of making people smile: sticking electrodes into the facial muscles of humans to force a bright smile.
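Bostrom's point can be caricatured in a few lines of code. The actions and scores below are invented for illustration: a pure optimizer ranks actions only by the stated proxy metric, so the literal top scorer wins even when the means are perverse.

```python
# Toy sketch (hypothetical scores, not from any real system): an optimizer
# told only to "maximize smiles" picks whichever action scores highest on
# that literal metric, with no notion of acceptable means.
actions = {
    "tell a joke": 3,                     # a few genuine smiles
    "show a cute photo": 5,               # a few more
    "electrodes in facial muscles": 100,  # maximal 'smiles', horrifying means
}

def best_action(scores):
    """Return the action with the highest proxy score."""
    return max(scores, key=scores.get)

print(best_action(actions))  # prints "electrodes in facial muscles"
```

The flaw is not in the optimizer but in the objective: nothing in the metric encodes what humans actually value, which is exactly the gap Bostrom warns about.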
He goes on to say that people might suggest simply switching off the AI if it does something we do not approve of, but that is certainly not an easy task if we are completely dependent on the system; the off switch might not even exist. It is similar to the internet: where is the switch to turn off the internet?
He suggests that if the risks are addressed well in advance, we could look back in time and say that the right thing we did was to contemplate the coming reality.
Friendly AI, possible?
A friendly AI needs to optimize for the values that humans cherish; its actions should be ones the human race approves of. Creating a friendly AI is far more arduous than creating an unfriendly one, whose goal is simply to achieve its objective no matter the consequences.
Evolutionary psychologist Steven Pinker argues that super-intelligent machines can peacefully coexist with the human race.
AI researcher Steve Omohundro comments that unless the creator programs the machine to be destructive, the human race can never be threatened by a cybernetic revolt.
The fear of superintelligent machines taking over can be considered a precautionary thought rooted in our own history of rivalry and rebellion: we assume that anything more intelligent than us would rebel and take over the planet.
Mr. Omohundro does not deny that if AI systems could interact and evolve, that is, modify their own programming, then an AI's drive for self-preservation could conflict with the human race.
Is AI taking over the world a farfetched concern?
Humans have multifaceted rational abilities, of which machines possess only a narrow sliver. To say that machines learn is like saying that baby penguins know how to fish. In reality, adult penguins swim, capture fish, digest it, regurgitate it into their beaks, and place morsels into their children's mouths. AI is likewise being spoon-fed by human scientists and engineers, remarks Oren Etzioni.
The Turing test
In 1950, Alan Turing devised a test which holds that when a person cannot distinguish a conversation with a computer from a conversation with a human, it is a vivid sign that AI has arrived.
To date, no program has definitively passed this test. The nearest was a computer program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy. On 7 June 2014, at an event organized by the University of Reading, Eugene convinced 33% of the judges at the Royal Society in London that it was human.
For AI to replicate human thinking, it is still a long journey.