Powerful AI: Outsmarting Humans?

- Sep 22, 2020
- 3 min read
- By Tanya

In the 2017 survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” elite researchers in artificial intelligence predicted that “human-level machine intelligence,” or HLMI, has a 50 percent chance of arriving within 45 years and a 10 percent chance of arriving within 9 years.
Origin of AI
The term artificial intelligence, or the simulation of intelligence in computers or machines, was coined in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. Since then, scientists have developed AIs that excel in specific areas, such as winning at chess, cleaning the kitchen floor, and recognizing human speech. Such “narrow” AIs, as they are called, have superhuman capabilities, but only in their particular areas of dominance.
AI Today
Artificial Intelligence (AI) has become a popular topic, has been applied to nearly every area of modern life, and is the focus of diverse ongoing research. However, the development of AI across different fields has been quite uneven. Many experts have predicted that, in time, AI will exceed Human Intelligence (HI), but there is no solid consensus among them. Moreover, the controversy over how to adequately define AI and HI makes this question difficult to answer.
By exploring the relationship between AI and HI, we can better understand the impact of AI on us as human beings and learn more about humanity itself. Recently, AI has been the most discussed topic in the technology field. While industry debates the future of AI and human society, people are starting to worry about whether they will someday be replaced by AI. Some fear that AI may inadvertently extract their private information, or that their jobs will be taken over by AI in the future.
There’s no way around it — artificial intelligence is changing human civilization, from how we work to how we travel to how we enforce laws.
As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. A Tesla Model 3 owner in California died while using the car’s Autopilot feature, and in Arizona a self-driving Uber vehicle struck and killed a pedestrian (even though a safety driver was behind the wheel).
Let’s see what the experts have said on this very question:
Stephen Hawking: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Elon Musk: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
Tim Urban: “And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.”
Conclusion
The follow-up questions, of course, are how far off that moment is and whether we can prevent it. Right now, even the experts and those who work in the AI field do not agree. At one extreme are those like Elon Musk and Stephen Hawking, who worry that AI will bring an end to humanity; at the other end of the spectrum are those like Mark Zuckerberg, who believe AI will improve humanity and foresee no significant risks. The possibilities for AI applications may well be endless, but if even a few of our time’s top minds and great inventors disagree about the potential for danger, perhaps it’s time to treat this question as something more than a recycled Hollywood plot.