Can Artificially Intelligent Machines be a Threat?



Yes, according to Stephen Hawking. The reason? They could become too clever.


In his first 'Ask Me Anything' session on Reddit, Hawking warned humanity that how artificial intelligence is developed matters: a machine could become so competent at pursuing its goals that it harms us by accident if those goals aren't aligned with ours.


“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants”, says Hawking.


Our own intelligence is no upper limit on the intelligence of the things we create: "we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents". If machines develop the ability to make themselves progressively smarter, we may face an "intelligence explosion" resulting in "machines whose intelligence exceeds ours by more than ours exceeds that of snails", explains Stephen Hawking.


What can we do to make our future relationship with machines a healthy and peaceful one, rather than a Terminator-style conflict?


"AI can be either the best or the worst thing ever to happen to humanity. As such we should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence", suggests Hawking; adding that "we should start researching this today rather than the night before the first strong AI is switched on."


