The computer scientists who helped lay the foundation for today’s AI technology warn of its dangers, but that doesn’t mean they agree on what those dangers are and how to prevent them.
Smart things can outsmart us:
Human survival is at stake if “smart things can outsmart us,” Geoffrey Hinton, widely called the godfather of artificial intelligence, said Wednesday at a lecture at the Massachusetts Institute of Technology.
AI pioneer Geoffrey Hinton has announced that he is leaving his part-time role at Google so he can more freely voice his concerns about the rapidly evolving technology.
Threat to humanity:
Hinton fears that future versions of this technology pose a real threat to humanity. These systems, he argues, could become smarter than humans. Most people assumed that moment lay in the distant future, and so did he: he once estimated it was 30 to 50 years away, maybe even longer. Hinton no longer believes that. He says Google has acted responsibly in developing AI, but he wanted to be able to speak without worrying about the consequences for the company.
Red panic button:
Think about that for a moment: the annihilation of humanity from planet Earth. That’s why key industry leaders are feverishly ringing alarm bells. These technologists and scientists continue to press the red panic button and do everything in their power to warn of the potential threats artificial intelligence poses to the very existence of civilization.
Hundreds of prominent AI scientists, researchers, and other experts – including Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind – reiterated their deep concern for the future of humanity by signing an open letter warning the public that this rapidly evolving technology could endanger humanity. Its message is clear:
Reducing the risk of extinction caused by AI should be a global priority alongside other societal-scale threats.
It could not be simpler or more urgent. Industry leaders are warning that the coming AI revolution must be taken as seriously as the threat of nuclear war. They urge politicians to erect guardrails and pass basic laws to rein in the nascent technology before it’s too late.
There are many “significant and urgent threats from AI,” not just extinction: systemic bias, disinformation, abuse, cyber-attacks, and weapons, for example.
Yet the ominous message these experts are desperately trying to convey to the public just doesn’t seem to cut through the noise of everyday life. AI experts may be ringing alarm bells, but the level of concern – and in some cases sheer terror – they feel about this technology is not conveyed to the masses by the media with the same urgency.
We want artificial intelligence that enriches our lives: AI that works for people and for their benefit, helps us cure cancer, and finds solutions to the climate crisis. We can do it. We can steer artificial intelligence research labs toward specific applications that contribute to those areas. But if we embark on an arms race to make AI available to everyone on the planet as quickly as possible and with as little testing as possible, the equation will not end well.
One thing is clear: the techniques we develop can be put to a multitude of uses that will benefit hundreds of millions of people.