Treat ChatGPT and AI Like Bio Weapons, Not Nuclear Bombs


Humans today are developing perhaps the most powerful technology in our history: artificial intelligence. The societal harms of AI — including discrimination, threats to democracy, and the concentration of influence — are already well-documented. Yet leading AI companies are in an arms race to build increasingly powerful AI systems that will escalate these risks at a pace we have not seen in human history.

As our leaders grapple with how to contain and control AI development and the associated risks, they should consider how regulations and standards have allowed humanity to capitalize on innovations in the past. Regulation and innovation can coexist, and, especially when human lives are at stake, it is imperative that they do.

Nuclear technology provides a cautionary tale. Although nuclear energy is more than 600 times safer than oil in terms of human mortality and capable of enormous output, few countries will touch it because the public met the wrong member of the family first.

We were introduced to nuclear technology in the form of the atomic and hydrogen bombs. These weapons, the first in human history capable of ending human civilization, were the product of an arms race that prioritized speed and innovation over safety and control. Subsequent failures of adequate safety engineering and risk management — which famously led to the nuclear disasters at Chernobyl and Fukushima — destroyed any chance of widespread acceptance of nuclear power.

Despite the overall risk assessment of nuclear energy remaining highly favorable, and despite decades of effort to convince the world of its viability, the word ‘nuclear’ remains tainted. When a technology causes harm in its nascent phases, societal perception and regulatory overreaction can permanently curtail that technology’s potential benefit. Because of a handful of early missteps with nuclear energy, we have been unable to capitalize on its clean, safe power, and carbon neutrality and energy stability remain pipe dreams.

But in some industries, we have gotten it right. Biotechnology is a field incentivized to move quickly: patients are suffering and dying every day from diseases that lack cures or treatments. Yet the ethos of this research is not to ‘move fast and break things,’ but to innovate as fast and as safely as possible. The speed limit of innovation in this field is set by a system of prohibitions, regulations, ethics, and norms that ensures the wellbeing of society and individuals. It also protects the industry from being crippled by backlash to a catastrophe.

By banning biological weapons through the Biological Weapons Convention during the Cold War, opposing superpowers were able to come together and agree that creating these weapons was not in anyone’s best interest. Leaders saw that these uncontrollable, yet highly accessible, technologies should not be treated as a mechanism to win an arms race, but as a threat to humanity itself.

This pause on the biological weapons arms race allowed research to develop at a responsible pace, and scientists and regulators were able to implement strict standards for any new innovation capable of causing human harm. These regulations have not come at the expense of innovation. On the contrary, the scientific community has established a bioeconomy, with applications ranging from clean energy to agriculture. During the COVID-19 pandemic, biologists translated a new type of technology, mRNA vaccines, into safe and effective protection at a pace unprecedented in human history. When significant harms to individuals and society are on the line, regulation does not impede progress; it enables it.

A recent survey of AI researchers revealed that 36 percent feel that AI could cause nuclear-level catastrophe. Despite this, the government response and the movement towards regulation have been sluggish at best. This pace is no match for the surge in technology adoption, with ChatGPT now exceeding 100 million users.

This landscape of rapidly escalating AI risks recently led 1,800 CEOs and 1,500 professors to sign a letter calling for a six-month pause on the development of even more powerful AI and an urgent start to the work of regulation and risk mitigation. This pause would give the global community time to reduce the harms already caused by AI and to avert potentially catastrophic and irreversible impacts on our society.

As we work towards a risk assessment of AI’s potential harms, the loss of positive potential should be included in the calculus. If we take steps now to develop AI responsibly, we could realize incredible benefits from the technology.

For example, we have already seen glimpses of AI transforming drug discovery and development, improving the quality and cost of health care, and increasing access to doctors and medical treatment. Google’s DeepMind has shown that AI is capable of solving fundamental problems in biology that had long evaded human minds. And research has shown that AI could accelerate the achievement of every one of the UN Sustainable Development Goals, moving humanity towards a future of improved health, equity, prosperity, and peace.

This is a moment for the global community to come together — much like we did fifty years ago at the Biological Weapons Convention — to ensure safe and responsible AI development. If we don’t act soon, we may be dooming a bright future with AI and our own present society along with it.


Emilia Javorsky, M.D., M.P.H., is a physician-scientist and the Director of Multistakeholder Engagements at the Future of Life Institute, which recently published an open letter advocating for a six-month pause on AI development. She also signed the recent statement warning that AI poses a “risk of extinction” to humanity.



Source link: https://gizmodo.com/ai-chatgpt-biological-weapon-nuclear-bomb-ai-laws-1850496771
