A prominent AI researcher told an Australian financial journal on Monday that Big Tech executives are misrepresenting the grave danger AI presents to mankind in order to consolidate their market dominance via government regulation.
Andrew Ng, co-founder of Google Brain (now part of Google DeepMind) and adjunct professor at Stanford University, told John Davidson of the Australian Financial Review, “There are definitely large tech companies that would rather not have to try to compete with open source, so they’re creating fear of AI leading to human extinction.”
For lobbyists, “it’s been a weapon to argue for legislation that would be very damaging to the open-source community,” he said.
Sam Altman, OpenAI’s CEO, has been vocal about the need for government oversight of artificial intelligence. Together with DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei, Altman signed a statement from the Center for AI Safety in May that likened the dangers posed by artificial intelligence to those posed by nuclear war or a pandemic.
Prabhakar, a policy analyst at the Center for Data Innovation in Washington, D.C., said, “The idea that AI systems will spiral out of control and make humans extinct is a compelling plotline in sci-fi thrillers, but in the real world, the fear is more of an exaggeration than a likely scenario.”
He said that developing AGI, an artificial intelligence superior to human intelligence in all domains, still faces a long and uncertain road.
“Even if AGI were realized, which is by no means certain, it would have to go rogue and break free from the control of its human creators” to pose an existential danger, he told TechNewsWorld, calling that a highly speculative scenario.
He continued by saying that worrying about an AI-caused end of the world is ignoring the technology’s many real advantages. “The gains from AI in fields like health care, education, and economic productivity are enormous and could significantly uplift global living standards,” he stated.
Threatening Open-Source Artificial Intelligence
“They are definitely using scare tactics,” Ala Shaabana, co-founder of the Opentensor Foundation, an organization dedicated to building open and accessible AI technology, said of the Big Tech AI leaders. He contended that the potential extinction of the human race ranks behind a great many other worries, that the warnings go a bit too far, and that it’s all about public relations.
Speaking with TechNewsWorld, he called artificial general intelligence “the holy grail of artificial intelligence”: an AI with consciousness that can think for itself and act independently of humans. But, he asked, how can we create anything conscious if we don’t even know what consciousness is?
Rob Enderle, president and lead analyst of the Enderle Group, an advisory services organization in Bend, Oregon, has pointed out that government regulation may represent a danger to the open-source AI community.
“It depends on how the laws and regulations are written,” he told TechNewsWorld, adding that governments often do more harm than good, especially in sectors they poorly understand.
Overly broad government regulations on AI, Prabhakar said, could create difficulties for the open-source community.
“Such regulations, especially if they place responsibility on developers of open-source AI systems for how their tools are used, could discourage contributors,” he said, noting that they may be reluctant to share their work openly for fear of legal repercussions if their open-source AI tools are misused.
Regulatory Red Tape Suffocating Open Source
Prabhakar urged the government to adopt a more nuanced approach to AI regulation. As one possible solution, he proposed carving out exclusions in the legislation for open-source models, “recognizing that open-source projects have different incentives compared to commercial ones.”
“By adjusting the rules to better fit the open-source spirit, we can aim for a scenario where regulation and open-source innovation coexist and thrive,” he said.
According to Shaabana, clauses in the White House’s Executive Order on Artificial Intelligence announced on Monday favor Big Tech AI corporations over open-source developers.
“The Executive Order is extremely manual,” he said. “It requires a lot of resources to comply with if you’re a small company or small researcher.”
He noted that companies creating AI models will soon be required to disclose their training processes and get official approval from the United States government. Only major researchers or major companies like Meta, OpenAI, and Google can hope to get past all that red tape, he said, and each of those businesses will have a specialized unit to help it weather the storm.
He went on to say that the open-source community isn’t the only one that may lose out if governments start regulating AI.
“The scientific community will also be affected,” he said. Scientists studying everything from global warming to biology, astronomy, and language have turned to open-source AI in the last few years, he explained, and that research could not have been compiled without the availability of such open-source models.
While rules may hinder small, open-source AI players, they can benefit the established AI incumbents. “Strict regulation on AI creates significant barriers to entry, particularly for emerging ventures lacking the requisite capital or expertise to navigate the many regulatory mandates,” Prabhakar said.
He went on to say that the industry might become consolidated around already-established companies because of the high expenses of complying with laws.
He argued that large technology companies will be better able to absorb and weather the effects of stricter regulations. Unlike small and medium-sized enterprises (SMEs), these companies have the resources, expertise, and support systems to navigate the regulatory landscape successfully. That disparity serves as a moat, protecting the current leaders from any direct assault by upstarts.
In May, a document purportedly written by a Google researcher went viral for its discussion of the company’s lack of a moat to protect it against open-source rivals. As the author put it, “a third faction has been quietly eating our lunch” in the form of open-source AI models that are “faster, more customizable, more private, and pound-for-pound more capable” than Google’s and OpenAI’s.
While applauding President Biden’s Executive Order for its goal of integrating AI into government, Shaabana said, “A lot of it looks like Big Tech trying to close the door behind them.”
“They’ve developed this sophisticated AI, and they really don’t want any competition,” he said.
“Ironically,” he said, “a lot of the government’s fears about bias, transparency, and anti-competitiveness can all be resolved with open source AI and without regulation.”
What happens if we do nothing and give these corporations complete sway over society? He confidently predicted that they will continue developing their own AI and enriching themselves further by using our data.