
In an AI arms race set off by ChatGPT, ethics may be the first casualty


As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech’s leaders are playing a game of “build fast and ask questions later” with a new technology that’s likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.

Catch up quick: As machine learning and related AI techniques matured in labs over the last decade, scholars and critics sounded alarms about potential harms the technology could fuel, including misinformation, bias, hate speech and harassment, loss of privacy, and fraud.

  • In response, companies made reassuring statements about their commitment to ethics reviews and bias screening.
  • High-profile missteps — like Microsoft’s 2016 “Tay” Twitter chatbot, which users easily goaded into repeating offensive and racist statements — made tech giants reluctant to push their most advanced AI pilots out into the world.

Yes, but: Smaller companies and startups have much less at risk, financially and reputationally.

  • That explains why it was OpenAI — a relatively small maverick entrant in the field — rather than Google or Meta that kicked off the current generative-AI frenzy with the release of ChatGPT late last year.
  • Both companies have announced multiple generative-AI research projects, and many observers believe they’ve developed tools internally that match or exceed ChatGPT’s abilities — but have kept them under wraps for fear of causing offense or incurring liability.

ChatGPT “is nothing revolutionary,” and other companies have matched it, Meta chief AI scientist Yann LeCun said recently.

  • In September, Meta announced its Make-A-Video tool, which generates videos from text prompts. And in November, the company released a demo of a generative AI for scientific research called Galactica.
  • But Meta took Galactica down after three days, following scorching criticism from scholars who found that it generated unreliable information.

What’s next: Whatever restraint giants like Google and Meta have shown to date could now erode as they seek to demonstrate that they haven’t fallen behind.

How it works: The dynamics of both startup capitalism and Silicon Valley techno-optimism create potent incentives for firms to ship new products first and worry about their social impact later.

  • In the AI image-generator market, OpenAI’s popular Dall-E 2 program came with built-in guardrails meant to head off abuse. But then Stable Diffusion, a rival model from the smaller Stability AI, stole Dall-E’s thunder by offering a similar service with far fewer limits.
  • Meanwhile, the US government’s slow pace and limited capacity to produce legislation mean it rarely keeps up with new technology. In the case of AI, the government is still almost entirely in the “making voluntary recommendations” stage.

Be smart: Tech leaders are haunted by the idea of “the innovator’s dilemma,” first outlined by Clayton Christensen in the 1990s.

  • The innovator’s dilemma holds that companies lose the ability to innovate once they become too successful: incumbents are bound to protect their existing businesses, which leaves them vulnerable to nimbler new competitors with less to lose.

Our thought bubble: The innovator’s dilemma accurately maps how the tech business has worked for decades. But the AI debate is more than a business issue. The risks could be nation- or planet-wide, and humanity itself is the incumbent with much to lose.
