In a meeting that involved more CEOs named Sam than one might expect at a vineyard retreat, the heads of major artificial intelligence companies gathered with U.S. President Joe Biden at the White House to promise they would not, in so many words, break the world. The CEOs of OpenAI, Alphabet, Microsoft and Anthropic have publicly committed to a list of voluntary safeguards for their rapidly evolving AI technologies, which, depending on your perspective, are either ushering in a golden age or quietly sending humanity toward a plotline from a mid-tier sci-fi film.
The commitments, which are not legally binding but do make for lovely PowerPoint slides, include measures like watermarking synthetic content so that when an AI generates an uncanny image of the Pope wearing Yeezys, viewers will at least be able to tell it was machine-made. The companies also pledged to conduct internal and external security testing, report major system risks and, presumably, buy at least one ethics book each for the office library.
Among the most charming of pledges was the promise to share information with governments, academics and civil society about the best and worst ways to develop AI. This noble gesture will surely see AI researchers the world over finally nodding in agreement rather than shouting in panic on social media platforms. In exchange for their honesty and watermarking prowess, the companies receive no fines, no new regulations and, perhaps most importantly, no awkward subpoenas—yet.
Meanwhile, critics pointed out that relying on self-regulation in the tech industry has historically gone about as well as hiring a pack of raccoons to guard a picnic. While the commitments are a step forward, they fall well short of enforceable rules, a fact that has not escaped the notice of lawmakers who have only just gotten the hang of email.
Still, the Biden administration lauded the announcements as progress, framing them as part of a broader effort to nudge the private sector toward not accidentally inventing Skynet during a coffee break. National Security Advisor Jake Sullivan noted the companies were showing good faith, which some might argue is easier to do when the alternative is appearing before a House committee to explain why a chatbot bought someone a flamethrower.
The EU and China are, meanwhile, steaming ahead with their own binding AI regulations, a stark contrast to the American approach of polite requests and strongly worded press releases. Whether self-imposed responsibility will prevent platform-generated mayhem remains to be seen, but for now, the biggest names in AI appear eager to convince us they can build the future responsibly without having to read the instruction manual.
Because nothing says “safe innovation” quite like a pinky promise in a press conference.