“The FTC has issued multiple warnings that existing consumer protection laws apply to AI, even as the administration and Congress struggle to outline new regulations,” and Senate Majority Leader Chuck Schumer (D–N.Y.) says new legislation could take months, notes the Post. “The FTC’s demands of OpenAI are the first indication of how it intends to enforce those warnings. If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data.”

“This latest FTC nastygram to a tech co (this one to OpenAI) reflects the short-term reality of #AI regulation in the US: We’ll witness a lot of jawboning and regulation-by-intimidation through implicit threats of undefined action,” tweeted Adam Thierer, an innovation policy analyst at the R Street Institute, explaining that by “nastygram” he means “a letter sent by a policymaker or agency that asks probing questions backed by implicit threats of undefined future regulatory action. It’s a long-standing agency practice, but one that the FTC is using more regularly to influence firm decisions.”