New York Times Guild Slams Paper’s AI Policies as ‘Woefully Inadequate’
Members of the New York Times’ union slammed the company’s AI policies in a letter to management as “woefully inadequate” on Tuesday, citing TheWrap’s report on how a freelance book critic used AI for a Times book review as evidence that AI-generated content makes “readers lose trust in what we do.”
“Our dedicated human journalists — including and especially the Times Guild’s 1,500 members — make this paper a reliable source for millions of subscribers who want quality reporting and commentary,” the letter, signed by the union’s AI subcommittee members Isaac Aronow, Parker Richards and Lydia DePillis, read. “When the Times instead publishes AI-generated work, intentionally or not, our readers lose trust in what we do. This is unacceptable. At present, the Times’ standards on AI use are woefully inadequate.”
The letter, which was first reported by Axios, was addressed to Times CEO and president Meredith Kopit Levien, publisher A.G. Sulzberger, executive editor Joe Kahn and opinion editor Katie Kingsbury. It was also addressed to managing editors Marc Lacey and Carolyn Ryan, who are the management representatives in contract negotiations.
The staffers highlighted TheWrap’s report from last week, which revealed the paper was cutting ties with freelance book critic Alex Preston after it discovered he used AI to help write a review that incorporated elements of a Guardian piece on the same book. Preston told TheWrap he used the tool “improperly” and failed to catch “overlapping language” with the Guardian review, and the Times called the usage “a serious violation of the Times’s integrity and fundamental journalistic standards.”
The staffers said the Times’ current public guidelines on the technology are “often unclear or open to interpretation,” arguing that they place the burden on writers and editors instead of company leaders.
“The company calls on employees to use AI ‘transparently,’ but often fails to disclose how AI is used in stories (and, conversely, has at times claimed that AI did work that was in fact done by human Guild members),” the members wrote. “We are told to use AI ‘ethically,’ but given little guidance on what exactly that means.”
The guild, which represents roughly 1,500 Times staffers, did not specify to which stories it was referring. The guild has also asked for the company to include protections around AI in the performance review process, offer clearer disclosures over how the technology is used in stories and strengthen protections over how AI uses a Times staffer’s name, image and likeness.
Negotiations around AI have stalled talks between the Times and its guild as both sides have tried to hammer out a new agreement following the previous contract’s Feb. 28 expiration.
Lacey told Times staffers in a letter on Tuesday that both sides agreed that “having strong AI guidelines and standards” would “ensure the integrity of our work and maintain the trust of our readers,” but noted that the guild’s quest to define those guidelines in the contract could dampen how the paper experiments with the evolving technology.
“Where the company conflicts with guild leadership is whether we write AI restrictions and prohibitions into a contract lasting several years,” he wrote. “AI technology is ceaselessly evolving – quickly – and we believe that this rapid change is precisely why we must remain flexible.”
Lacey also said both sides have tentatively agreed to disability accommodation language, a point the company previously tried to tie to its AI proposal.
AI negotiations have spread across newsrooms. Staffers at the Sacramento Bee and the Charlotte Observer, two news outlets owned by McClatchy, raised concerns with management over a new AI tool meant to repurpose older stories under new headlines, and unionized ProPublica staffers staged a 24-hour walkout on Wednesday after contract talks, including over AI provisions, broke down.