The New York Times has ended its relationship with freelance journalist Alex Preston after it emerged that artificial intelligence was used to help write a book review that contained similarities to previously published work. The case has reignited debate around the role of AI in journalism and the ethical boundaries of its use in editorial content.

The issue came to light when a reader noticed striking similarities between a review published in January and an earlier critique of the same book. The review in question focused on Watching Over Her by Jean-Baptiste Andrea and raised concerns over overlapping language and descriptions.

Investigation Reveals AI-Assisted Writing

Following the complaint, the publication launched an internal investigation. During the process, the writer admitted to using an AI tool to assist in drafting the review. According to the findings, the tool incorporated material from another published review, which was not properly identified or removed before submission.

An editor's note was later added to the review acknowledging the issue. It stated that the use of AI and the inclusion of unattributed material constituted a breach of editorial standards.

Journalist Issues Apology

The freelance writer acknowledged the mistake and expressed regret over the incident. In a statement, he said he was "hugely embarrassed" and admitted that he had "made a serious mistake" in relying on AI during the drafting process.

He further clarified that he had not used AI in his previous work for the publication and had immediately taken responsibility after the issue was identified.

Overlap With Existing Review Raises Concerns

The controversy centered on similarities between passages in the published review and an earlier critique of the same book. Descriptions of characters and thematic conclusions appeared closely aligned, raising concerns about originality and attribution.
The publication subsequently informed the outlet where the original review appeared and updated its own article to reflect the issue.

Growing Debate Over AI in Journalism

The incident highlights increasing concerns within the media industry about the use of artificial intelligence in content creation. While AI tools are becoming more common, experts warn that unsupervised use can lead to issues such as plagiarism, factual inaccuracies, and erosion of trust.

The case also underscores the importance of maintaining strict editorial standards as news organizations adapt to rapidly evolving technology.

A Broader Industry Challenge

The controversy comes at a time when publishers and media organizations worldwide are grappling with how to regulate AI use. From book publishing to journalism, the rise of AI-generated content has raised questions about authorship, originality, and accountability.

For major publications, maintaining credibility remains paramount, and this case serves as a reminder of the risks associated with relying on automated tools without proper oversight.