- Benj Edwards for Ars Technica: New OpenAI tool renews fears that “AI slop” will overwhelm scientific research. The writing has been on the wall for at least a year, but the capitulation of the academic publishing complex is shockingly fast, even accounting for its rotten core.
- Tyler Kingkade for NBC News: To avoid accusations of AI cheating, college students are turning to AI. The article is even worse than the headline: professors and teaching assistants are the ones turning to AI to judge whether a student used it in their work, and they will not accept any evidence to the contrary. To repent, the accused must take a class on writing with integrity and write an apology. Something tells me the younger generations will not be kind to LLMs.
- Cal Newport: The Dangers of “Vibe Reporting” About AI. Newport flags vague reporting that attempts to associate AI adoption with productivity gains that lead to job loss. In Enshittification, Cory Doctorow uncovers this maneuver for what it is: a ball-under-cup scam that relies on our not paying attention.
- Simon Willison: Moltbook is the most interesting place on the internet right now. It is interesting in the same way reading about major pileups is interesting. And maybe we can even learn something from them! But I’d rather they didn’t exist, because exposure to the stock phrases of AI-generated text raises my blood pressure.
- Rohit Krishnan: Epicycles All The Way Down. I have become wary of all long texts on Substack, X, LinkedIn, and the like, but I trust this one, about the way LLMs may or may not reason, to be largely written by a human; the sections that aren’t are explicitly called out.
- Andrew J. Cowan for ASH Clinical News: Imagining the Ideal AI Partnership in Hematology. Good to see a fellow hematologist explicitly call out Doctorow’s reverse centaurs as something to be avoided at all costs. And yes, I have added the relevant book to the pile.
- On the off chance you understand Serbian, this week’s episode of Priključenija also fits the theme.