> “This isn’t just a new value of the Hubble constant,” the collaboration notes, “it’s a community-built framework that brings decades of independent distance measurements together, transparently and accessibly.”
Don't love that I can't read sentences like this without wondering if an LLM was involved.
Yeah, it's sort of an LLM smell, but honestly the models learned that pattern because it's common in the training data. People write that way because it sounds like they're revealing something profound.
LLM inference doesn't just regurgitate the training corpus, though; RLHF is almost certainly to blame for amplifying this particular tic. There's probably a Google Ngram chart that would show how common the construction actually is in human writing.
https://noirlab.edu/public/news/noirlab2611/?nocache=true&la...