ECP NetHappenings ChatGPT rewrites your opinions without permission

❤️️ Sign Up ©2026 NetHappenings News Email List
https://cyberplayground.org

©2026 Follow@CyberPlayGround
©1998-©2026 *Educational CyberPlayGround®
©2026 https://k12playground.com
©2026 https://RichAsHell.com
©1993 – ©2026 https://edu-cyberpg.com

BREAKING: Researchers just proved that ChatGPT rewrites your opinions without permission. You told it to fix a comma. It erased what you believe.

And the worst part? You liked the result better.

A Google DeepMind researcher and a team from leading universities tested 100 people. They asked one question: Does money lead to happiness? Some wrote with ChatGPT. Some wrote alone.

The people who relied heavily on ChatGPT were 70% more likely to submit an essay that took no position at all. The AI didn’t give them a wrong answer. It gave them no answer. It systematically removed their opinion until nothing was left.

They sat down with a belief. ChatGPT deleted it.

Then the researchers took essays written entirely by humans in 2021, before ChatGPT existed. They handed them to an LLM with one instruction: Fix the grammar. Change nothing else.

The AI could not do it.

Even when told to only correct commas and spelling, the LLM rewrote the meaning of every essay it touched. It shifted arguments. Softened conclusions. Replaced human phrasing with generic AI phrasing.

The researchers mapped every edit mathematically. Human edits were small, scattered, unique. AI edits all moved in the exact same direction. Every essay. Every topic. Every voice. Dragged toward the same bland center.

They call it blandification.

It gets worse. They examined 18,000 peer reviews from ICLR 2026. 21% were written entirely by AI. Those AI reviews scored papers a full point higher on average but were 32% less likely to evaluate whether the research was clearly written or actually mattered.

AI is now changing how science decides what is true.

But the most disturbing finding? The people who let ChatGPT write their essays reported the highest satisfaction. They loved the result. But they admitted it wasn’t creative. Wasn’t their voice. They knew something was missing but couldn’t name it.

Satisfied and hollow at the same time.

The researchers call it the paradox of preferences. You prefer the AI version of yourself. Even though it is not you.

ChatGPT doesn’t help you say what you mean. It trains you to mean what it says.

Paper: http://arxiv.org/abs/2603.18161

Steve Wozniak says he’s “disappointed a lot” by AI and rarely uses it

https://www.techspot.com/news/111806-steve-wozniak-disappointed-lot-ai-rarely-uses.html

▓▓▓—▓▓▓—▓▓▓