With the proliferation of artificial intelligence (AI), it was only a matter of time before some people began rejecting it outright, giving rise to a trend that could be called ‘AI veganism’ for its close resemblance to the dietary kind.
Indeed, the mass reluctance toward or outright rejection of AI is starting to look a lot like veganism: an ‘AI vegan’ is someone who abstains from using the emerging technology in much the same way a dietary vegan abstains from products derived from animals, The Conversation writes on July 29.
Triggers behind AI veganism
As it happens, such strong negative feelings toward AI can stem from several sources, including algorithmic aversion, a well-documented phenomenon in which humans show bias against algorithmic decision-making even when it is demonstrably more effective. And AI, at its core, is a set of algorithms.
One example the author offers is a study in which people preferred dating advice from other humans over advice from algorithms, even when the algorithms performed better. But there are other reasons that make AI avoidance look even more like veganism.
Among them is the awareness that many content creators never consented to their work being used to train AI, a grievance that helped spark the 2023 strikes by the Writers Guild of America and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), and one that makes people more likely to avoid the technology.
Then there are environmental concerns: research has shown that the computing resources required to support AI are growing exponentially, substantially increasing demand for electricity and water. Efficiency improvements are unlikely to curb that power drain because of the rebound effect, whereby efficiency gains are absorbed by new technologies that consume even more energy.
Finally, many people worry about the possible negative impact of AI on their cognitive health: one study found that individuals more confident in using generative AI showed weaker critical thinking, and a 2025 Cambridge University survey found that some students boycott AI, believing it could make them lazy.
That fear makes sense: using AI and large language models (LLMs) like ChatGPT and Google Gemini to write your essays could be eroding cognition, an effect that researchers at the Massachusetts Institute of Technology (MIT) have dubbed ‘cognitive debt.’
What do you think?