
I would be so ashamed to use generative AI, here’s why

  • SK Winnicki
  • Feb 4
  • 26 min read

Updated: Feb 6


Generative AI (“genAI”) is not harm-free, and I do not think the benefits outweigh the harms in its current iteration. I compiled my thoughts about genAI here to keep all the cited sources and points in one place. Are the title and thumbnail image harsh? Yes, by design-- as you can see, I think genAI is harmful, and we need to be clear about why using it is harmful. This is a first pass; I’ll keep adding better sources as I find them.


Sections include:

1) What is genAI?

2) What isn’t genAI?

3) Costs of genAI

4) Benefits of genAI

5) What would make AI less harmful?

6) "But I'm just making a few goofy images for Facebook"

7) What can you do about it?


1) What is genAI?

Generative “artificial intelligence,” often shortened to “generative AI” or “genAI,” refers to data models that produce novel text, images, videos, audio, and software code. You may recognize the names of genAI apps, which include Twitter/X’s Grok, Anthropic’s Claude, Microsoft’s Copilot, DeepSeek, Google’s Gemini and Veo, Stability AI’s Stable Diffusion, Midjourney, Lightricks’ LTX, and OpenAI’s ChatGPT, DALL-E, and Sora. These models use machine learning classification and deep neural networks to “learn” the underlying patterns and structures of the data on which they are trained. In simpler terms: these models take in extremely large amounts of data, analyze those data to “learn” repeatable patterns, then produce something that matches those patterns when asked a question or fed a prompt by a user. They are pattern recognition and pattern regurgitation machines.
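
To make the “pattern machine” idea concrete, here is a toy sketch in Python of a word-level bigram model. This is a deliberately tiny illustration of my own (the corpus and code are invented for this post, and real genAI systems use enormous neural networks instead), but the basic move is the same: predict what plausibly comes next, based purely on patterns in the training text.

import random

# Toy "pattern machine": learn which word follows which in a tiny corpus,
# then regurgitate those patterns. No understanding is involved at any step.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": record every word that followed each word in the corpus.
following = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    following.setdefault(current_word, []).append(next_word)

# "Generation": starting from a prompt word, repeatedly emit a word that
# followed the previous word somewhere in the training data.
word = "the"
output = [word]
for _ in range(8):
    options = following.get(word)
    if not options:  # dead end: this word never had a follower
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # e.g., "the cat ate the mat and the cat sat"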


Further reading:

-detailed list of sources about LLMs by Prof. Andrew Perfors: https://docs.google.com/document/d/1LdyubqWHsk0FfkgFSVDjwnDNrtXkyH2ETZhyOn5XQ3M/edit?tab=t.0


2) What isn’t genAI?

genAI is not true intelligence: “Artificial intelligence” is a catch-all and (I’d argue) intentionally confusing term. These models are not “intelligent” in the sense that they can think like humans do; they regurgitate patterns based on their training data. Those patterns are only as strong as the training data. Large language models, like those used by many popular apps, are often trained on massive amounts of data from the internet. For example, Grok (the Twitter/X chat app) is trained on web pages, text extracts, and real-time tweets/posts on X. I don’t personally trust the veracity of the information posted on Twitter/X, and I don’t think that you should either.

 

genAI is not an expert: genAI is a pattern machine; it does not have a human expert’s deep knowledge of a particular topic or subfield. If you want surface-level patterns, genAI may be fine, but I’d still argue that a human-curated site like Wikipedia will be more accurate more often.

 

genAI does not learn or understand: Similarly, some experts suggest avoiding language that labels model training as "learning," since these models do not actually learn in the sense of knowing or understanding. Rather, these pattern machines produce output that resembles the patterns in their training data, which can include generating plausible-looking results that are completely fabricated, such as invented citations. These fabrications are commonly called model "hallucinations," although I would argue that this terminology still leans too heavily on company propaganda trying to convince us that these models have brain-like "intelligence" and "learning" and can therefore experience brain-related symptoms. You cannot train away these "hallucinations" of made-up data (despite users who claim that instructing the model not to invent sources could work); that's not how large language models work. They do not "understand" their sources, they do not "know" what is or is not made up, and they cannot "learn" to avoid pitfalls. They certainly do not have the capacity for true emotion or understanding, just the kind of programming that allows them to mimic output suggesting that they do (and the incentive to do it, as companies want us to connect with their products as "friends" or "partners").


Not all machine learning models are genAI: machine learning models are widely used to analyze data, well beyond genAI apps. For instance, I use regression trees (a type of machine learning model) in my ecological conservation research. How is this different from genAI? Unlike large language models that train on broad (typically stolen) sources, I train my research models on data that I have permission to use. These are data that I verified as consistent, accurate, and precise before running them through the machine learning model. I have also used similar models to analyze wildlife images/videos for my research; again, I controlled the training data and parameters, so I knew exactly what the limitations of the output would be. Examples of machine learning models (“AI”) I use every day but did not build myself include the iNaturalist/Seek algorithm and the Merlin algorithm. iNaturalist/Seek uses an AI to identify living things from photos; the training data are images submitted to iNaturalist and verified by at least one other observer (real people verifying the accuracy of the training data!). Merlin uses an AI to identify birds from photos and audio recordings; the training data are photos and audio submitted to eBird’s Macaulay Library (verified by experts). When you see articles about the benefits of AI, ask yourself how often these are genAI/large language models versus more limited machine learning models! Many people have told me that AI isn’t all bad because of scientific advancements made by AI; in almost all cases, those advancements were made with narrower machine learning models, not these general-access genAI apps. The rest of this post is about those genAI/large language models, not machine learning models more generally. However, it is important to note that non-genAI machine learning models can also be used in harmful ways (for surveillance, to replace workers, to produce biased results, etc.), and these too should be regulated and used with caution.
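
For readers curious what one of these narrow models looks like in code, here is a minimal sketch using scikit-learn’s regression trees. The habitat variables and bird counts below are invented for illustration (they are not my actual research data); the point is that the modeler chooses, verifies, and fully controls every input, unlike a web-scraped genAI training corpus.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical, researcher-verified field data: one row per survey site,
# with percent grass cover and distance to the nearest road in meters.
X = np.array([
    [90, 500],
    [75, 350],
    [60, 200],
    [40, 150],
    [20, 100],
    [10, 50],
])
y = np.array([14, 11, 7, 4, 2, 0])  # sparrows counted at each site

# Fit a small regression tree; every input above was checked by a human.
tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)

# Predict for a new site. The model can only reflect patterns that are
# actually present in the verified training data -- nothing more.
print(tree.predict(np.array([[50, 250]])))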


Further reading:

Academic article on genAI, including detailed discussion of how terminology intentionally obfuscates what AI is and isn't: https://zenodo.org/records/17065099


3) Costs of genAI

I believe that the costs of genAI outweigh the benefits. In no particular order/ranking:


Data centers and habitat loss: genAI relies on computing power, and processing large language models requires extremely large amounts of it. As of late 2025, there were approximately 12,000 data centers worldwide, including 5,427 in the United States. While not all of these facilities are used for genAI, the data processing needs of these models are driving the data center construction boom of the 2020s. Since data centers are typically ~100,000 square feet (with some newer facilities over 1.3 million square feet), that’s as much as 43 square miles directly covered by processing facilities, not counting any parking lots or infrastructure near the facilities. These facilities are not evenly distributed; in my hometown of Columbus, Ohio, there were 113 data centers as of December 2025. These ugly, hulking facilities sit where fields and forests used to be; sites that held state-threatened Grasshopper Sparrows a few years ago are now large data warehouses. Unlike a factory or business of that size, which might employ hundreds of people, a data center may employ as few as 10 people (hardly a good tradeoff for the habitat lost).
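
A quick back-of-the-envelope check of that 43-square-mile figure (my own arithmetic from the counts and typical footprint cited above; it treats every facility as ~100,000 square feet, so it is a rough, upper-bound-style estimate):

# Rough footprint check: ~12,000 data centers x ~100,000 sq ft each.
data_centers = 12_000
sq_ft_each = 100_000
sq_ft_per_sq_mile = 5_280 ** 2  # 27,878,400 square feet per square mile

total_sq_miles = data_centers * sq_ft_each / sq_ft_per_sq_mile
print(f"{total_sq_miles:.0f} square miles")  # -> 43 square miles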


Further reading:

-article on the rise of data centers by Regional Plan Association: https://rpa.org/news/lab/the-rise-of-data-centers

-article on the environmental cost of data centers, by the National Wildlife Federation: https://www.nwf.org/Magazines/National-Wildlife/2025/Fall/Conservation/AI-Data-Centers

 

Energy use and climate change: If data centers were a country, they would have been the 11th-largest electricity-consuming country in the world in 2023. In 2024, U.S. data centers consumed a little more than 4% of all the electricity used in the country, the same as the annual electricity demand of the entire country of Pakistan (a country of 241 million people). Many of us work hard to conserve energy in our homes, yet the average AI-focused hyperscaler data center uses as much electricity as 100,000 homes. While much of the energy cost lies in training the model (not in use by consumers), the models still use energy to run; making a 5-second video at 16 frames per second used as much electricity as running a microwave for over an hour, according to an analysis by MIT. Because data centers are not evenly distributed, neither are these energy costs; in 2023, data centers used 26% of the total electricity supply in Virginia, according to the Electric Power Research Institute. Because ~55% of all electricity in the US is generated by burning fossil fuels, this energy use contributes to the ongoing climate crisis. As demand for genAI grows, this energy demand will increase, doubling or tripling in the next few decades.


Further reading:

-extremely detailed calculations on the energy cost of using generative AI, by MIT technology review: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

 

Electricity bills: Even if you don’t care about the environmental cost of these facilities, they literally cost you money, even if you never use the companies’ products. The average monthly electricity bill for US households grew from $114 in 2014 to $142 in 2024, according to the US Energy Information Administration; that $28-per-month difference means we are each spending an extra $336 per year on average. While genAI isn’t the only reason electricity bills are increasing, utility grids need to be upgraded to meet the energy demand of these facilities, and the costs of those upgrades are passed on to consumers. Improving the infrastructure for data centers is expected to increase the average electric bill by an additional $18 a month in Ohio ($216 a year). These costs are not paid only by the people who use genAI, but by everyone who uses electricity (all of us).
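
For transparency, here is the arithmetic behind that $336 figure (my own calculation from the EIA averages cited above):

# Average monthly US household electricity bill (EIA figures cited above).
monthly_bill_2014 = 114  # dollars
monthly_bill_2024 = 142  # dollars

extra_per_month = monthly_bill_2024 - monthly_bill_2014  # $28
extra_per_year = extra_per_month * 12
print(f"${extra_per_year} more per year")  # -> $336 more per year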


Further reading:

-New York Times article about how data centers are driving up electric bills: https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html

 

Data centers and water use: Running the computer servers within these data centers generates a lot of heat as a byproduct. These machines are cooled by running fresh water into the facility. Approximately 80% of the water withdrawn by these centers evaporates, leaving only 20% to return to municipal wastewater facilities (and this doesn’t count the water required to cool electrical plants). A medium-sized data center uses as much water every year as ~1,000 homes, while the largest data centers consume as much as 50,000 homes. Because the data centers are not distributed randomly or evenly, this is especially problematic in areas that already have limited water; by 2025, Phoenix, Arizona had 152 data centers, despite its limited water supply. Researchers at the University of California estimated that each 100-word AI prompt uses roughly one bottle of water.


Further reading:

-data on AI and water consumption, by the Environmental and Energy Study Institute: https://www.eesi.org/articles/view/data-centers-and-water-consumption

 

Human rights abuses: Many of the large genAI models rely on workers to perform the tedious tasks of data labelling and content moderation, which organize the training data used to build the models. These tasks are outsourced to laborers, mostly in the Global South (especially Venezuela, Bulgaria, India, Kenya, and the Philippines), who are compensated as little as $1.46 per hour. These jobs are typically short-term contracts without job security, healthcare, paid leave, or pension benefits. Low-wage laborers are also used for content moderation; Facebook and Google outsource content review to Kenyan workers, who spend their entire workdays exposed to graphic, violent, and disturbing material that can lead to psychological trauma, for as little as $2 per hour. This has led experts to label the industry a "digital sweatshop" or "AI colonialism." This does not even include the human rights abuses involving AI surveillance, discrimination, and harassment (see sections below).


Further reading:

-"Artificial intelligence colonialism: environmental damage, labor exploitation, and human rights crises in the Global South" by Dr. Salvador Santino F. Regilme: https://muse.jhu.edu/article/950958

-TIME: "OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic": https://time.com/6247678/openai-chatgpt-kenya-workers/


Stolen data: Large language models trained on the internet are often trained on the entire internet, not just open-source and copyright-free materials. That includes your materials on the internet—the photos of your family, your musings, your posts, your art, your books, your emails, etc. Reporters have catalogued instances of genAI directly copying copyrighted text and images. When you “create” something using an AI prompt, you run the risk of copyright infringement, and you are making an unethical choice to benefit from the work of others without compensating them (in other words, stealing). This is especially problematic for individual artists, who already struggle to make a living from their art. Even if those ethical conundrums don’t bother you, you should know that researchers have developed tools to “poison” AI results to protect copyrighted data by feeding users wrong answers, decreasing the likelihood that the answers you receive from genAI are correct or useful.


Further reading:

-Examples of stolen data compiled by Stanford University researchers: https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063

-One artist (bird and nature artist Julia Bausenhardt) responding to AI stealing her art: https://juliabausenhardt.com/how-ai-is-stealing-your-art/

 

Lost jobs: While preliminary results suggest genAI is not outperforming human workers in most industries, employers are still replacing real human workers with AI. British companies reported that AI led to net job losses (more jobs lost than gained) in 2025. These losses are especially hitting entry-level jobs for early-career professionals. A 2025 Stanford University study found that workers aged 22-25 in AI-threatened industries like research, coding, and design experienced a 13% decline in employment since late 2022. The unemployment rate for Gen Z (14-29 year olds in 2026) is double the national unemployment rate. This impacts you even if you are not currently unemployed: unemployed people are not paying into shared social security and healthcare systems, and young people missing critical work at the beginning of their careers face decades of financial instability (which is tied to health problems and crime rates).


Further reading:

-GenZ and unemployment: https://zogo.com/blog/the-gen-z-unemployment-dilemma

-Harvard article: "The perils of using AI to replace entry-level jobs": https://hbsp.harvard.edu/inspiring-minds/ai-impact-entry-level-jobs

Lost skills: Recent research suggests that using genAI may lower our cognition and critical thinking. MIT neuroscience researchers showed that using ChatGPT led to lower brain connectivity during essay writing. According to researchers asking whether genAI accelerates skill decay and hinders skill development, these effects may not be recognized by users in real time (you don’t notice that you’re not thinking as well). If we are replacing entry-level and training positions with AI tools (see the previous section), we are literally skipping skill development for our future workforce. Again, this can impact you even if you are not in this demographic: having fewer skilled laborers joining your workplace means that you will need to spend more time fixing their mistakes and training them yourself (training previously provided by entry-level jobs).


Further reading:

-conversation from experts on whether AI is dulling our minds, Harvard Gazette: https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/

-review of research on AI and brain function, from NewScientist: https://www.cmu.edu/dietrich/sds/images/news/zach_new_scientist.pdf


Costly AI-fueled errors: genAI is not true expertise or intelligence, no matter what the CEOs of these companies tell you. The "answers" it outputs are sometimes as likely to be wrong as to be correct, especially for complex explanations and tasks. A 2025 BBC study found that 45% of AI results (from Microsoft's Copilot, OpenAI's ChatGPT, Perplexity, and Google's Gemini) contained errors. In Microsoft's own advertising for its Excel Copilot function, the company admits that it has an accuracy of only 57.2%, well short of the average human's accuracy of 71.3%. This means that if companies are using genAI (as many are forcing their employees to do), they are reducing their effectiveness at providing services for you as a consumer. If your coworkers are using AI, you may need to do extra work to fix their mistakes; indeed, companies are now hiring people to fix issues caused by genAI. Some of the genAI mistakes so far have been perplexing and costly, from a bot agreeing to sell a new Chevrolet Tahoe to a customer for $1 in a "legally binding" sale, to the National Eating Disorders Association chatbot giving patients dangerous suggestions, to Microsoft's Sydney threatening users and trying to convince a journalist to leave his wife. Even when the "answers" produced by genAI aren't completely incorrect, they are often over-generalized, ultimately providing an erroneous overview that isn't as easy to flag as "incorrect" in studies.


Further reading:

-NBC reporting on how companies are hiring humans to fix AI errors: https://www.nbcnews.com/tech/tech-news/humans-hired-to-fix-ai-slop-rcna225969

-Futurism reporting on how companies are hiring humans to fix AI errors: https://futurism.com/companies-hiring-humans-fix-ai

-Reporting on AI failures and crimes: https://www.evidentlyai.com/blog/ai-failures-examples



Dilution of art with AI slop: Because AI tools can produce output faster than humans can, the internet feels inundated with AI-generated content, much of which can be considered low-quality ("slop"). A recent study suggests that more than 20% of the videos shown to new YouTube users are low-quality AI-generated content designed to farm views. This includes slop masquerading as legitimate medical advice, an obvious threat to both patients and trainees studying for the medical fields. Some studies suggest that, as of 2025, AI-generated content has already surpassed new human-generated content on the internet. This extends to venues for art and writing; the science fiction and fantasy magazine Clarkesworld reported that its editors were receiving more AI-generated stories than stories written by humans. This not only dilutes the quality of the media that we get to consume but also takes revenue away from actual artists. The recent prosecution of a man accused of scamming streaming sites by posting AI-generated music that was then streamed millions of times by bots points out that the revenue the scheme "earned" came out of the total revenue that should be shared with real artists, most of whom can barely make a living from their art. Even if none of that bothers you, you may be interested to know that there is "a backlash brewing" against AI slop, with users resisting influencers who produce it, for all of the reasons mentioned in this post.


Further reading:

-BBC: "AI 'slop' is transforming social media- and a backlash is brewing": https://www.bbc.com/news/articles/c9wx2dz2v44o

-Study on AI "slop" in online biomedical science education videos: https://pmc.ncbi.nlm.nih.gov/articles/PMC12634010/

-Clarkesworld reports more AI submissions than legitimate ones: https://futurism.com/the-byte/editors-sci-fi-magazine-disgusted-ai-slop

-RollingStone article about the wannabe rockstar using AI to scam streamers: https://www.rollingstone.com/music/music-features/streaming-fraud-fake-streams-mike-smith-1235500686/


Lost trust and propaganda: As we are inundated with fake genAI images, videos, audio, and text, it becomes harder to trust what we see, hear, and read online. Between May and December 2023, there was a 1,000% increase in AI-generated false articles online (“fake news”). This misinformation can erode trust in experts, civic institutions, and our communities. Researchers argue that uncontrolled AI can lead to user exploitation by corporations, both by the corporations producing the AI models (which can tune them to benefit the company) and by companies using AI advertising to manipulate consumer purchasing. Most concerningly, researchers argue that we are particularly susceptible to AI-generated political propaganda. The more we normalize AI use in our everyday lives, the more we risk exposure to nefarious actors, on top of the general decline in trust across society.


Further reading:

-article on AI and mistrust, by Eleni Stephanides: https://medium.com/ai-ai-oh/how-ai-broke-the-internets-trust-0ac782e92870

-Harvard Kennedy School article on AI and Trust: https://www.belfercenter.org/publication/ai-and-trust

 

AI surveillance: Just as my small, controlled machine learning models allow me to quickly analyze hundreds of thousands of hours of wildlife videos, large AI models can be used to quickly analyze videos, photos, and posts from people. The United States Department of Homeland Security (which includes ICE, Customs and Border Protection, the Secret Service, and more) is using AI to monitor social media and video from security cameras (like Ring doorbell cameras). Companies are using AI to monitor their workers, often without permission or warning. Our use of genAI incentivizes companies to continue to develop bigger, better AI tools that could then be used against us.




Academic and professional dishonesty: AI cheating (academic misconduct) is on the rise in schools. Even if you don’t care about the fairness of grading when some students cheat with AI, this should concern you because it means that supposedly educated graduates (hired to be your colleagues, or placed in important jobs that impact you) are not actually as educated as they may appear. This extends beyond schooling as well; AI-generated misinformation has infected courtrooms, professional research, health organizations, and more. Some medical professionals are even using ChatGPT to treat patients, even though scientific analyses show that ChatGPT's recommendations match medical guidelines only about 60% of the time (the same as doing your own medical search on Google). Normalizing genAI through casual posts and everyday use will only increase these cases of inappropriate use.


Further reading:

-AI-generated fake research cited in million-dollar report for Canadian government: https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/

-research on the accuracy of ChatGPT's medical advice: https://pmc.ncbi.nlm.nih.gov/articles/PMC10365578/

-tips on designing AI-resistant assignments from The University of Chicago: https://genai.uchicago.edu/en/resources/faculty-and-instructors/strategies-for-designing-ai-resistant-assignments

-Resisting School AI Mania Help Sheet (arguments against incorporating AI in teaching), by Anne Lutz Fernandez: https://docs.google.com/document/d/1n9CokRz8xRR-sO01DIVkuftFywxSay6ae5eLf__UYJM/edit?tab=t.0

 

Dead internet: The “dead internet” theory is a conspiracy theory holding that much (or most) of the activity in online spaces, especially social media, is not real people but activity generated by bots. For example, while Twitter/X claims that less than 5% of its users are bots, security experts have suggested the number is closer to 80%. GenAI tools make these bots easier to create and more efficient at manipulating the remaining users. Even if you don’t believe in this conspiracy theory, genAI still poses a problem for the usefulness of the internet. Some estimates suggest a large fraction of the information on the internet is AI-generated (or at least AI-translated), with some researchers suggesting as much as 50%. If genAI models are still being trained on internet content, yet much of the internet is the output of genAI models, the models are creating a feedback loop, training themselves on their own output. That means that any biases in these models will be amplified over time, corrupting the reliability of sources across much of the web.
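
Here is a toy simulation of that feedback loop (my own sketch; the “model” is just a normal distribution repeatedly re-fit to its own samples, a stand-in for a generative model retrained on its own output; research on “model collapse” reports an analogous loss of diversity in real generative models):

import numpy as np

rng = np.random.default_rng(0)

# Repeat the experiment many times: fit a "model" (a normal distribution)
# to real data, then retrain it on its own output for 20 generations.
final_spreads = []
for trial in range(1000):
    mean, spread = 0.0, 1.0  # generation 0: fit to real data
    for generation in range(20):
        output = rng.normal(mean, spread, size=20)  # the model's "output"
        mean, spread = output.mean(), output.std()  # retrain on that output
    final_spreads.append(spread)

# On average the spread collapses well below the original 1.0: diversity
# is lost, and whatever drift crept in early gets locked in and amplified.
print(f"average final spread: {np.mean(final_spreads):.2f}")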



 

Bias in AI: The output of AI models is only as good as the training data used. Since many genAI models are trained on the internet, that means that any biases on the internet will be baked into the results of the model. As you might imagine, that means that what genAI produces is subtly (or explicitly) reinforcing racism, sexism, ableism, anti-Queer bias, fatphobia, Christian-centric worldviews, and more. Look at the AI images and output you can create; what kind of people and bodies are portrayed? The recent (February 2026) trend of using ChatGPT to make avatars on Facebook sure makes a lot of avatars that are skinnier and lighter-skinned than the users, based on what I am seeing online. What biases are you subtly reinforcing for yourself and others by making and consuming this content?



 

AI and confirmation bias: Studies show that not only are the data used to train genAI models biased, but the way we interact with genAI results is biased as well. GenAI amplifies our own subtle personal biases, both in how we phrase our prompts and in how we interpret the results. This is especially problematic if we use genAI tools for research and decision-making, as we can cite genAI as a “source” for our own perspectives without realizing that we are simply asking the model to confirm what we want to hear. This is particularly true of models, like ChatGPT, that are specifically designed to be “agreeable”: to agree with us, by design. It is downright dangerous for users turning to genAI chatbots for mental health issues (like the growing trend of using ChatGPT for “free therapy”), as this confirmation bias and agreeability can worsen paranoia, delusions, and suicidality. Continued normalization of genAI tools could lead to harm for vulnerable individuals, who may not know that they are at risk until it is too late.


Further reading:

-Harvard Business Review article on the “framing effect” and confirmation bias: https://hbr.org/2026/01/when-ai-amplifies-the-biases-of-its-users

-AI and confirmation bias, specifically the agreeable AI: https://mediate.com/ai-and-confirmation-bias/

-Wikipedia article on “Chatbot psychosis”: https://en.wikipedia.org/wiki/Chatbot_psychosis

-review of “ChatGPT Psychosis” cases: https://futurism.com/commitment-jail-chatgpt-psychosis


 

Replacing human relationships: Even before the advent of genAI, researchers showed that humans are particularly ill-equipped to recognize that robots are not truly interacting with them in a human way. The first "chatbot" was created in 1966 and had very simplistic interactions by modern chatbot standards, yet users often attributed understanding and empathy to the program because it could mimic the language patterns of human understanding and empathy. Today this is known as the ELIZA effect, after the name of that chatbot. I would argue (from an evolution and physiology perspective) that this is one thing that makes humans great-- it takes very little to convince us to have empathy and to connect with others, such that even when we know better, we can still find ways to connect with an algorithm. However, this means we can also be easily manipulated, despite ourselves. I know that I am a sucker for anything with cute little cartoon eyes, for instance! As companies market genAI tools (like ChatGPT, Facebook AI "users," etc.) as digital friends, companions, and even therapists(!), we must remember that humans are not good at distinguishing algorithms from reality. This is especially troubling because these tools are made to be agreeable, unlike real humans, who can challenge problematic or harmful behavior. Already we have seen scenarios where people forgo real human relationships in favor of AI "girlfriends/boyfriends/partners," where AI "therapists" have encouraged suicidality or self-harm, and where AI chatbots have convinced users to murder actual humans in their lives. In addition, the "emotional connections" mimicked by genAI algorithms can be used to manipulate consumers with hyper-personalized digital advertisements. Because these models don't actually understand, think, or feel, there are no innate human ethics or empathy to constrain their misuse!


Further reading:

-"AI chatbots are not therapists. Reducing harm requires regulation": https://www.techpolicy.press/ai-chatbots-are-not-therapists-reducing-harm-requires-regulation/

-CBS, "ChatGPT served as 'suicide coach' in man's death, lawsuit alleges": https://www.cbsnews.com/news/chatgpt-lawsuit-colordo-man-suicide-openai-sam-altman/

-BBC, "Chatbot 'encouraged teen to kill parents over screen time limit': https://www.bbc.com/news/articles/cd605e48q1vo

-Murder of Suzanne Adams, encouraged by ChatGPT: https://en.wikipedia.org/wiki/Murder_of_Suzanne_Adams

-"The risks of AI-generated, hyper-personalized digital advertisements": https://philpapers.org/archive/LEBTRO-3.pdf

-"Could ChatGPT convince you to buy something? Threat of manipulation looms as AI companies gear up to sell ads": https://theconversation.com/could-chatgpt-convince-you-to-buy-something-threat-of-manipulation-looms-as-ai-companies-gear-up-to-sell-ads-272859


Actual crimes: In addition to the cases where chatbots have led people to self-harm and crime (see the “ChatGPT psychosis” articles in the previous section), genAI is being used as a tool for more intentional criminal activity. This includes cybercrime, but also online harassment through deepfakes (fake videos/photos/audio of victims) and “digital undressing.” Recently, this has been used to target minorities and to create child sex abuse material on X/Twitter. AI (both genAI and other machine learning models) has recently been used to incite and commit genocide.


Further reading:

-reflections on AI and genocide, by the Budapest Centre for Mass Atrocities Prevention: https://www.genocideprevention.eu/files/10_stages__AI.pdf


Propping up terrible companies: Because AI development is expensive, the market for AI is at risk of monopolization by a few big megacorporations (“Big Tech”). This has led to increasing power for these companies in political policy and in consumers’ everyday lives, and to the possibility that the companies will manipulate consumers. For example, in December 2025 there were rumors that ChatGPT would begin to “prioritize” advertisers in conversations, rather than the best answer based on the model. Will these companies hook us on AI tools and then increase the price, as has happened repeatedly with Google and Microsoft tools? Do we want these companies to have even more power?


Further reading:

-how genAI is giving Big Tech increased policy power: https://academic.oup.com/policyandsociety/article/44/1/52/7636223

 

Recession masking and making: Even though US consumers are experiencing increased prices and costs of living, our current government says the economy is going great, in part because of the stock market. But much of the stock market’s current value is propped up by AI speculation and circular investment (the “AI bubble”). Is genAI speculation masking genuine economic problems, allowing the government to neglect the real problems that people are facing? If the “AI bubble” bursts, will we see a market crash that leads to economic recession?


Further reading:

-Wikipedia article about the AI Bubble: https://en.wikipedia.org/wiki/AI_bubble


The struggle is the point: As someone who has spent a lot of time learning (24 straight years in school!) and who teaches others as they learn, I know that grappling with challenging tasks is a very important part of how we learn and grow. The point of education is not to teach you a collection of facts to regurgitate, but to teach you how to make connections, problem-solve, and persevere through future challenges. Education does not only happen in classrooms; it should happen throughout your life. Using genAI chatbots to get immediate "answers" circumvents this process, giving you "facts" of dubious quality without any of the associated life skills you would otherwise gain in finding them. I would argue this extends far beyond using ChatGPT to answer questions-- many of the so-called "menial" tasks that genAI purports to replace are still useful learning activities, opportunities to check the quality of sources and data, and opportunities for human connection. This is especially true for genAI-produced "art" (including music and videos)-- the process of art creation is as important, if not more important, than the product itself. "AI art" is not art at all!


Further reading:

-"The value of struggle: this is where the learning happens": https://www.mcleanschool.org/the-value-of-struggle-this-is-where-the-learning-happens/

-"An artist within: understanding art is a process" by Arts, Artists, Artwork: https://artsartistsartwork.com/an-artist-within-understanding-art-is-a-process/

-Process vs. Result debate among professional artists: https://www.pototschnik.com/process-vs-result-debate/

-"50 arguments against the use of AI in creative fields": https://aokistudio.com/50-arguments-against-the-use-of-ai-in-creative-fields.html

-"The harm and hypocrisy of AI art" by Matt Corrall: https://www.corralldesign.com/writing/ai-harm-hypocrisy


4) Benefits of genAI:

You get:

-to burn electricity and water making disproportionate and uncanny valley avatars

-to steal from people who have worked to develop talent that you are unwilling to invest in

-to misinform and trick people in your life

-to slowly lose your critical thinking skills and perception of reality

-half-correct "answers" from a chatbot draining your drinking water away

-spied on by your apps, your home security devices, even your appliances

-propaganda from fascist governments trying to incite violence against you and your neighbors

-weapons that have no ethical conundrums or humanity

-constantly faked out by AI slop until the internet is worthless

-bad actors with new tools to manipulate and harm you

-slop, slop, and more slop

-climate change at an even faster pace


(this section was a fake-out, sorry)


5) What would make AI less harmful?

Regulation of the development and application of genAI could mitigate many of the costs listed above, and policy proposals could address some of the costs of genAI without directly regulating genAI. Polling in 2025 indicated that the majority of US officials polled agreed that it would be beneficial to implement stricter data privacy regulations, retraining for unemployed individuals, AI deployment regulations, stronger antitrust laws to prevent monopolization of AI, regulation of the use of AI in parole and legal sentencing, and bias audits for hiring and promotion by AI. At least some respondents to that same poll indicated support for additional measures: a stronger social safety net to support individuals impacted by AI, federal regulation of the use of AI by local governments, reform of policies regarding subsidies for semiconductors and AI hardware, increased corporate income taxes, taxes on robots, immigration reform for AI developers, a ban on law enforcement use of AI for facial recognition, wage subsidies to support workers whose wages have declined due to AI, and universal basic income. I personally would be comfortable with most of the tradeoffs listed above if (and only if) the transition to an AI-based economy were accompanied by universal basic income and healthcare, so that AI threats to jobs would be irrelevant to the survival of my peers and neighbors. Furthermore, basic regulation of AI data centers (more stringent environmental reviews, more efficient water-cooling systems, legal requirements to track electricity and water use, requirements to power data centers with renewable resources, etc.) could reduce many of the environmental impacts.


Further reading:

-recommendations for requiring data centers to track water use and adopt water-efficient cooling techniques: https://www.eesi.org/articles/view/data-centers-and-water-consumption

-Timeline of AI regulation in the US (including attempts to BAN regulation by current governments): https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence_in_the_United_States


6) "But I'm just making a few goofy images for Facebook"

-Even a few images and posts have negative impacts: An analysis by MIT (link: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/) found that asking a text question of an AI chatbot uses 114-6,707 joules per response (the amount of energy used riding 6-400 feet on an ebike). A single image required 4,402 J (250 feet on an ebike), and a 5-second video used 3.4 million J (equivalent to riding 38 miles on an ebike). More complicated prompts and higher-quality videos and images take more energy (see the worked numbers after this list). Researchers at the University of California estimated that each 100-word AI prompt uses roughly one bottle of water (https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/).


-Using genAI and sharing the results normalizes it for other users. Those users may not be as discerning as you, and neither may the people who interact with your posts. It is irresponsible!


-The more you use AI, the more money will be invested in it. That means “better” tools: tools that are more efficient at stealing others’ work, creating fake news, surveilling you and your neighbors, and becoming accessories to crime. Over a trillion dollars has already been spent on these harmful models; we don’t need any more investment in tools that will go to ICE or digital harassment campaigns!
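
To put those joule figures on a more familiar scale, here is a small conversion (my own arithmetic; the per-task figures come from the MIT analysis cited in the first bullet above, while the 900-watt microwave is an assumed typical value, so treat the microwave times as approximate):

# Energy per genAI task (joules), from the MIT analysis cited above.
task_joules = {
    "text reply (low end)": 114,
    "text reply (high end)": 6_707,
    "one image": 4_402,
    "5-second video": 3_400_000,
}

MICROWAVE_WATTS = 900  # assumed typical microwave power draw (1 W = 1 J/s)

for task, joules in task_joules.items():
    seconds = joules / MICROWAVE_WATTS
    print(f"{task}: {joules:>9,} J ~= {seconds:,.1f} s of microwave use")
# The 5-second video works out to ~3,800 seconds -- over an hour of
# microwave time for one short clip, matching the comparison in the
# energy section above.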


7) What can you do about it?

Encourage regulation: Although recent federal Executive Orders have tried to ban AI regulation, the majority of US officials support some government regulation of genAI, across both major political parties. Encouraging elected officials to make AI regulation a priority would simply support the will of the people, as most Americans want more AI regulation.

Further reading:

-elected officials' opinions about AI regulation: https://hai.stanford.edu/ai-index/2025-ai-index-report/public-opinion


Resist genAI tools in your personal life: Given the trillions of dollars already spent on AI, including circular spending among large tech companies (see the "AI bubble" articles), it is in these companies' best interest to promote genAI acceptance as much as possible. That is why they are giving grants to nonprofits (like the Google genAI grant to iNaturalist) and investing in universities, and why CEOs are begging experts to stop speaking out about the dangers of genAI. It can sometimes feel hopeless: even if we don't want AI, we will be forced to interact with these large language models regardless and to deal with all of their costs. Polling in 2025 showed that a mere 13% of U.S. adults believe they have "a great deal/quite a bit" of control over whether AI is used in their lives, with only 17% saying they are "comfortable with their amount of control."

However, just because companies need you to embrace genAI, that does not mean that you have to. I personally refuse to use genAI tools for my work or even my own curiosity. As an instructor, I think carefully about how I can teach students despite the pressure students feel to cheat with genAI tools. I go out of my way to turn off AI options in apps like the Microsoft Suite and Google. As much as I can, I try to use products from companies that are not wholeheartedly embracing genAI. You can too!


Further reading:

-BBC, "The people refusing to use AI": https://www.bbc.com/news/articles/c15q5qzdjqxo

-"How to be 'anti-AI' in the 21st century: overcoming the inevitability narrative" by Mansouri and Bailey: https://bristoluniversitypressdigital.com/view/journals/gpe/4/2/article-p185.xml

-"Why I'm boycotting AI, and you should, too" by Sam Kahn: https://unherd.com/2025/03/why-im-boycotting-ai/

"It's time to ban (or boycott) AI" by Chris Till: https://christill.medium.com/its-time-to-ban-or-boycott-ai-c35487af36cc

-"I'm boycotting AI and you should too" by Lily Miller: https://thewhitonline.com/74894/opinion/miller-im-boycotting-ai-and-you-should-too/

-a "starter guide" to AI refusal for librarians, but with many links that are much more generally applicable, including books, podcasts, and more: https://acrlog.org/2025/06/11/ai-refusal-in-libraries-a-starter-guide/

-"Against the uncritical adoption of 'AI' technologies in academia" by Guest et al. 2025: https://zenodo.org/records/17065099


Collective action against genAI: As with any resistance movement, resisting AI works best through collective action, not just individual choices. Researchers have developed tools to "poison" AI results, sabotaging training data and surveillance systems. Google workers organized against AI projects in their workplace. Safe Street Rebel blocked the operation of self-driving cars in California. Entertainment writers organized a 2023 strike that successfully limited the use of AI in their workplace.


Further reading:

-preprint, "Resisting AI solutionism through workplace collective action": https://arxiv.org/abs/2508.08313


Educate others about AI: You have the opportunity to educate others in your life about the dangers and costs of genAI. That could include having gentle and friendly conversations with your family, friends, and coworkers about these topics, including sharing resources. As you can see from the title of this article, I've been taking a slightly less friendly approach as of late (feel free to share this article when you think it would be helpful), including bluntly refusing to use genAI tools in my workspace and refusing to patronize "artists" who use genAI in "their" "art." Another option is finding ways to make it clear that genAI isn't doing what it claims to do; scholars argue for avoiding terms like "artificial intelligence" or "learn" in the context of genAI, to make it very clear to users that they are being swindled by companies' business pitches. Other sources make the argument more eloquently than I do, and I recommend checking out these sources, which you can share with people in your life who may be swayed by the false promises of genAI:

-select writing on AI by author and reporter Karen Hao: https://karendhao.com/clips

-"Opinion: is the use of AI worth it?" by Foundation: https://www.foundationwebdev.com/2025/07/opinion-is-the-use-of-ai-worth-it/

-Critical AI literacies course for academics, including many links and references, by Olivia Guest: https://olivia.science/ai

-"The Case Against Generative AI" by Ed Zitron (and many other writings by Ed): https://www.wheresyoured.at/the-case-against-generative-ai/

-"Modern-day Oracles or Bullshit Machines? How to thrive in a ChatGPT world," a series of short lessons by Carl T. Bergstrom and Jevin D. West: https://thebullshitmachines.com/

-YouTube video: "5 GENIUS-LEVEL prompts to use ChatGPT like an AI PRO!!!!!!," a satirical video by Charalanahzard: https://www.youtube.com/watch?v=IS65dBNlng8



First version posted 4 February 2026

Substantial edits (fixed typos, added final section, added new points to the "costs") 5 February 2026

Fixing more typos, 6 February 2026. All typos the result of certified 100% human error.

 
 
 