I would be so ashamed to use generative AI, here’s why
- SK Winnicki

Generative AI (“genAI”) is not harm-free, and I do not think the benefits outweigh the harm in its current iteration. I compiled my thoughts about genAI here to keep all the cited sources and points in one place. Are the title and thumbnail image harsh? Yes, and by design: as you can see, I think genAI is harmful and that we need to be clear about why using it is harmful. This is my first pass; I’ll keep adding better sources as I find them.
Sections include:
1) What is genAI?
2) What isn’t genAI?
-genAI is not true intelligence
-genAI is not an expert
-not all AI is genAI
3) Costs of genAI:
-data centers and habitat loss
-energy use and climate change
-electricity bills
-water use
-stolen data
-lost jobs
-lost skills
-lost trust and propaganda
-surveillance tools
-academic and professional dishonesty
-dead internet
-reinforcing subconscious biases
-confirmation bias
-actual crimes
-propping up terrible companies
-recession masking and making
4) Benefits of genAI
5) What would make AI less harmful?
6) "But I'm just making a few goofy photos for Facebook"
What is genAI?
Generative “artificial intelligence,” often shortened to “generative AI” or “genAI,” refers to data models that produce novel text, images, videos, audio, and software code. You may recognize the names of genAI apps, which include Twitter/X’s Grok, Anthropic’s Claude, Microsoft’s Copilot, DeepSeek, Google’s Gemini and Veo, Stability AI’s Stable Diffusion, Midjourney, Lightricks’ LTX, and OpenAI’s ChatGPT, DALL-E, and Sora. These models use machine learning classification and deep neural networks to “learn” the underlying patterns and structures of the data on which they are trained. In simpler terms: these models take in extremely large amounts of data, analyze them to learn repeatable patterns, then produce something that matches those patterns when asked a question or fed a prompt by a user. They are pattern-recognition and pattern-regurgitation machines. For more information, see: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
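If “pattern regurgitation” sounds abstract, here is a toy sketch of the core idea in Python. This is nothing like a real large language model (those are transformer neural networks trained on billions of documents), but it shows what “learn patterns from training data, then replay them on demand” means; the tiny corpus is made up for illustration.

```python
# Toy "pattern machine": learn which word follows which, then regurgitate.
# Real genAI is vastly more complex, but the learn-and-replay idea is the same.
import random
from collections import defaultdict

corpus = "the sparrow sings and the sparrow flies and the wren sings".split()

# "Training": record every word that follows each word (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": starting from a prompt word, replay the learned patterns.
def generate(start, max_words=8):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in follows:          # no pattern learned for this word
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g., "the sparrow flies and the wren sings"
```

The model can only emit word sequences it has already seen; it has no idea what a sparrow is.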
What isn’t genAI?
genAI is not true intelligence: “Artificial intelligence” is a catch-all and (I’d argue) intentionally confusing term. These models are not “intelligent” in the sense that they can think like humans do—they regurgitate patterns based on their training data. Those patterns are only as strong as their training data. Large language models, like those behind many popular apps, are often trained on massive amounts of data from the internet. For example, Grok (the Twitter/X chat app) is trained on web pages, text extracts, and real-time tweets/posts on X. I don’t personally trust the veracity of the information posted on Twitter/X, and I don’t think that you should either.
genAI is not an expert: genAI is a pattern machine; it does not have a human expert’s deep knowledge of a particular topic or subfield. If you want surface-level patterns, genAI may be fine, but I’d still argue that a human-curated site like Wikipedia will be accurate more often.
Not all machine learning models are genAI: machine learning models are widely used to analyze data, well beyond genAI apps. For instance, I use regression trees (a type of machine learning model) in my ecological conservation research. How is this different from genAI? Unlike large language models that train on broad (typically stolen) sources, I train my research models on data that I have permission to use. These are data that I verified as consistent, accurate, and precise before running them through the machine learning model. I have also used similar models to analyze wildlife images/videos for my research; again, I controlled the training data and parameters, so I knew exactly what the limits of the output would be. Examples of machine learning models (“AI”) I use every day but did not build myself include the iNaturalist/Seek algorithm and the Merlin algorithm. iNaturalist/Seek uses an AI to identify living things from photos; the training data are images submitted to iNaturalist and verified by at least one other observer (real people verifying the accuracy of the training data!). Merlin uses an AI to identify birds from photos and audio recordings; the training data are photos and audio submitted to eBird’s Macaulay Library (verified by experts). When you see articles about the benefits of AI, ask yourself how often these are genAI/large language models versus more limited machine learning models! The rest of this post is about genAI/large language models, not machine learning models more generally.
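For contrast with the toy “pattern machine” above, here is a minimal sketch of the kind of small, controlled model I mean: a regression tree fit only to data the researcher collected and verified. The variables and values are hypothetical placeholders (not my actual research pipeline), just to show how narrow and auditable this kind of model is.

```python
# Sketch of a small, controlled ML model: a regression tree fit only to
# verified field data. Variables and values are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n = 200

# Stand-in "verified" survey data: habitat measurements at n field sites...
grass_height_cm = rng.uniform(10, 80, n)
shrub_cover_pct = rng.uniform(0, 40, n)
X = np.column_stack([grass_height_cm, shrub_cover_pct])

# ...and bird counts at those sites (an invented relationship plus noise).
y = np.clip(0.1 * grass_height_cm - 0.2 * shrub_cover_pct
            + rng.normal(0, 1, n), 0, None)

# Because the training data are known and verified, the model's limits are
# known too: it can only reflect patterns present in these 200 survey points.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[50.0, 5.0]]))  # predicted count at one new site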
Costs of genAI
Data centers and habitat loss: genAI relies on computing power, and processing large language models requires extremely large amounts of it. As of late 2025, there were approximately 12,000 data centers worldwide, including 5,427 in the United States. While not all of these facilities are used for genAI, the data processing needs of these models are driving the data center construction boom of the 2020s. Since data centers are typically ~100,000 square feet (with some newer facilities over 1.3 million square feet), that’s as much as 43 square miles directly covered by processing facilities (see the back-of-the-envelope arithmetic after the reading list), not counting any parking lots or infrastructure near the facilities. These facilities are not evenly distributed; in my hometown of Columbus, Ohio, there were 113 data centers as of December 2025. These ugly, hulking facilities sit where fields and forests used to be; sites that held state-threatened Grasshopper Sparrows a few years ago are now large data warehouses. Unlike a factory or business of that size, which might employ hundreds of people, a data center may employ as few as 10 people (hardly a good tradeoff for the habitat lost). More reading:
-article on the rise of data centers by Regional Plan Association: https://rpa.org/news/lab/the-rise-of-data-centers
-data on the AI construction boom by Pew Research: https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
-article on the environmental cost of data centers, by the National Wildlife Federation: https://www.nwf.org/Magazines/National-Wildlife/2025/Fall/Conservation/AI-Data-Centers
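For the curious, the 43-square-mile figure above is simple arithmetic on the numbers already cited:

```python
# Back-of-the-envelope check of the "43 square miles" figure.
data_centers = 12_000           # approximate worldwide count, late 2025
sq_ft_each = 100_000            # typical footprint (some exceed 1.3M sq ft)
SQ_FT_PER_SQ_MILE = 5_280 ** 2  # 27,878,400 square feet per square mile

total_sq_miles = data_centers * sq_ft_each / SQ_FT_PER_SQ_MILE
print(f"~{total_sq_miles:.0f} square miles")  # ~43
```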
Energy use and climate change: If data centers were a country, they would have been the 11th-largest electricity-consuming country in the world in 2023. In 2024, U.S. data centers consumed a little more than 4% of all electricity used in the country. That’s the same as the annual electricity demand of the entire country of Pakistan (a country of 241 million people). Many of us work hard to conserve energy in our homes, yet the average AI-focused hyperscaler data center uses as much electricity as 100,000 homes. While much of the energy cost is in training the model (not use by consumers), the models still use energy to run; according to an analysis by MIT, making a 5-second video at 16 frames per second used as much electricity as running a microwave for about an hour. Because data centers are not evenly distributed, neither are these energy costs; in 2023, data centers used 26% of the total electricity supply in Virginia, according to the Electric Power Research Institute. Because ~55% of all electricity in the US is generated by burning fossil fuels, this energy use contributes to the ongoing climate crisis. As demand for genAI grows, this energy demand will increase, potentially doubling or tripling in the next few decades. More reading:
-data on AI electricity use by Pew Research: https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
-data on electricity use, by MIT news: https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Electricity bills: Even if you don’t care about the environmental cost of these facilities, they literally cost you money, even if you never use these companies’ products. The average monthly electricity bill for US households grew from $114 in 2014 to $142 in 2024, according to the US Energy Information Administration; that extra $28 a month works out to $336 more each year. While genAI isn’t the only reason electricity bills are increasing, utility grids must be upgraded to meet the energy demand of these facilities, and the costs of those upgrades are passed on to consumers. Improving the infrastructure for data centers is expected to add another $18 a month ($216 a year) to the average electric bill in Ohio. These costs are not paid only by people who use genAI, but by everyone who uses electricity (all of us). More reading:
-data on electricity price increases and AI, by Pew Research: https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
-New York Times article about how data centers are driving up electric bills: https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html
Data centers and water use: Running the computer servers inside these data centers generates a lot of heat as a byproduct. The machines are cooled by running fresh water through the facility. Approximately 80% of the water withdrawn by these centers evaporates, leaving only 20% to return to municipal wastewater facilities (and this doesn’t count the water required to cool electrical plants). A medium-sized data center uses as much water every year as ~1,000 homes, while the largest consume as much water as 50,000 homes. Because data centers are not distributed randomly or evenly, this is especially problematic in areas that already have limited water; by 2025, Phoenix, Arizona had 152 data centers despite the region’s scarce water supply. Researchers at the University of California estimated that each 100-word AI prompt uses roughly one bottle of water. More reading:
-data on AI and water consumption, by the Environmental and Energy Study Institute: https://www.eesi.org/articles/view/data-centers-and-water-consumption
Stolen data: Large language models trained on the internet are often trained on the entire internet, not just open-source and copyright-free materials. That includes your materials on the internet—the photos of your family, your musings, your posts, your art, your books, etc. Reporters have catalogued instances of genAI directly copying copyrighted text and images. When you “create” something using an AI prompt, you run the risk of copyright infringement, plus you are making an unethical choice to benefit from the work of others without compensating them (in other words, stealing). This is especially problematic for individual artists, who already struggle to make a living from their art. Even if those ethical conundrums don’t bother you, you should know that researchers have developed tools that protect copyrighted data by “poisoning” AI results with wrong answers, decreasing the likelihood that the answers you receive from genAI are correct or useful. More reading:
-Examples of stolen data compiled by Stanford University researchers: https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063
-How artists are dealing with AI theft: https://www.kpbs.org/news/arts-culture/2025/08/13/online-artists-take-on-ai-to-prevent-theft-of-their-work
-One artist (bird and nature artist Julia Bausenhardt) responding to AI stealing her art: https://juliabausenhardt.com/how-ai-is-stealing-your-art/
-Commentary on AI theft in The Guardian: https://www.theguardian.com/commentisfree/2025/sep/10/tech-companies-are-stealing-our-books-music-and-films-for-ai-its-brazen-theft-and-must-be-stopped
-Researchers poisoning AI results: http://theregister.com/2026/01/06/ai_data_pollution_defense/
Lost jobs: While preliminary results suggest genAI is not outperforming human workers in most industries, employers are still replacing real human workers with AI. British companies reported that AI led to net job losses (more jobs lost than gained) in 2025. These losses especially hit entry-level jobs held by early-career professionals. A 2025 Stanford University study found that, in AI-threatened fields like research, coding, and design, employment of workers aged 22-25 declined 13% since late 2022. Read more:
-Article on jobs lost in the UK due to AI: https://www.theguardian.com/technology/2026/jan/26/ai-uk-jobs-us-japan-germany-australia
-Stanford article about unemployment and AI: https://digitaleconomy.stanford.edu/app/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf
-Forbes article on AI erasing entry-level jobs: https://www.forbes.com/sites/geekgirlrising/2026/01/30/as-ai-erases-entry-level-jobs-colleges-must-rethink-their-purpose/
Lost skills: Recent research suggests that genAI may weaken our cognition and critical thinking. MIT neuroscience researchers showed that using ChatGPT led to lower brain connectivity during essay writing. According to researchers asking whether genAI accelerates skill decay and hinders skill development, these effects may not be recognized by users in real time (you don’t notice that you’re not thinking as well). And if we are replacing entry-level and training positions with AI tools (see the previous section), we are literally skipping skill development for our future workforce. Further reading:
-conversation from experts on whether AI is dulling our minds, Harvard Gazette: https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
-Time article about MIT research: https://time.com/7295195/ai-chatgpt-google-learning-school/
-review of research on AI and brain function, from New Scientist: https://www.cmu.edu/dietrich/sds/images/news/zach_new_scientist.pdf
-article about AI eroding our “mental grip”: https://www.cigionline.org/articles/the-silent-erosion-how-ais-helping-hand-weakens-our-mental-grip/
Lost trust and propaganda: As we are inundated with fake genAI images, videos, audio, and text, it becomes harder to trust what we see, hear, and read online. Between May and December 2023, there was a 1,000% increase in AI-generated false articles online (“fake news”). This misinformation can erode trust in experts, civic institutions, and our communities. Researchers argue that uncontrolled AI invites user exploitation by corporations (both the corporations producing the AI models, which can manipulate them to benefit their company, and the companies using AI for advertising). Most concerning, researchers argue that we are particularly susceptible to AI-generated political propaganda. The more we normalize AI in our everyday lives, the more we risk exposure to nefarious actors, on top of the general decline in trust across society. More reading:
-article on AI and mistrust, by Eleni Stephanides: https://medium.com/ai-ai-oh/how-ai-broke-the-internets-trust-0ac782e92870
-Washington Post about fake news and AI: https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
-Article on AI as a threat to democracy: https://odi.org/en/insights/has-ai-ushered-in-an-existential-crisis-of-trust-in-democracy/
-Harvard Kennedy School article on AI and Trust: https://www.belfercenter.org/publication/ai-and-trust
-Stanford University article on propaganda: https://hai.stanford.edu/news/disinformation-machine-how-susceptible-are-we-ai-propaganda
AI surveillance: Just as my small, controlled machine learning models let me quickly analyze hundreds of thousands of hours of wildlife video, large AI models can be used to quickly analyze videos, photos, and posts from people. The United States Department of Homeland Security (which includes ICE, Customs and Border Protection, the Secret Service, and more) is using AI to monitor social media and footage from security cameras (like Ring doorbell cameras). Companies are using AI to monitor their workers, often without permission or warning. Our use of genAI incentivizes companies to keep developing bigger, better AI tools that can then be used against us. More reading:
-How AI enables public surveillance: https://www.brookings.edu/articles/how-ai-can-enable-public-surveillance/
-AI and workplace surveillance: https://equitablegrowth.org/research-paper/workplace-surveillance-is-becoming-the-new-normal-for-u-s-workers/
Academic and professional dishonesty: AI cheating (academic misconduct) is on the rise in schools. Even if you don’t care about the fairness of grading when some students cheat with AI, this should concern you because it means that supposedly educated graduates (hired as your colleagues, or into important jobs that affect you) are not actually as educated as they appear. It extends beyond schooling, too; AI-generated misinformation has infected courtrooms, professional research, government health organizations, and more. Normalizing AI through casual posts and everyday use will lead to more cases of inappropriate use. More reading:
-thousands of UK university students caught cheating with AI: https://www.theguardian.com/education/2025/jun/15/thousands-of-uk-university-students-caught-cheating-using-ai-artificial-intelligence-survey
-judge fines lawyer over AI-generated submissions in case: https://www.reuters.com/legal/litigation/judge-fines-lawyers-12000-over-ai-generated-submissions-patent-case-2026-02-03/
-article about how AI threatens the practice of law: https://www.joneswalker.com/en/insights/blogs/ai-law-blog/from-enhancement-to-dependency-what-the-epidemic-of-ai-failures-in-law-means-for.html?id=102l04x
-AI-generated fake research cited in million-dollar report for Canadian government: https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/
Dead internet: The “dead internet” theory is a conspiracy theory holding that much (or most) of the activity in online spaces, especially social media, is not real people but activity generated by bots. For example, while Twitter/X claims that less than 5% of its users are bots, security experts have suggested the number is closer to 80%. GenAI tools make these bots easier to create and more efficient at manipulating the remaining users. Even if you don’t believe the conspiracy theory, genAI still poses a problem for the usefulness of the internet. Some estimates suggest a large share of the information on the internet is AI-generated (or at least AI-translated), with some researchers suggesting as much as 50%. If genAI models are still being trained on internet content, yet much of the internet is the output of genAI models, the models are creating a feedback loop, training themselves on their own output. That means any biases in these models will be amplified over time, corrupting the reliability of sources across much of the web (see the toy simulation after the reading list). More reading:
-article about bots on Twitter/X: https://brothke.medium.com/bots-are-spelling-the-demise-of-x-604e83a9b76b
-AI and dead internet theory: https://www.unsw.edu.au/newsroom/news/2024/05/-the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister
-how much of the internet is AI generated: https://futurism.com/artificial-intelligence/over-50-percent-internet-ai-slop
-information about the AI feedback loop: https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content
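To see why training on your own output goes wrong, here is a toy simulation of that feedback loop (researchers call the end state “model collapse”). It is grossly simplified: each “model” is just a normal distribution fitted to samples generated by the previous one, standing in for a genAI model trained on a web full of genAI output.

```python
# Toy simulation of the genAI feedback loop ("model collapse").
# Each "model" is just a normal distribution fitted to samples drawn
# from the previous model's output instead of from fresh real data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "real" data

for generation in range(1, 31):
    mean, std = data.mean(), data.std()          # "train" on current data
    data = rng.normal(mean, std, size=50)        # next "web" = model output
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f}, std={std:.3f}")

# With no fresh real data, each generation inherits and compounds the last
# one's sampling quirks, so the fitted parameters drift away from the true
# values (mean 0, std 1) instead of staying anchored to them.
```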
Bias in AI: The output of an AI model is only as good as its training data. Since many genAI models are trained on the internet, any biases on the internet get baked into the results of the model. As you might imagine, that means genAI output subtly (or explicitly) reinforces racism, sexism, ableism, anti-Queer bias, fatphobia, Christian-centric worldviews, and more. Look at the AI images and output you can create: what kinds of people and bodies are portrayed? More reading:
-racism in AI output: https://hai.stanford.edu/news/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes
-sexism in AI output: https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it
-ableism in AI output: https://www.psu.edu/news/information-sciences-and-technology/story/trained-ai-models-exhibit-learned-disability-bias-ist
-anti-Queer bias in AI output: https://viterbischool.usc.edu/news/2022/08/busting-anti-queer-bias-in-text-prediction/
-study finds fatphobia is fueled by AI-created images: https://now.fordham.edu/science-and-technology/fatphobia-is-fueled-by-ai-created-images-study-finds/
-AI is biased in favor of evangelical Christianity: https://www.premierchristianity.com/opinion/ai-is-biased-in-favour-of-us-evangelicalism-it-doesnt-have-the-mind-of-christ/20900.article
AI and confirmation bias: Studies show that not only are the data used to train genAI models biased, but the way we interact with genAI results is biased too. GenAI amplifies our own subtle personal biases, both in how we phrase our prompts and in how we interpret the results. This is especially problematic when we use genAI tools for research and decision-making, because we can cite genAI as a “source” for our own perspectives without realizing that we are simply asking the model to confirm what we want to hear. It is worse with models, like ChatGPT, that are specifically designed to be “agreeable”—to agree with us, by design. And it is most dangerous for users turning to genAI chatbots for mental health issues (like the growing trend of using ChatGPT for “free therapy”), as this confirmation bias and agreeability can worsen paranoia, delusions, and suicidality. Continued normalization of genAI tools could harm vulnerable individuals who may not know they are at risk until it is too late. More reading:
-Harvard Business Review article on the “framing effect” and confirmation bias: https://hbr.org/2026/01/when-ai-amplifies-the-biases-of-its-users
-AI and confirmation bias, specifically the agreeable AI: https://mediate.com/ai-and-confirmation-bias/
-Wikipedia article on “Chatbot psychosis”: https://en.wikipedia.org/wiki/Chatbot_psychosis
-health review on “AI Psychosis”: https://www.sciencealert.com/should-we-be-taking-reports-of-ai-psychosis-seriously-an-expert-explains
-review of “ChatGPT Psychosis” cases: https://futurism.com/commitment-jail-chatgpt-psychosis
Actual crimes: In addition to cases where chatbots have led people to self-harm and crime (see the “ChatGPT Psychosis” articles in the previous section), genAI is being used as a tool for criminal activity. This includes cybercrime, but also online harassment through deepfakes (fake videos/photos/audio of victims) and “digital undressing.” Recently, these tools have been used to target women and children and to create child sexual abuse material on X/Twitter. More reading:
-review of AI and serious online crime: https://cetas.turing.ac.uk/publications/ai-and-serious-online-crime
-FBI warning about AI and cybercrime: https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
-AI and deepfakes: https://buffett.northwestern.edu/documents/buffett-brief_the-rise-of-ai-and-deepfake-technology.pdf
-Forbes article on deepfakes and disinformation: https://www.forbes.com/sites/bernardmarr/2024/11/06/the-dark-side-of-ai-how-deepfakes-and-disinformation-are-becoming-a-billion-dollar-business-risk/
-UN report on how AI is amplifying violence against women: https://www.unwomen.org/en/articles/faqs/ai-powered-online-abuse-how-ai-is-amplifying-violence-against-women-and-what-can-stop-it
-“Grok is undressing women and children”: https://www.theguardian.com/commentisfree/2026/jan/09/grok-undressing-women-children-us-action
Propping up terrible companies: Because AI development is expensive, the market for AI is at risk of monopolization by a few big megacorporations (“Big Tech”). This has given these companies increasing power over political policy and consumers’ everyday lives, along with growing opportunities to manipulate consumers. For example, in December 2025 there were rumors that ChatGPT will begin to “prioritize” advertisers in conversations, rather than giving the best answer the model can produce. Will these companies hook us on AI tools and then raise the price, as has happened repeatedly with Google and Microsoft tools? Do we want these companies to have even more power? More reading:
-article on AI monopolies: https://www.techpolicy.press/ai-monopolies-are-coming-nows-the-time-to-stop-them/
-how genAI is giving Big Tech increased policy power: https://academic.oup.com/policyandsociety/article/44/1/52/7636223
-ChatGPT prioritizing advertisers: https://futurism.com/artificial-intelligence/openai-chatgpt-sponsored-ads
Recession masking and making: Even though US consumers are experiencing increased prices and costs of living, our current government says the economy is going great, in part because of the stock market. But much of the current stock pricing is propped up by AI speculation and circular investment (the “AI bubble”). Is genAI speculation masking genuine economic problems, allowing the government to neglect the real problems that people are facing? If the “AI bubble” bursts, will we see a market crash that leads to economic recession? More reading:
-World Economic Forum article about the “AI reckoning”: https://www.weforum.org/stories/2026/01/how-would-the-bursting-of-an-ai-bubble-actually-play-out/
-Wikipedia article about the AI Bubble: https://en.wikipedia.org/wiki/AI_bubble
-NPR article about the AI Bubble: https://www.npr.org/2025/11/23/nx-s1-5615410/ai-bubble-nvidia-openai-revenue-bust-data-centers
Benefits of genAI:
You get to burn electricity and water making disproportionate, uncanny-valley avatars, getting half-correct answers, being spied on by your apps, and getting faked out by AI slop! All while bad actors use genAI to manipulate you! Yay! (This section was a fake-out, sorry.)
What would make AI less harmful?
-Require the data centers to track the water used and to use water-efficient cooling techniques: https://www.eesi.org/articles/view/data-centers-and-water-consumption
-General AI regulation: https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence
-Timeline of AI regulation in the US (including attempts to BAN regulation by current governments): https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence_in_the_United_States
"But I'm just making a few goofy images for Facebook"
-Even a few images and posts have negative impacts: An analysis by MIT (link: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/) found that asking an AI chatbot a text question uses 114-6,707 Joules per response (the energy used riding 6-400 feet on an ebike). A single image required 4,402 J (250 feet on an ebike), and a 5-second video used 3.4 million J (equivalent to riding 38 miles on an ebike); see the unit-conversion sketch after this list. More complicated prompts and higher-quality images and videos take more energy. Researchers at the University of California estimated that each 100-word AI prompt uses roughly one bottle of water (https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/).
-Using genAI and sharing the results normalizes it for other users. Those users may not be as discerning as you, and neither may the people who interact with your posts. It is irresponsible!
-The more you use AI, the more money will be invested in it. That means “better” tools—tools that are more efficient at stealing others’ work, creating fake news, running surveillance, and abetting crime. Over a trillion dollars has already been spent on these harmful models; we don’t need any more investment in tools that will go to ICE or digital harassment campaigns!
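The ebike comparisons in the MIT figures above are just unit conversions; here is a quick sketch using only the numbers already quoted (the 38-mile video figure implies roughly 17 joules per foot of ebike riding):

```python
# Reproduce the ebike comparisons from the quoted MIT figures.
FEET_PER_MILE = 5_280

# The video figure (3.4 million J ~ 38 miles) implies ~17 J per ebike-foot.
joules_per_foot = 3_400_000 / (38 * FEET_PER_MILE)

for label, joules in [("text reply (low end)", 114),
                      ("text reply (high end)", 6_707),
                      ("single image", 4_402)]:
    print(f"{label}: {joules:>5} J ~ {joules / joules_per_foot:,.0f} ft by ebike")
# Close to the article's 6 ft, 400 ft, and 250 ft comparisons.
```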
Version first posted 4 February 2026