Generative AI Statement
This statement regarding the use of GenAI comes from the Global Health Studies Program faculty. Use of GenAI in the classes of GHS faculty is left to the individual faculty; please see your course syllabi.
Goals of GHS Learning Environment
Often there is no one solution for the issues we discuss in Global Health Studies. This is why so many of you thrive in our courses – because you seek to understand real-world challenges and to craft solutions rooted in ethics, theory, history, and science, while centering diverse worldviews.
Our courses center on building and nurturing critical thinking, listening, and reasoning skills. GHS students learn to ask thoughtful questions and to create, frame, and support their arguments. You are often expected to find and use meaningful and relevant research. When reading and discussing, you are challenged to listen to and learn from multiple perspectives. These skills are intrinsic to being an engaged citizen and necessary to being an ethical participant in all engagements with health.
We realize GenAI can, in some limited and very focused ways, support these skills. However, as a program which cares deeply about equity, social justice, and human, as well as planetary, well-being and flourishing, we are deeply troubled by the embrace of, and push toward, GenAI.
Among our concerns:
- GenAI is terrible for the environment[1]: it uses massive amounts of energy and drinking water[2]; a Google search with GenAI, for example, uses 10 to 30 times more energy than a traditional search[3]
- GenAI is terrible for local communities[4] when data centers arrive, as energy bills shoot up[5], water use increases, and people are left without drinkable water
- GenAI is terrible for humanist values and policy
  - GenAI technology is controlled by a small number of corporations who do not have the best interests of anyone but themselves in mind, and the mass uptake of GenAI could accelerate our already deeply unequal world[6]
We realize GenAI can be useful for some very specific academic work; for example, some collaborations are using GenAI to preserve fragile languages[7]. There are people, then, who are using GenAI in a narrow and judicious manner to enhance learning, to build critical thinking skills, and to preserve knowledge.
However, we in GHS do not find the general use of GenAI applicable to our learning objectives for GHS students, and we are concerned by growing evidence that the use of AI is leading to a de-skilling of professions and of critical thought.[8] Our three learning objectives are to 1) Develop a Strong Intellectual Foundation for Understanding Complex Global Health Problems, 2) Develop and Apply Understanding of Historical and Ethical Issues in Global Health, and 3) Develop Interdisciplinary Skills in Research, Critical Thinking, and Communication. We want our students to leave our program having developed advanced skills in communication and expression, from writing to speaking, and with the ability to apply these skills to the analysis of global public health challenges and to collaborations with communities, organizations, companies, or agencies whose priorities, knowledge, or interests may differ from their own. The general use of GenAI is antithetical to these learning objectives.
As people deeply engaged with thinking, learning, and teaching, we have specific concerns about the use of, and the push to use, GenAI in academia:[9]
- GenAI is plagiarism
  - When we ask a question using GenAI, it takes information from sources and manipulates it to obscure its origins; though GenAI companies argue this is not plagiarism, taking information and presenting it as one's own is, by definition, plagiarism
- GenAI is terrible for intellectual property
  - Most GenAI platforms do not provide users with the sources from which they capture information; in doing so, GenAI steals vast quantities of intellectual property from the people who wrote the books, articles, songs, poems, etc., from which its responses are drawn
- GenAI is terrible for knowledge production
  - Academics build upon each other's work to create new knowledge; by failing to attribute sources, GenAI disrupts this process
- GenAI can be terrible for learning
  - GenAI encourages students to take shortcuts instead of practicing skills such as critical reading and writing, which can also undercut students' abilities to generate their own ideas or to articulate those ideas with their own voice to wider audiences
- GenAI is terrible for critical thinking and responsible research: it makes up citations[10], and current research suggests that people who ask GenAI questions do not then click through to websites that might enable them to form their own opinions[11]; this includes questions of a political nature[12]
As concerned GHS faculty, we pledge to:
- Intentionally refuse to use GenAI to grade the assignments we give.
- Intentionally refuse to use GenAI to plan our lessons or lectures.
- If one of us does use GenAI, we will use it narrowly as a tool, and we will be transparent with our students about when and how we have used it.
[1] Jose Pablo Ortiz Partida, “What are the Environmental Impacts of Artificial Intelligence?” Union of Concerned Scientists, June 25, 2025.
[2] “How AI Uses our Drinking Water,” BBC World Service (2025).
[3] Allison Parshall, “What Do Google’s AI Answers Cost the Environment?” Scientific American (June 11, 2024).
[4] Eli Tan, “Their Water Taps Ran Dry When Meta Built Next Door,” The New York Times (July 14, 2025).
[5] Ivan Penn and Karen Weise, “Big Tech’s A.I. Data Centers are Driving Up Electricity Bills for Everyone,” The New York Times (August 14, 2025).
[6] Jennifer Harris, “We Are Witnessing the Rise of a New Aristocracy,” The New York Times (April 8, 2026).
[7] See, among other sources, a discussion in Karen Hao’s epilogue regarding some limited areas where GenAI is narrowly being used in a manner to preserve knowledge and enhance learning. Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin, 2025).
[8] Sylvie Delacroix, “The Hidden Costs of ‘Helpful’ AI,” Nature 652 (April 2, 2026), 9.
[9] These five points are a summary from Ulises A. Mejias, “Artificial Intelligence as a Threat to Academic Labor,” Academe (Winter 2026), 22-27; please see this article for a longer and richer discussion of these five points.
[10] Miryam Naddaf and Elizabeth Quill, “Hallucinated Citations are Polluting the Scientific Literature. What Can be Done?” Nature 652 (April 2, 2026), 26-27.
[11] Valerie Wirtschafter and Nitya Nadgir, “Is the Politicization of Generative AI Inevitable?” Brookings Institution (October 16, 2025).
[12] Yifei Liu, Yuang Panwang, and Chao Gu, “‘Turning Right’? An Experimental Study on the Political Value Shift in Large Language Models,” Humanities and Social Sciences Communications 12 (2025), https://doi.org/10.1057/s41599-025-04465-z