Artificial Intelligence: Using AI
Best Practice Top Tips (Anthropic, 2025)
Use of GenAI should always be undertaken judiciously and mindfully, with an awareness of both its appropriateness for the task and any guidelines or restrictions that may be imposed by module coordinators, Schools or Colleges. Below you will find best practice tips to help you determine how best to engage with GenAI tools.
[Please be advised that many AI tools capture data inputs for future training of the AI model. Be sure to review a tool's privacy and data protection policies before submitting personal, sensitive, or research data to AI tools.]
Transparency and Academic Integrity
Transparency is essential to ensure your work is not subject to a plagiarism investigation. You should never attempt to present AI-generated content as your own original work. When incorporating GenAI content, be sure to properly cite and reference that content according to the style designated by your school.
| Disclosure | Attribution |
|---|---|
| Always disclose your use of generative AI tools in your academic work, including the specific tools used and their role in your process | Cite AI tools following your school's designated citation style |
| Include this information in your methodology section or as a separate disclosure statement | Clearly distinguish between AI-generated content and your original work |
| Keep detailed records of your AI interactions, including prompts used and outputs received (a simple logging sketch follows this table) | Document any modifications made to AI-generated content |
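One lightweight way to keep those records is to append each interaction to a structured log file. Below is a minimal sketch in Python; the file name, the `log_interaction` helper, and the example values are illustrative, not part of any particular tool's API.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_interaction_log.jsonl")  # illustrative file name

def log_interaction(tool: str, prompt: str, output: str, notes: str = "") -> None:
    """Append one AI interaction to a JSON Lines log for later disclosure."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,            # model name and version you used
        "prompt": prompt,        # what you asked
        "output": output,        # what the tool returned
        "modifications": notes,  # how you edited the output, if at all
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example entry; the tool name and texts are placeholders.
log_interaction(
    tool="ExampleModel v1 (hypothetical)",
    prompt="Suggest an outline for an essay on data bias.",
    output="1. Introduction ...",
    notes="Reordered two sections and rewrote the introduction myself.",
)
```

A JSON Lines file like this is easy to quote from in a disclosure statement, since each line is one complete, timestamped record.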
Responsible Use
GenAI tools can create efficiencies across a wide variety of tasks. However, their use is not always appropriate, and in some cases it may be ethically unsound or prohibited by institutional policy. The table below contrasts recommended applications with unethical or prohibited uses of AI tools in academia.
| Recommended Applications | Unethical or Prohibited Uses |
|---|---|
| Research assistance | Direct submission of AI-generated content as your own work |
| Brainstorming and ideation | Using AI to complete closed-book exams or assessments |
| Writing feedback and revision suggestions | Generating false or fabricated research data (without a declared purpose or methodology statement) |
| Code debugging and explanation | Bypassing academic requirements designed to develop critical skills |
| Learning complex concepts through interactive discussion | |
| Data analysis and explanation | |
| Translation and language learning support | |
Quality Control and Verification
GenAI is known to hallucinate: it can present false information and made-up references, and its outputs can carry biases that may be misleading or damaging. Be sure to meticulously fact-check the output and verify any references it presents.
| Content Verification | Critical Evaluation |
|---|---|
| Fact-check all AI-generated information against reliable academic sources | Assess AI-generated content for potential biases or limitations |
| Cross-reference AI-provided citations and references | Consider multiple AI-generated perspectives or solutions |
| Verify mathematical calculations and statistical analyses | Apply domain expertise to evaluate the appropriateness of AI suggestions |
| Review code outputs for accuracy and efficiency | Document any identified errors or inaccuracies |
**To view an example of a flawed output from Perplexity, see the transcript of the prompt for the text on Sustainability in this guide, where some numbered citations in the text are not included in the conversation. These mistakes have been retained for the purposes of demonstration and transparency.**
Institutional Compliance
You should always be aware of any guidelines or policies that may govern the use of GenAI tools. Make sure you understand UCD's Academic Integrity Policy, as well as any college, school, or module guidelines that govern the use of GenAI in your coursework.
| Policy Adherence | Communication |
|---|---|
| Stay informed about your school's guidance and UCD policies on AI use | Maintain open dialogue with instructors about AI use |
| Consult with faculty about AI use in each of your modules | Seek clarification when policies are unclear |
| Follow department-specific guidelines | Participate in institutional discussions about AI policy |
| Report any concerns or violations appropriately | Share feedback about AI integration in academic work |
The content in this section was fully created with the assistance of Claude.ai. Selected sections of the output are transcribed here with a few minor modifications. Full transcript available for download below.
References (Harvard):
Ethics of Using Generative AI Tools (Anthropic, 2025)

The ethical implications of GenAI in academia extend far beyond traditional academic integrity concerns, touching on fundamental issues of power, access, and social justice. Core considerations include:
Power and Access:
- Concentration of control in large technology corporations
- Digital divide in access to AI tools
- Potential amplification of existing academic inequalities
Example: While wealthy institutions and students have full access to premium AI tools, others face significant barriers. This digital divide risks creating two-tiered educational systems that further disadvantage marginalized communities.
Environmental Impact:
- Significant energy consumption of AI training and deployment
- Water usage in data centers
- Environmental justice concerns regarding resource distribution
Example: AI companies may not be transparent about their actual environmental impact, reporting only direct energy usage while omitting associated infrastructure and cooling costs. A single data center can use millions of gallons of water daily for cooling, often in drought-prone areas.
Labor and Attribution:
- Appropriation of intellectual and creative work without compensation
- Changing nature of academic work and job security
- Hidden labor in AI implementation and oversight
Example: The training data for Large Language Models often incorporates content without explicit creator consent, raising questions about intellectual property rights and fair compensation.
Privacy and Data Rights:
- Surveillance potential in academic environments
- Data ownership and control
- Protection of vulnerable populations
Example: Students' early academic work in AI systems could potentially affect their future professional opportunities, depending on how algorithms process their private data.
Cultural and Linguistic Equity:
- Western knowledge systems being privileged
- Marginalization of indigenous and non-Western perspectives
- Language barriers in AI tool accessibility
Example: Indigenous oral traditions are underrepresented in AI training data.
Pedagogical Impact:
- Changes to learning processes and skill development
- Effects on critical thinking and creativity
- Assessment challenges in an AI-enabled environment
Example: Students using AI to generate essay outlines may skip the crucial brainstorming and organization learning process.
These issues operate both within and beyond academia, affecting broader societal structures, knowledge production, and power dynamics. The implementation of GenAI tools requires careful consideration of not just their immediate utility, but their wider societal implications and potential to either challenge or reinforce existing inequities.
The content in this section was fully created with the assistance of Claude.ai. Selected sections of the output are transcribed here with a few minor modifications. Full transcript available for download below.
References (Harvard):
Anthropic Claude.ai (2025) Claude response to Marta Bustillo, 11 February.
Midjourney v. 6.1. Response to "ethical AI, in the style of Van Gogh". AI generated image. Midjourney Inc., 12 March 2025.
Further Reading (Harvard):
AAAI Ethics and Diversity (2019): Association for the Advancement of Artificial Intelligence (AAAI). Available at: https://aaai.org/about-aaai/ethics-and-diversity/
Benjamin, R. (2019) Race after technology: abolitionist tools for the new Jim code. Cambridge, UK: Polity.
Crawford, K. (2021) The atlas of AI: power, politics, and the planetary costs of artificial intelligence. 1 edn. New Haven: Yale University Press.
D'Ignazio, C. and Klein, L. F. (2020) Data Feminism. The MIT Press.
Feenberg, A. (2002) Transforming Technology: A Critical Theory Revisited. Oxford: Oxford University Press. Available at: https://doi.org/10.1093/oso/9780195146158.001.0001.
Noble, S. U. (2018) Algorithms of oppression: how search engines reinforce racism. 1 edn. New York: New York University Press.
Selwyn, N. (2019) Should robots replace teachers?: AI and the future of education. Cambridge, UK; Medford, MA: Polity.
Winner, L. (1980) 'Do Artifacts Have Politics?', Daedalus (Cambridge, Mass.), 109(1), pp. 121-136.
What is Bias in AI? ("How can data bias")

Data bias represents one of the most significant challenges in artificial intelligence development, with far-reaching consequences for how AI systems function in real-world applications. When AI algorithms learn from biased data, they inevitably reproduce and sometimes amplify these biases in their outputs and decisions.
Consider facial recognition technology, which has demonstrated significantly higher error rates for women and people with darker skin tones. This occurs because these systems were predominantly trained using datasets that over-represented white male faces (Buolamwini & Gebru, 2018). The result is a technology that works effectively for some demographic groups while performing poorly for others.
In employment contexts, resume-screening AI tools trained on historical hiring data have shown bias against female candidates for technical roles. One company discovered their AI system downgraded resumes containing words like "women's" or graduates from women's colleges simply because these patterns were underrepresented in their previous "successful hire" data (Dastin, 2018).
Healthcare algorithms can perpetuate inequalities when trained on unrepresentative medical datasets. Research has revealed that algorithms used to allocate healthcare resources systematically underestimated the needs of Black patients compared to White patients with similar health conditions because they relied on historical healthcare spending data that reflected existing disparities in access to care (Obermeyer et al., 2019).
These biases arise from multiple sources: historical prejudices embedded in training data, sampling bias that excludes certain populations, measurement errors that affect particular groups disproportionately, and even seemingly neutral design choices that have unexpected discriminatory effects.
Addressing these challenges requires diverse teams working on AI development, rigorous data collection practices, algorithmic fairness techniques, and ongoing monitoring of AI systems after deployment. Without such interventions, AI technologies risk reinforcing existing social inequalities rather than helping to overcome them.
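To make the "ongoing monitoring" point concrete, here is a minimal sketch of one routine check: comparing a model's error rate across demographic groups on a labelled evaluation set. The records below are invented for illustration only; real monitoring would use your system's actual evaluation data.

```python
from collections import defaultdict

# Illustrative evaluation records: (group, true_label, predicted_label).
# All values are invented for demonstration purposes only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

# Per-group error rates; a large gap between groups is a signal to
# investigate the training data and design choices.
rates = {g: errors[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.33..., 'group_b': 0.66...}
print("disparity:", max(rates.values()) - min(rates.values()))
```

Disparity metrics like this are deliberately simple; they do not explain *why* a gap exists, but they flag where deeper auditing is needed.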
Watch the following video featuring former UCD researcher Abeba Birhane to find out more:
The content in this section was fully created with the assistance of Claude.ai. Selected sections of the output are transcribed here with a few minor modifications. Full transcript available for download below.
References (MLA):
Birhane, Abeba. "Battling Bias in AI." YouTube, uploaded by Lero Community, 1 Apr. 2022, https://youtu.be/5NIWGWD97YM?feature=shared.
Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018.
Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, 10 Oct. 2018, www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
"How can data bias impact AI outputs and algorithmic decisions" prompt. Claude.ai, 20 June 2024 version 3.5, Anthropic, 26 February 2025.
Obermeyer, Ziad, et al. "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations." Science, vol. 366, no. 6464, 2019, pp. 447-53.
LIFE IN THE AGE OF ALGORITHMS - Accessible Text version
- Data are everywhere, collected 24/7 and processed in real time and at vast scale.
- Platforms seek patterns in data to try to predict and influence behaviour. [Text in drawing: Likes cats > more cats > cat food 50% off!]
- and personalize news and advertising, based on guesses about individual preferences [Text in drawing: WSJ, NYT, Fox, Buzzfeed, Daily News, Breitbart - Google, Facebook, Twitter]
- Creating the potential for us to experience different realities [Text in drawing: Climate change is a flat-out hoax! / Will we survive climate change?]
- Our lives are increasingly influenced by decision-making systems that build on correlations [Text in drawing: Black box: job? yes; mortgage? No; prison sentence? 10 years]
- using “artificial intelligence” software that relies on incomplete data to make generalisations [Text in drawing: incomplete/biased data + machine learning = decision-making system that amplifies bias]
- and amplifies existing biases at large scale [Text in drawing: YES! NO!]
- information systems we depend on are shaped by a tech culture marked by narrow perspectives and overconfidence [Text in drawing: meritocracy! Limited perspective! Techno utopianism!]
- That rely on mining users’ data and manipulating their behavior
- In ways that undermine our trust in news, politics, and each other
[Creator] jessica yurkofsky (2019)
[Creative Commons license]: The Project Information Literacy (PIL) graphic “Life in the Age of Algorithms”, from the Information Literacy in the Age of Algorithms report, is licensed under Creative Commons CC BY-NC-SA.
Information about Generative AI Hallucinations (Consensus, 2025)

Definition and Causes
Generative AI hallucinations refer to instances where AI models produce inaccurate or false information that appears plausible. These hallucinations are primarily caused by biased training data, ambiguous prompts, and inaccurate model parameters, especially when combining mathematical facts with language-based context (Roychowdhury; Jančařík and Dušek; Nambiar and Sreedevi; Jesson et al.). The phenomenon is prevalent in large language models (LLMs) like OpenAI's ChatGPT, which can generate eloquent yet unreliable responses (Nambiar and Sreedevi; Jesson et al.).
Impact and Challenges
AI hallucinations pose significant challenges across various domains, including education, digital health, and financial decision-making. In education, hallucinations can mislead learners who may struggle to distinguish between correct and incorrect information (Jesson et al.). In digital health, hallucinations can compromise data privacy and lead to misinterpretations, affecting patient care (Templin et al.). In financial decision-making, hallucinations can undermine the reliability of AI solutions (Roychowdhury).
Mitigation Strategies
Several strategies have been proposed to mitigate AI hallucinations:
- Consensus Strategies: Utilizing multiple chatbot models to detect inconsistencies and improve response quality (Nambiar and Sreedevi); a minimal sketch of this idea follows the list.
- Retrieval-Augmented Generation (RAG): Enhancing structured outputs by integrating retrieval mechanisms to reduce hallucinations and improve generalization (Béchard and Ayala).
- Cultural Sensitivity Tests: Evaluating models with culturally nuanced tests to better understand and address hallucinations (McIntosh et al.).
- Incapacity Awareness: Improving models' ability to recognize their limitations and refrain from providing false answers (Hou and Ji).
- Error Communication: Implementing transparency in error communication to maintain user trust and satisfaction (Kim).
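As a rough illustration of the consensus strategy named above, the sketch below poses the same question to several models and only accepts an answer when a majority agree. The model callables are hypothetical stand-ins; a real implementation would wrap actual chatbot APIs and use more robust answer matching than plain string comparison.

```python
from collections import Counter
from typing import Callable

def normalise(text: str) -> str:
    """Crude answer normalisation; real systems need semantic matching."""
    return " ".join(text.lower().split())

def consensus_answer(question: str,
                     models: list[Callable[[str], str]],
                     threshold: float = 0.5) -> str | None:
    """Return the majority answer if enough models agree, else None."""
    answers = [normalise(ask(question)) for ask in models]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / len(answers) > threshold else None

# Hypothetical stand-ins for real chatbot API calls.
models = [lambda q: "Paris", lambda q: "paris", lambda q: "Lyon"]
print(consensus_answer("What is the capital of France?", models))  # "paris"
```

When the models disagree too much, returning `None` rather than any single answer is the point: disagreement itself is the hallucination signal.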
Conclusion
Generative AI hallucinations are a critical issue that affects the credibility and reliability of AI systems across various domains. Addressing this challenge requires a combination of technical solutions, such as consensus strategies and retrieval-augmented generation, as well as improvements in model awareness and error communication. These efforts are essential to enhance the trustworthiness and effectiveness of generative AI technologies.
The content in this section was fully created with the assistance of Consensus. Selected sections of the output are transcribed here with a few minor modifications. Full transcript available for download below.
References (MLA):
Béchard, Patrice, and Orlando Marquez Ayala. "Reducing Hallucination in Structured Outputs via Retrieval-Augmented Generation." 2024, pp. 228-38, doi:10.48550/arXiv.2404.08189.
"computer hallucinating about research" prompt. 20 February 2025 version, Adobe Inc., 18 March 2025, https://firefly.adobe.com/public/t2i?id=urn%3Aaaid%3Asc%3AEU%3Ae5ae157e-8d0a-401e-ba77-4572344d2d27&ff_channel=shared_link&ff_source=Text2Image
"Concise information about Generative AI Hallucinations" prompt. 20 February 2025 version. Consensus NLP, 26 February, https://consensus.app/results/?q=concise%20information%20about%20Generative%20AI%20hallucinations
Hou, Wenpin, and Zhicheng Ji. "GeneTuring Tests GPT Models in Genomics." bioRxiv, 2023, doi:10.1101/2023.03.11.532238.
Jančařík, Antonín, and Ondřej Dušek. "The Problem of AI Hallucination and How to Solve It." European Conference on e-Learning, 2024, doi:10.34190/ecel.23.1.2584.
Jesson, A., et al. "Estimating the Hallucination Rate of Generative AI." ArXiv, vol. abs/2406.07457, 2024, doi:10.48550/arXiv.2406.07457.
Kim, Hayoen. "Investigating the Effects of Generative-AI Responses on User Experience after AI Hallucination." Proceedings of Social Science and Humanities Research Association (SSHRA), 2024, doi:10.20319/icssh.2024.92101.
McIntosh, Timothy, et al. "A Culturally Sensitive Test to Evaluate Nuanced GPT Hallucination." IEEE Transactions on Artificial Intelligence, vol. 5, 2024, pp. 2739-51, doi:10.1109/TAI.2023.3332837.
Mukherjee, A., and Hannah Chang. "The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff." ArXiv, vol. abs/2306.03601, 2023, doi:10.48550/arXiv.2306.03601.
Nambiar, Jyothika Prakash, and A. Sreedevi. "Orchestrating Consensus Strategies to Counter AI Hallucination in Generative Chatbots." 2023 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), 2023, pp. 148-52, doi:10.1109/CCEM60455.2023.00030.
Roychowdhury, Sohini. "Journey of Hallucination-Minimized Generative AI Solutions for Financial Decision Makers." Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 2023, doi:10.1145/3616855.3635737.
Templin, Tara, et al. "Addressing 6 Challenges in Generative AI for Digital Health: A Scoping Review." PLOS Digital Health, vol. 3, 2024, doi:10.1371/journal.pdig.0000503.
Useful tools for checking AI references that may be hallucinations (an example query against OpenAlex follows this list):
- OpenAlex: a bibliographic catalogue of scientific papers, authors and institutions, accessible in open access mode and named after the Library of Alexandria.
- Dimensions AI: hosts the largest collection of interconnected global research data, including over 70% of publications with full-text indexing.
- Lens.org: The Lens serves integrated scholarly and patent knowledge as a public good to inform science and technology enabled problem solving.
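As an example of putting one of these tools to work, the sketch below queries the public OpenAlex API for a title an AI tool has cited. The endpoint and the `results`/`title`/`doi` fields follow OpenAlex's documented API, but treat the simple title search as a starting point: a hallucinated reference may still partially match real works, so inspect the results yourself.

```python
import requests  # third-party: pip install requests

def check_reference(title: str) -> None:
    """Search OpenAlex for a cited title and print the closest matches."""
    resp = requests.get(
        "https://api.openalex.org/works",
        params={"search": title, "per-page": 5},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        print("No match found - the reference may be hallucinated.")
    for work in results:
        print(work.get("title"), "|", work.get("doi"))

# A real paper cited elsewhere in this guide, as a sanity check.
check_reference("Dissecting racial bias in an algorithm used to manage "
                "the health of populations")
```

An empty result list is not proof of hallucination (coverage gaps exist), but a title, author list, and DOI that all check out is strong evidence the reference is real.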
Overview

AI, especially generative AI, has taken the world of academia by storm and is forcing everyone involved into some form of action. Copyright has not been spared this generative AI revolution: everyone has had to stop and consider what it means for copyright, copyright laws and legislation. Copyright laws in different jurisdictions have long covered all manner of created works, including computer-generated works. However, the question arises whether LLM-generated works can be classified as computer-generated works and whether existing copyright laws fully apply to AI.
Copyright Ownership in AI-generated works
For one to claim copyright over a created work, they must have contributed a significant amount of work to its creation. If one enters a prompt into a Large Language Model and it creates a document or image on their behalf without further input from them, they cannot claim copyright over the result. However, if AI is used as a tool, for example to edit or proofread an already created piece of content, the creator retains copyright over the content. A significant amount of work must be contributed by a human, with GenAI only assisting, for that work to fall under copyright: human authorship is key to making AI-assisted work copyrightable. Creators need to be mindful of how they use AI when creating works. AI should be used as a tool to enhance work created by a human, not as a creator of work that bypasses human critical thinking.
Infringement – how to use copyrighted works
Acts of reproduction (even temporary ones) may infringe copyright if they do not fall under exceptions such as the mandatory temporary copying exception under Article 5(1) of the Copyright and Information Society Directive. There are also questions around copyright infringement when training LLMs: the content used to train them may be copyrighted, and LLM developers are encouraged to follow copyright rules when doing so. Many publishers do not allow researchers to use their copyrighted subscription content for text and data mining under the library's subscription agreements. Please check with individual publishers and/or individual database terms and conditions to confirm that text and data mining is permitted before you proceed.
Orphan Works and Out-of-Commerce Works: Legal challenges arise with works where the rights holder cannot be identified or located, complicating rights clearance for Text and Data Mining (TDM).
Exceptions
Because traditional copyright laws do not fully cater for AI, mainly because most were enacted before the existence of generative AI, some jurisdictions have put exceptions in place to allow for responsible use and development of AI and to minimise instances of copyright infringement. These exceptions, known as TDM exceptions, work in a similar way to principles like fair use, fair dealing and other copyright exceptions. They prevent copyright law from unintentionally blocking AI progress while encouraging responsible use. Without them, training AI models would be legally risky, expensive, or monopolised by a few players.
TDM Exceptions in different jurisdictions
IRELAND & EU
- Mandatory TDM Exception: The proposed directive includes a mandatory exception allowing research organizations to perform TDM on works they have lawful access to, for scientific research purposes.
- The exception is limited to research organizations (e.g., universities, research institutes) and does not extend to commercial entities.
Comparison with Other Jurisdictions
- U.S. Fair Use Doctrine: The U.S. approach is more flexible, mainly because of its fair use doctrine, which permits TDM on copyrighted material after weighing factors like the purpose of the use and whether it adds value to the original work.
- UK TDM Exception: The UK has a specific exception for TDM for non-commercial research, allowing lawful access to works for computational analysis.
References (APA):
European Parliament, Directorate-General for Internal Policies of the Union, & Rosati, E. (2018). The exception for text and data mining (TDM) in the proposed Directive on Copyright in the Digital Single Market – Technical aspects. European Parliament. https://doi.org/10.2861/480649
OpenArt AI. (2025). OpenArt (Flux.1) [AI image generator]. https://openart.ai/
Peukert, A. (2024). Copyright in the artificial intelligence act – A primer. GRUR International (Print), 73(6), 497-509. https://doi.org/10.1093/grurint/ikae057
Quintais, J. P. (2025). Generative AI, copyright and the AI Act. Computer Law & Security Review, 56, 106107. https://doi.org/10.1016/j.clsr.2025.106107
U.S. Copyright Office (2025). Artificial Intelligence Study. U.S. Copyright Office. https://www.copyright.gov/policy/artificial-intelligence/
The Environmental Impact of Generative AI in Academic and Professional Settings [9]

Generative AI (GenAI) has rapidly become a powerful tool in academic and professional environments, but its widespread adoption has significant environmental consequences. The primary environmental impacts of GenAI stem from its substantial energy consumption and associated carbon emissions, as well as its water usage and e-waste generation.
Energy consumption is a major concern, as training and deploying GenAI models requires immense computational power. For instance, training a single large language model can consume thousands of megawatt-hours of electricity, equivalent to the annual electricity consumption of several hundred Irish households. In Ireland, the median residential electricity consumption was approximately 3,174 kWh in 2023 [2]. This demand is expected to grow exponentially, potentially contributing significantly to Ireland's total electricity consumption, which reached 31.5 TWh in 2023 [4].
Water consumption is another critical issue. GenAI systems require substantial amounts of water for cooling data centers, which can exacerbate water stress in already vulnerable regions. This impact is particularly concerning in areas prone to drought.
Carbon Emissions: The electricity consumption of GenAI directly contributes to carbon dioxide emissions. For instance, the training process of a single large language model can emit as much as 284 tonnes of CO2. In addition, the energy used for model inferencing** has been found to have a greater impact on emissions than model training or customization. This impact is expected to grow, with the Information and Communications Technology industry potentially accounting for 14% of global emissions by 2040.
The rapid growth of GenAI also contributes to e-waste generation. As the technology advances, older hardware becomes obsolete more quickly, leading to an increase in electronic waste.
To mitigate these environmental impacts, researchers and companies are exploring solutions such as using more renewable energy, implementing sustainable data center construction, and optimizing computation scheduling. In Ireland, where wind power accounts for a significant portion of electricity generation [6], integrating GenAI with renewable energy sources could help reduce its carbon footprint.
As students and staff engage with GenAI tools, it's important to be aware of these environmental implications and to use these technologies responsibly. Consider the necessity of each use and explore ways to minimize the environmental footprint of GenAI applications in academic and professional settings.
**Model inferencing is the process of using a trained LLM to generate responses or make predictions based on new input data.
Sustainability Information about the creation of this text, according to Perplexity.ai
Electricity used to generate this chat: Approximately 0.0003 kWh
Water consumed: Approximately 0.3 mL
CO2 emissions: Approximately 0.15 g
To put this in perspective, the energy used is roughly equivalent to running a 60W light bulb for 18 seconds. The water consumed is about 1/15th of a teaspoon. The CO2 emissions are equivalent to driving an average car for about 0.6 meters (2 feet). While these numbers may seem small, they highlight how even brief interactions with AI can have cumulative environmental impacts when scaled to millions of users.
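Those equivalences are straightforward arithmetic. The sketch below reproduces them from the figures quoted in this section; the 250 g CO2 per km car factor and the 1,000 MWh training figure are round assumptions for illustration, not measured values.

```python
# Figures quoted above (Perplexity's estimates for this single chat).
energy_kwh = 0.0003   # electricity used
co2_g = 0.15          # CO2 emitted
household_kwh = 3174  # median Irish residential consumption, 2023 [2]

# 60 W bulb: seconds it would run on the same energy.
bulb_seconds = energy_kwh * 1000 / 60 * 3600
print(f"60 W bulb equivalent: {bulb_seconds:.0f} s")  # ~18 s

# Car: metres driven, assuming roughly 250 g CO2 per km.
car_metres = co2_g / 250 * 1000
print(f"Car equivalent: {car_metres:.1f} m")          # ~0.6 m

# Scale check: a 1,000 MWh training run in Irish-household-years.
training_mwh = 1000  # illustrative lower bound for "thousands of MWh"
print(f"Households: {training_mwh * 1000 / household_kwh:.0f}")  # ~315
```

Multiplying the per-chat figures by millions of daily users is what turns these apparently trivial quantities into the aggregate impacts discussed above.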
References (Chicago):
1. Central Statistics Office. “Metered electricity consumption 2021.” Online ISSN: 2712-018X, Dublin, May 3, 2022. https://www.cso.ie/en/releasesandpublications/ep/p-mec/meteredelectricityconsumption2021/
2. Central Statistics Office. “Metered electricity consumption 2023. Key findings.” Online ISSN: 2712-018X, Dublin, July 23, 2024. https://www.cso.ie/en/releasesandpublications/ep/p-mec/meteredelectricityconsumption2023/keyfindings/
3. Social Justice Ireland. “Metered electricity consumption 2023." August 12, 2024. https://www.socialjustice.ie/article/metered-electricity-consumption-2023
4. Enerdata. "Ireland energy information." Accessed February 24, 2025. https://www.enerdata.net/estore/energy-market/ireland/
5. Sustainable Energy Authority of Ireland. "Energy use overview." Accessed February 24, 2025. https://www.seai.ie/data-and-insights/seai-statistics/energy-use-overview
6. International Energy Authority. "Ireland. Electricity." Accessed February 24, 2025. https://www.iea.org/countries/ireland/electricity
7. EirGrid. "Real time system information." Accessed February 24, 2025. https://www.eirgrid.ie/grid/real-time-system-information
8. Wikipedia. "Energy in Ireland." Accessed February 24, 2025. https://en.wikipedia.org/wiki/Energy_in_Ireland
9. Perplexity AI Assistant (2025) Perplexity response to Marta Bustillo, 24 February.
10. Midjourney Inc. (2025) Midjourney response to Marta Bustillo. AI generated image. 12 March 2025
The content in this section was fully created with the assistance of Perplexity AI Assistant. Selected sections of the output are transcribed here with a few minor modifications. Full transcript available for download below. Note that there are referencing mistakes in this text, with some numbered citations not included in the conversation. The mistakes have been left for the purposes of transparency.
- Perplexity AI output: Sustainability of GenAI tool use, 24-02-2025. Full conversation about AI sustainability with two different prompts and a follow-up question about the Irish context. Includes an example of the prompt engineering framework used.
Further Reading:
Heikkilä, M. (2023, December 5). AI’s carbon footprint is bigger than you think. MIT Technology Review. https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/
Hosseini, M., Gao, P., & Vivas-Valencia, C. (2025). A social-environmental impact perspective of generative artificial intelligence. Environmental Science and Ecotechnology, 23, 100520. https://doi.org/10.1016/j.ese.2024.100520
IEA (2025), Energy and AI, IEA, Paris https://www.iea.org/reports/energy-and-ai, Licence: CC BY 4.0
Luccioni, A. S., Jernite, Y., & Strubell, E. (2023). Power hungry processing: Watts driving the cost of AI deployment? [Paper presentation]. ACM Conference on Fairness, Accountability, and Transparency (FAccT 2024), June 2024, Rio de Janeiro, Brazil. https://go.exlibris.link/7Yg9hzDm
Manhibi, R. and Tarisayi, K. (2024) 'The Precarious Pirouette: Artificial Intelligence and Environmental Sustainability', Acta Infologica, 8(1), p. 51.
Ren, S. (2023, November 30). How much water does AI consume? The public deserves to know. OECD.AI Policy Observatory. https://oecd.ai/en/wonk/how-much-water-does-ai-consume
Ren, S. and Wierman, A. (2024, July 15) The uneven distribution of AI's environmental impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
United Nations Environment Programme (2024, September 21). AI has an environmental problem. Here's what the world can do about that. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License