OpenAI and its ChatGPT and GPT-4 products have taken the world by storm since launching in November 2022 and March 2023, respectively! There has been an explosion of interest, hype, and information since then. To avoid adding to the maelstrom of redundant content on this topic, here at MPower Partners we will focus on a three-part blog series targeting the ESG-specific dimensions of generative AI. Today we bring you Part 1: Beyond the Hype: Understanding the Ethical and Environmental Implications of Generative AI.
Generative AI, also known as creative AI or machine creativity, refers to the use of algorithms and machine learning techniques to generate new content. GPT stands for Generative Pre-trained Transformer. In layperson's terms, GPT is a type of artificial intelligence system trained to generate text, images, or other types of data. The “pre-trained” portion of the name means the system has been trained on large amounts of data before being applied to a specific task. The “transformer” portion refers to the system’s architecture, which allows it to process and understand long sequences of data; this makes it particularly useful for tasks that involve understanding and generating human language, such as chatbots or language translation. The “generative” part of the name means that the system can produce new, original content based on the patterns it has learned from its training data. For example, a GPT system trained on a large corpus of books could generate new passages of text that sound like they were written by a human, even in the voice of a particular author!
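The core loop of “learn patterns from data, then predict the next token” can be illustrated with a deliberately tiny sketch. This is not a transformer (real GPT models use attention over billions of parameters); it is a toy bigram model we made up for illustration, but the train-then-generate structure is the same idea:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """'Pre-training': count which word tends to follow each word."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """'Generation': repeatedly emit the most likely next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned continuation for this word
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the", length=4))  # → "the cat sat on the"
```

A real GPT differs in scale and architecture, but the output above shows the same principle: the “new” text is assembled from statistical patterns in the training corpus.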
Clearly, generative AI technology such as ChatGPT has the potential to create significant benefits across a range of industries, including:
- Healthcare: can help improve patient outcomes by generating synthetic data (artificially generated data) for medical research, allowing researchers to train machine learning models without compromising patient privacy.
- Creative industries: can be used to create new and innovative content in various creative fields. For instance, the music industry can use generative AI to create new songs, while the fashion industry can use it to create new designs.
- Gaming: can be used to create more realistic and immersive gaming experiences. For example, AI can generate dynamic and responsive in-game environments that adapt to the player’s behavior.
- Retail: can help retailers optimize their supply chains by generating demand forecasts, predicting consumer trends, and optimizing inventory management.
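The healthcare example above, synthetic data that mimics real patients without copying any of them, can be sketched in a few lines. Real systems use generative adversarial networks or similar models; this simplified stand-in just fits a distribution to (hypothetical) measurements and samples new records from it:

```python
import random
import statistics

# Hypothetical "real" patient heart-rate measurements (invented for illustration).
real_heart_rates = [72, 68, 80, 75, 66, 90, 71, 77]

# Fit a simple distribution to the real data.
mu = statistics.mean(real_heart_rates)
sigma = statistics.stdev(real_heart_rates)

random.seed(0)  # fixed seed so the sketch is reproducible
# Sample synthetic records: they share the real data's statistics,
# but no value belongs to any actual patient.
synthetic = [round(random.gauss(mu, sigma), 1) for _ in range(5)]
print(synthetic)
```

Researchers can then train models on the synthetic records, which is why this technique is attractive wherever the original data is privacy-sensitive.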
These are just a few examples of the potential benefits of generative AI. By leveraging this technology, organizations can streamline their operations, enhance customer experiences, and drive growth. However, generative AI also raises concerns around environmental, social, and governance (ESG) issues.
From an environmental perspective, the use of generative AI can have a significant impact on energy consumption and carbon emissions. The computational power required for generative AI models is typically supplied by large data centers, which consume vast amounts of energy. According to a report by the International Energy Agency, data centers worldwide consumed approximately 205 terawatt-hours (TWh) of electricity in 2018, about 1% of global electricity consumption. The report also projected that data center electricity demand will increase by another 50% by 2030. These data centers emit greenhouse gases, contributing to climate change. In terms of carbon emissions, the same report estimated that data centers were responsible for about 0.3% of global CO2 emissions in 2018, roughly equivalent to the total emissions of the airline industry. While it’s difficult to precisely quantify the energy consumption and carbon emissions attributable specifically to generative AI, it’s clear that as the use of AI technologies grows, so too will their environmental impact. There are, however, opportunities to address these concerns by investing in more sustainable computing infrastructure, such as renewable-energy-powered data centers, and by reducing the size of generative AI models.
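As a quick sanity check on the figures above: 205 TWh against total global electricity consumption does come out near 1%, and the projected 50% growth is straightforward to compute. The global total below is our own assumed round number for 2018, not a figure from the cited report:

```python
data_center_twh = 205   # IEA figure for data centers, 2018 (cited above)
global_twh = 22_000     # assumed global electricity consumption, 2018

share = data_center_twh / global_twh * 100
print(f"Data centers: {share:.1f}% of global electricity")  # ~0.9%, i.e. "about 1%"

# The cited projection: demand grows another 50% by 2030.
projected_2030 = data_center_twh * 1.5
print(f"Projected 2030 demand: {projected_2030:.0f} TWh")
```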
An additional ESG concern is that generative AI models can be trained on biased data, which can itself compound negative environmental impacts. For instance, if the training data for a generative AI model is sourced mainly from high-emissions industries, the model may be biased toward creating content that perpetuates those industries, potentially leading to increased carbon emissions and other negative environmental outcomes. Training on biased data also raises equity and fairness concerns: a model trained on biased data may create content that perpetuates existing inequalities or stereotypes, with negative social and economic impacts, particularly for marginalized communities. To address these issues, it is essential to develop responsible AI frameworks that ensure fairness, equity, and transparency in generative AI.
From a social perspective, generative AI can be used to create deepfakes: fake images or videos manipulated to show people saying or doing things they never did. These make it difficult to distinguish between what is real and what is not, and could lead to significant social consequences, such as the spread of misinformation and propaganda. Deepfakes can be used for malicious purposes, such as spreading false information about political candidates or inciting violence. Therefore, it is crucial to consider the ethical implications of generative AI from a social perspective.
Finally, from a governance perspective, the use of generative AI raises ethical questions around the ownership and control of generated content. Who owns the copyright of an image or text generated by an AI model? Who is responsible if a deepfake is used to defame or harm someone? These questions require careful consideration and legal frameworks to ensure fair and responsible use of generative AI technology, and they highlight the importance of responsible data usage and security measures in its development and deployment.
We hope we have highlighted several of the benefits we see in generative AI technologies like ChatGPT, while also addressing important ESG considerations and concerns. Overall, while generative AI has significant potential, it’s important to approach this technology with caution and consideration for ESG concerns. By doing so, we can leverage its benefits in a way that is sustainable and socially responsible. In our next post we will explore this final point further.