Key Considerations in the Age of Autonomous Content

With all the hubbub surrounding generative artificial intelligence (AI), there are an increasing number of unanswered questions about how to implement this transformative technology responsibly. This blog will review the European Union (EU) AI ethics guidelines and discuss key considerations for implementing an AI ethics framework when large language models (LLMs) are used.

Ethics Guidelines for Trustworthy AI

On 8th April 2019, the European Union put into practice a framework for the ethical and responsible use of artificial intelligence (AI). The report defines three guiding principles for building trustworthy AI:

  1. Lawful: AI should adhere to the rule of law and location regulations.
  2. Ethical: The AI system should be ethical and abide by ethical principles and values.
  3. Robust: Because AI can do significant harm to large populations in a short amount of time, it needs to be technically and socially robust.

For multinational corporations, this raises an interesting question of how they should apply this framework across geopolitical boundaries since what is considered lawful and ethical in one region of the world may not be in another. Many companies take the most stringent regulations and apply that unilaterally across all geographies. However, a “one-size-fits-most” approach may not be appropriate or acceptable.

The EU’s framework can be seen below in Figure 1.1.

Figure 1.1: AI Ethics Framework from the European Union

Based on the three foundational principles, four ethical principles and seven key requirements result. The ethical principles include:

  1. Respect for Human Autonomy: This principle emphasizes that humans should maintain control and freedom in their interactions with AI. “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans.”[1] Fundamentally, AI should support human participation in democratic processes. We’ve seen some countries implement “social scoring” on its citizens, which should be cause for concern.
  2. Prevention of Harm: AI systems should not cause physical, mental or emotional harm. Given the pervasiveness and rapid impact of AI, it’s important that AI outputs are closely monitored to prevent the inadvertent manipulation of citizens, employees, businesses, consumers and governments “due to asymetries of power or information.”[2] We’ve seen autonomous vehicle makers wrestle with this principle in what’s known as the AI Trolley Problem. Of course, this is not limited to robotic systems; people are relying on ChatGPT for medical advice and given its propensity to make things up, we need to be careful.
  3. Fairness: AI systems should be unbiased and non-discriminatory, aiming for “an equal distribution of benefits and costs.”[3] Fairness implies that human choices should not be undermined and that “AI practitioners should balance competing interests and objectives, respecting the principle of proportionality between means and ends.”[4] On the surface, this seems straight forward, but did you know there are over twenty mathematical definitions of fairness?[5]
  4. Explicability: AI systems need to be transparent, auditable, reproducible and interpretable. If AI is used to decide something that impacts you, you have a have the right to an explanation as to how that decision was made by the algorithm. For example, if you are denied credit, the operator of that AI system should be able to provide you with all of the factors that contributed to the decision. This can be problematic when “black-box” models are used — like the neural networks and general adversarial networks (GANs) that are the foundation for many LLMs.

This leads us to the seven requirements:

  1. Human Agency and Oversight: Essentially, this requirement states that AI systems should respect human rights and they should not operate entirely autonomously. AI should augment, not replace, human decisions. There should be a process for challenging AI decisions, and a human should be able to override AI decisions when necessary. This sounds nice, but when hundreds and thousands of decisions are made automatically, how can you effectively track all of them to make sure things don’t go awry?
  2. Technical Robustness and Safety: AI systems need to be secure, robust‌ and resilient against bad actors and cyber attacks. They should provide accurate predictions that are reliable and reproducible. Organizations must prioritize cybersecurity and have contingency plans for attacks and how to operate if the system goes offline. They need to pay special attention to adversarial data poisoning, where malicious actors alter training data to cause incorrect predictions.
  3. Privacy and Governance: “AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle.”[6] Developers of AI systems need to put safeguards in place to prevent malicious data or code from being fed into the system. The guidelines also emphasize that only authorized users should access an individual’s data, which must be fair, unbiased, and follow all privacy regulations throughout its lifecycle. One area that organizations need to think about is what constitutes an “authorized user”? Did you see the case of Roomba taking pictures of a woman on the commode which ended up on Facebook?
  4. Transparency: Organizations must be able to trace data lineage, understanding its source, how it was collected, transformed‌ and used. This process should be auditable, and the AI outputs should be explainable. This poses a challenge for data scientists because oftentimes, explainable models are often less accurate than “black-box” algorithms. This requirement also states that people interacting with AI should be aware that they are doing so — in other words, AI shouldn’t pretend to be human and it should be clear that we’re interacting with a bot.
  5. Diversity, Non-discrimination‌ and Fairness: AI should treat all groups equally — which may be easier said and done. The requirement suggests that designers should include people from diverse cultures, experiences‌ and backgrounds to help mitigate some of the historic bias that pervades many cultures. AI should be accessible to everyone, regardless of disability or other factors. This begs the question, what defines a “group”? There’s the obvious protected classes – age, race, color, region/creed, national origin, sex, age, physical or mental disability‌ or veteran status. Are there other factors that should be considered? If I’m an insurance company, can I charge people who have “healthier” habits less than those who are considered “unhealthy”? 
  6. Societal and Environmental Well-being: AI systems should aim to improve society, promote democracy‌ and create environmentally friendly and sustainable systems. Just because you can do something, doesn’t mean you should do something. Business leaders need to critically consider the potential societal impacts of AI. What are the costs invovled in training your AI models? Are they at odds with your environmental, social‌ and corporate governance (ESG) policies? We’ve already seen examples where social media platforms like TikTok are pushing harmful content at kids.
  7. Accountability: AI system designers should be accountable for their systems, which should be auditable and provide a way for those impacted by decisions to rectify and correct any unfair decisions. Designers may be held liable for any harm done to individuals or groups. This raises an interesting question — who is culpable of the system goes haywire? Is it the provider of the foundation model or is it the company that is using generative AI?

While these principles seem intuitive on the surface, there is “substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertainto; and how they should be implemented.”[7]

AI Ethics Considerations for LLMs

Now that we understand the EU AI Ethics guidelines, let’s delve into unique considerations for LLMs. 

In a previous blog titled GenAIOps: Evolving the MLOps Framework, I outlined three key capabilities of generative AI and LLMs which include:

  • Content Generation: Generative AI can generate human-like quality — including text, audio, images/video and even software code. Now, one should note that content generated may not be factually accurate — the onus is on the end-user to make sure the generated content is true and not misleading. Developers need to make sure that the code generated is free of bugs and viruses.
  • Content Summarization and Personalization: The ability to sift through large corpora of documents and quickly summarize the content is a strength of generative AI. In addition to quickly creating summaries of documents, emails and Slack messages, Generative AI can personalize these summaries for specific individuals or personas. 
  • Content Discovery and Q&A: Many organizations have a significant amount of content and data scattered across their organization in different data silos. Many data and analytics vendors are using LLMs and generative AI to automatically discover and connect disparate sources. End-users can then query this data, in plain language, to comprehend key points and drill-down for more detail.

Given these various capabilities, what factors do we need to consider when creating an AI Ethics Framework?

Human Agency and Oversight

Since generative AI can essentially produce content autonomously, there’s a risk that human involvement and oversight may be reduced. If you think about it, how much email spam do you receive daily? Marketing teams create these emails, load them into a marketing automation system and push the “Go” button. These run on auto-pilot and often times, are forgotten and run in perpetuity.

Given that generative AI can produce text, image, audio, video and software code at breakneck speeds — what steps can we put in place ‌to make sure that there is a human-in-the-loop; especially in critical applications? If we’re automating healthcare advice, legal advice and other more “sensitive” types of content, organizations need to think critically about how they can keep their agency and oversight over these systems. Companies need to put safeguards in place to ensure that the decisions being made align with human values and intentions.

Technical Robustness and Safety

It is well known that generative AI models can create content that is unexpected or‌ even harmful. Companies need to rigorously test and validate their generative AI models to make sure they are reliable and safe. Also, if the generated content is erroneous, we need to have a mechanism in place to handle and correct that output. The internet is full of horrible and divisive content and some companies have hired content moderators to try and review suspicious content, but this seems like an impossible task. Just recently, it was reported that some of this content can be quite a detriment to ones mental health (AP News — ​​Facebook content moderators in Kenya call the work ‘torture.’ Their lawsuit may ripple worldwide.)

Privacy and Governance

Generative AI models were trained on data that was gathered from across the internet. Many of the LLM makers do not really disclose the fine details of what data was used to train the model. Now, the models could have been trained on sensitive or private data that should not be publicly available. Just look at Samsung who inadvertently leaked proprietary data (TechCrunch — Samsung bans use of generative AI tools like ChatGPT after April internal data leak). What if generative AI generates outputs that include or resemble real, private data? According to Bloomberg Law, OpenAI was recently serveed a defamation lawsuit over a ChatGPT hallucination. 

We can certainly say that companies need to have a detailed understanding of the sources of data used to train generative AI models. As you fine tune and adapt your models using your own data, it is within your power to either remove or anonymize that data. However, you could still be at risk if the foundation model provider used data that was inappropriate for model training. If this is the case, who is liable?

Transparency

By their nature, “black-box” models are hard to interpret. In fact, many of these LLMs have billions of parameters so I would suggest that they are not interpretable. Companies should strive for transparency and create documentation on how the model works, it’s limitations, risks, and the data that was used to train the model. Again, this is easier said than done.

Diversity, Non-discrimination‌ and Fairness

Related to the above, if not properly trained and accounted for, generative AI can produce biased or discriminatory output. Companies can do their best to ensure that data is diverse and representative, but this is a tall order given that many of the LLM providers do not disclose what data was used for training. In addition to taking all possible precautions with understanding the training data used, its risks and limitations, companies need to put in place a monitoring system to detect this harmful content and a mechanism to flag, prevent its distribution and correct as necessary.

Societal and Environmental Well-being

For companies with ESG initiatives, training LLMs consumes significant amounts of compute — meaning they use quite a bit of electricity. As you begin to deploy generative AI capability, organization’s need to be mindful of the environmental footprint and seek for ways to reduce it. There are several researchers who are looking at ways to reduce model size and accelerate the training process. As this evolves, companies should at least account for the environmental impact in their annual reports.

Accountability

This will be an active area of litigation for several years to come. Who is accountable if generative AI produces harmful or misleading content? Who is legally responsible? Several lawsuits are ‌pending in the U.S. court system which will set the stage for other litigation as things move forward. In addition to harmful content, what if your LLM produces a derivative work? Was your LLM trained on copyrighted or legally protected material? If it produces a data derivative, how will the courts address this? As companies implement generative AI capability – there should be controls and feedback mechanisms put in place so a course of action can be taken to remedy the situation.

Summary

Generative AI holds immense promise in revolutionizing how things get done in the world, but its rapid evolution brings forth a myriad of ethical dilemmas. As companies venture into the realm of generative AI, it’s paramount to navigate its implementation with a deep understanding of established ethical guidelines. By doing so, organizations can harness the transformative power of AI while making sure they uphold ethical standards, safeguarding against potential pitfalls and harms.


[1] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[2] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[3] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[4] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[5] Verma, Sahil, and Julia Rubin. 2018. “Fairness Definitions Explained.” Proceedings of the International Workshop on Software Fairness – FairWare ’18. https://doi.org/10.1145/3194770.3194776.

[6] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[7] Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.

Categories

Latest Posts

Sign Up for Our Newsletter