AI Bill of Rights – Photo by Author – David E. Sweenor

A Strategic Guide for Business Leaders

Do you know what happened in 1789? That’s when the United States’ first Congress proposed twelve amendments to our fledgling Constitution. Two years later, in 1791, ten were ratified. These original ten constitutional amendments became known as our Bill of Rights.[1] Fast forward to 2023, and we’re at another first. Just about every board of directors (BoD) at any company worth its salt is planning or already implementing generative artificial intelligence (AI). The technology is simply too transformative to ignore.

However, the question of AI regulation remains front-page news, with various members of Congress calling for action. In July 2023, Amazon, Microsoft, Google, Meta, and others agreed to comply with a voluntary set of AI guidelines set forth by the Biden administration.[2] Did they agree to follow the guidelines for the public good, or merely to protect their turf and shape future legislation to align with their interests?

As I discussed in Regulating Generative AI, the draft European Union (EU) AI Act approaches AI governance through risk-based legislation for EU member states.[3] As multinationals embed AI across their core business processes, the question of regulatory compliance will undoubtedly be the topic of conversation in many board meetings. How does the United States (US) view AI regulations?

In October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights.[4] Unlike its European counterpart, which attempts to codify a regulatory framework, the AI Bill of Rights merely puts forth a set of non-binding principles or guidelines that organizations should voluntarily adhere to. In addition to the federal guidelines, several states and local municipalities are implementing their own AI regulations covering everything from AI-assisted hiring and the ability to opt out of AI decisions to facial recognition.[5]

Business leaders are keenly aware that generative AI will significantly alter how work gets done and are scrambling to figure out how to adopt it within their organizations. Not only are they trying to understand the different Business Applications of Generative AI, but they also need to understand how to prepare and align to various regulations and guidelines coming down the pike.[6]

Why Should Business Leaders Care?

As we are witnessing firsthand, traditional and generative AI is not just a technological marvel–it’s a business imperative. As AI permeates cross-functional business processes, its influence extends far beyond mere automation. The AI Bill of Rights provides a framework that business leaders can follow to keep their companies competitive. Here’s why business leaders should pay attention:

  • Reputational risks: AI can make or break a company. Remember MD Anderson’s investment in IBM Watson, resulting in a $62M loss?[7] How about Zillow’s $304M write-down?[8] Although these companies still exist, the financial losses were non-trivial. How organizations adopt and deploy AI responsibly can be a significant differentiator. Those who proactively set up AI governance boards and attempt to adhere to the principles outlined in the AI Bill of Rights will be on better footing than those who do not. Conversely, those who neglect to consider them do so at their own peril. These companies will risk public backlash, negative press, and financial losses.
  • Improved productivity: AI systems, when designed and deployed correctly, are the fuel for increased efficiency, innovation, and growth. However, left unchecked, AI can lead to operational failures, from biased algorithms that make poor business decisions to data breaches that expose sensitive information. AI biases are well known and documented–from performing poorly in facial recognition tasks for darker-skinned people to granting women lower credit card limits than their husbands–AI bias can pose a serious threat to a company.[9] Aligning with frameworks like the AI Bill of Rights and the National Institute of Standards and Technology (NIST) guidance helps ensure that AI deployments are impactful and safe–creating operational efficiencies without compromising integrity.[10],[11]
  • Legal implications: As the AI regulatory landscape evolves, businesses must be ever-vigilant and stay one step ahead of the curve to avoid costly legal pitfalls. For example, healthcare insurer Cigna is currently being sued for improperly denying claims–the claims were supposed to be reviewed by a human, but the algorithm was apparently the sole decider.[12] While non-binding, the AI Bill of Rights offers a glimpse into the potential future of AI regulation in the U.S. Aligning with these principles will better position the company should these guidelines somehow manage to make it through Congress and become law. We can also expect similar state-level regulations like the California Consumer Privacy Act (CCPA) that businesses currently have to follow.[13]
  • Competitive advantage with Gen Z and Millennials: Ethical or trustworthy AI isn’t just about avoiding risks; it’s about creating a competitive advantage. Companies prioritizing transparent, fair, and privacy-centric AI practices will differentiate themselves from their peers. After all, there’s an ever-growing segment of Millennials and Gen Zers who emphasize openness, transparency, diversity, and inclusion. “As purchasing power transfers from older to younger generations, brand trust is becoming increasingly important for businesses. Working and associating with firms they trust—companies whose brands are portrayed openly and honestly and that align with their values—is most important to Millennial and Gen Z customers.”[14]
  • Consumer and regulatory demand: In this day and age, savvy consumers are beginning to demand that AI be fair and transparent. The Federal Trade Commission (FTC) noted that “Consumers are voicing concerns about harms related to AI—and their concerns span the technology’s lifecycle, from how it’s built to how it’s applied in the real world.”[15] Additionally, regulators are demanding greater accountability from businesses. By aligning with the AI Bill of Rights, companies can demonstrate their commitment to these values.

So, now that I have your attention, what are the principles?

Unpacking the Principles: Implications for Business Strategy

The AI Bill of Rights contains five main principles. Let’s review each and their strategic business implications and recommended actions: 

1.   Safe and Effective Systems

As I argued in my post on Generative AI Ethics, since AI can generate content autonomously, it can adversely impact wide swaths of the population at breakneck speed.[16] Thus, it is extremely important for business leaders to prioritize the development of safe and effective systems. Failing to do so could damage reputations, open the door for lawsuits, and increase overall costs.

So, as a business leader, what can you do?

Implement GenAIOps Governance 

All of your competitors will adopt and implement AI, so merely deploying AI isn’t enough. AI systems must be reliable, efficient, and, above all, safe. Before deployment, business leaders must invest in rigorous testing, validation, and an ongoing GenAIOps governance and monitoring process. Unlike traditional MLOps, where tooling is mature, most companies have not yet developed systems to monitor generative AI output, and vendors have not yet caught up to this emerging need. Monitoring numeric output is fairly straightforward, but monitoring words, images, videos, and music takes an entirely new set of capabilities.
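To make the idea concrete, here is a minimal, illustrative sketch of a monitoring gate for generated text. Everything in it is a hypothetical placeholder–real deployments would use trained safety classifiers and dedicated tooling rather than keyword patterns–but the shape is the point: every output passes through a review step that can escalate to a human.

```python
import re

# Hypothetical blocklist and limits -- stand-ins for real safety
# classifiers. The SSN pattern illustrates screening for leaked PII.
BLOCKED_PATTERNS = [r"\bSSN\b", r"\b\d{3}-\d{2}-\d{4}\b"]
MAX_OUTPUT_CHARS = 2000

def review_generation(text: str) -> dict:
    """Screen one piece of generated text and return a monitoring record."""
    flags = [p for p in BLOCKED_PATTERNS if re.search(p, text)]
    return {
        "length_ok": len(text) <= MAX_OUTPUT_CHARS,
        "pattern_flags": flags,
        "needs_human_review": bool(flags) or len(text) > MAX_OUTPUT_CHARS,
    }
```

The specific checks matter far less than logging every record and routing flagged outputs to a person before they reach a customer.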

Prioritize diversity 

Ensure that you have diverse teams developing AI systems to help improve safety and efficacy and reduce bias. In fact, the Harvard Business Review (HBR) stated: “To combat bias in AI, companies need more diverse AI talent.”[17] The research is crystal clear on this one–diverse teams create less-biased AI systems. By prioritizing diversity and ongoing monitoring, businesses can ensure that AI systems adapt and evolve, aligning with changing business objectives and market dynamics.

Design for transparency

At its core, the U.S. has a long-standing history of promoting freedom of the press–in fact, that’s our First Amendment. Developing AI systems should be similar: they should be designed to be open and transparent. Third parties should be able to inspect, audit, and understand how these systems work and how they make decisions (to the extent that it is technically possible). My post on Regulating Generative AI highlights how the Stanford HELM team was able to evaluate LLMs with a standard set of criteria–a similar set of criteria should be created for AI systems.[18]

2.    Algorithmic Discrimination Protections

As previously mentioned, left unchecked, AI can discriminate against large groups of people–whether it be due to biased training data, human oversight, or computational (statistical) biases. There are a number of laws within the United States that prevent discrimination on the basis of protected classes like age, gender, sexual orientation, religious beliefs, and other factors. AI systems can amplify and perpetuate these biases. Even if it is unintentional, adverse outcomes can occur, and your company must take steps to reduce or even eliminate this possibility.

How can you mitigate this risk?

Continual monitoring and testing

As a part of your core business operations, you need to continually monitor and test AI systems. This includes using representative data, protecting against proxies for demographic features, and ensuring accessibility for people with disabilities in design and development.
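As a hedged sketch of what one such test might check, the snippet below computes approval rates by demographic group and the largest gap between any two groups. This is demographic parity–just one fairness signal among many–and the group labels and acceptable gap are placeholders that would come from your own data and risk policy:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())
```

Run on a recent window of production decisions, a gap above your chosen threshold would trigger investigation rather than an automatic verdict of bias–parity gaps can have legitimate explanations, which is exactly why a human needs to look.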

Red teaming

Businesses employ red teaming to test software security: essentially, an organization hires a team to attempt to break the system. These teams use all of the tools and techniques that the bad guys do, with the goal of understanding how to improve the security of the overall system. AI systems should be no different–since models are frequently updated and data drifts, they need to be continually tested.

Develop an AI compliance report 

Hold your company accountable. Similar to what we see with environmental, social, and governance (ESG) or diversity, equity, and inclusion (DEI) initiatives, many companies have created reports that highlight their accomplishments and progress toward these initiatives as well as remaining gaps. A similar approach should be taken with AI.

3.   Data Privacy

Data is the hottest commodity of our day. Not a day goes by when I don’t read about a data breach. And for the most part, enforcement is generally weak and punishments are lackluster. Do we need to hold businesses more accountable for these breaches? Businesses must strike a balance between leveraging data for AI and ensuring privacy. Privacy statements alone aren’t enough–when I read my HIPAA statements, I’m left wondering who my doctor doesn’t share my information with. Privacy needs to be at the core of your organization’s DNA. We are quite a ways behind the EU on the data privacy front, but I’m still hopeful.

Given this, what can you do?

Privacy by design 

When it comes to AI, privacy should be a priority from the very start. That means ditching the dense legal jargon of traditional privacy policies that no one reads. Privacy needs to be woven into an AI system’s core design.

Only collect the bare minimum amount of user data required–even though it’s tempting, don’t hoover up extras unnecessarily. Give people clear and easy ways to control their information, like opt-in consent prompts and settings dashboards. Users should be fully informed about how their data will be used up front, not find out after the fact. Ongoing access and transparency are crucial too, so people stay empowered.

Basically, privacy protections shouldn’t be an afterthought. They need to be built in from the ground up.

Consent Management

Entities should allow the withdrawal of data access consent (where legally allowed), resulting in the deletion of user data and timely removal of their data from derived systems. We do see some progress with more and more applications providing users with the ability to delete their data.
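As a toy sketch of the mechanics, consent withdrawal has to cascade: deleting a record from the primary store isn’t enough if copies live on in derived systems (feature stores, training sets, analytics marts). The class below is a hypothetical illustration–real systems would queue deletion jobs to each downstream service and verify completion:

```python
class ConsentRegistry:
    """Toy registry: withdrawing consent removes user data everywhere.

    `stores` is a list of dict-like data stores -- the primary store plus
    any derived systems that hold copies of user data.
    """

    def __init__(self, stores):
        self.stores = stores
        self.consented = set()

    def grant(self, user_id):
        self.consented.add(user_id)

    def withdraw(self, user_id):
        self.consented.discard(user_id)
        for store in self.stores:  # propagate deletion to derived systems
            store.pop(user_id, None)
```

The design choice worth copying is that withdrawal and deletion are one operation, so derived systems can never be forgotten.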

Create a report card

Businesses should provide regular public reports describing data security lapses, breaches, ethical pre-reviews, and other pertinent data-related activities. Federal, state, and local governments should aggregate this data and make it publicly available. In the end, transparency always wins the day.

4.   Notice and Explanation

Whether you’re applying for college, insurance, or a new credit card–AI plays a key role in most of these decisions. As consumers interact with AI, transparency is a critical requirement. When a business makes a decision, the affected individual has the right to know how that decision was made and the right to appeal that decision. 

To improve transparency, leaders should:

Provide Clear Notices

Ensure that users receive timely and up-to-date notices about the use of automated systems. Notices should be clear, brief, and easily understandable. These notices should be designed in a way that they are accessible to users with diverse needs, including different languages and reading levels. Furthermore, they should be presented in multiple forms, such as on paper, physical signs, or online, to ensure maximum accessibility and understanding.

Offer Valid Explanations

Automated systems should provide technically valid, meaningful, and useful explanations tailored to the user and any operators who need to understand the system. Explanations should be calibrated based on the level of risk associated with the decision and should be presented in a manner that is both scientifically supportable and user-friendly. Where possible, error ranges or uncertainties associated with the explanation should be communicated to the user.

Public Reporting

Organizations should make public reports that include summary information about their automated systems in plain language. These reports should assess the clarity and quality of the notices and explanations provided. Regular reporting not only promotes transparency but also demonstrates the organization’s commitment to ethical AI practices. It also provides an opportunity for external validation and feedback, ensuring continuous improvement in the system’s transparency and accountability.

5.   Human Alternatives, Consideration, and Fallback

Have you ever called a customer service number only to be stuck in one of the circles of Dante’s Inferno? I had an old industry analyst friend who said, “just because AI makes a decision doesn’t mean you have to follow the decision.” For any business process, organizations need an escape plan so they can continue to operate without AI. Yes, it will be slower and more painful (similar to when credit card machines stop working), but the option needs to exist.

How can you prepare?

Allow for AI Opt-Out

AI systems should provide users with the ability to opt-out in favor of a human alternative, where appropriate. This includes giving clear, accessible notice and instructions. Opting out should be timely and not unreasonably burdensome, both in the process of requesting to opt-out and in the human-driven alternative provided.
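The routing logic itself can be simple. The sketch below is an assumption-laden illustration–the 0.9 confidence threshold is a placeholder, and the right value depends on the risk of the decision being automated:

```python
def route_decision(user_opted_out: bool, model_confidence: float,
                   threshold: float = 0.9) -> str:
    """Route a case to the AI pipeline or a human reviewer.

    An opt-out always wins; low model confidence also escalates to a human.
    The threshold is illustrative and should be tuned to the decision's risk.
    """
    if user_opted_out or model_confidence < threshold:
        return "human"
    return "ai"
```

The harder work is organizational: the “human” path has to be staffed and timely, or the opt-out right exists only on paper.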

Ensure Timely Human Consideration

In the event of a system failure or error, or if a user wants to appeal or contest its impacts, timely human consideration and remedy should be available. The availability and oversight of human consideration should be proportionate to the potential impact of the automated system.

Watch the Watchman

As my business school professor quipped on more than one occasion, “who watches the watchman?” Automated systems used in sensitive domains, such as criminal justice, employment, education, and health, should have additional human oversight. This includes tailoring the system to its purpose, providing access for oversight, training for those interacting with the system, and incorporating human consideration for high-risk decisions.


For internal stakeholders, understanding the use cases and decisions made by AI leads to better strategic decisions. However, a human always needs to be accountable, so organizations will need to build these principles into their workflows. It’s not a set-it-and-forget-it situation. Designing safe and effective systems, preventing discrimination, building in data privacy, providing notices and explanations, and allowing for human alternatives will help set your organization on a path to success.

If you enjoyed this article, please like the article, highlight interesting sections, and share comments. Consider signing up for the newsletter at the bottom of the page as well as following me on Medium and LinkedIn.

[1] National Archives. 2023. “The Bill of Rights: A Transcription.” National Archives. The U.S. National Archives and Records Administration. April 21, 2023.

[2] Shear, Michael D., Cecilia Kang, and David E. Sanger. 2023. “Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools.” The New York Times, July 21, 2023, sec. U.S.

[3] Sweenor, David. 2023. “Regulating Generative AI.” Medium. August 8, 2023.

[4] The White House. 2022. “Blueprint for an AI Bill of Rights.” The White House. 2022.

[5] Lazzaro, Sage. 2023. “Will U.S. States Figure out How to Regulate AI before the Feds?” Fortune. September 29, 2023.

[6] Sweenor, David. 2023b. “Business Applications of Generative AI.” Medium. September 12, 2023.

[7] Herper, Matthew. 2017. “MD Anderson Benches IBM Watson in Setback for Artificial Intelligence in Medicine.” Forbes. February 19, 2017.

[8] Olavsrud, Thor. 2022. “9 Famous Analytics and AI Disasters.” CIO. April 15, 2022.

[9] Bousquette, Isabelle. 2023. “Rise of AI Puts Spotlight on Bias in Algorithms.” WSJ. March 9, 2023.

[10] Vassilev, Apostol. 2023. “Powerful AI Is Already Here: To Use It Responsibly, We Need to Mitigate Bias.” NIST, February.

[11] Schwartz, Reva, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall. 2022. “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.” Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, March.

[12] “Cigna Health Giant Accused of Improperly Rejecting Thousands of Patient Claims Using an Algorithm.” 2023. AP News. July 26, 2023.

[13] State of California Department of Justice. 2023. “California Consumer Privacy Act (CCPA).” State of California – Department of Justice – Office of the Attorney General. May 10, 2023.

[14] Heyward, Chastity. 2022. “Council Post: Why Branding Your Business Is Important in 2022.” Forbes. June 16, 2022.

[15] Fondrie-Teitler, Simon, and Amritha Jayanti. 2023. “Consumers Are Voicing Concerns about AI.” Federal Trade Commission. September 30, 2023.

[16] Sweenor, David. 2023a. “Generative AI Ethics.” Medium. July 28, 2023.

[17] Li, Michael. 2020. “To Build Less-Biased AI, Hire a More-Diverse Team.” Harvard Business Review. October 26, 2020.

[18] Sweenor, David. 2023. “Regulating Generative AI.” Medium. August 8, 2023.