healthcarereimagined

Envisioning healthcare for the 21st century

AI Ethics at Unilever: From Policy to Process – MIT Sloan Management Review

Posted by timmreardon on 11/21/2023
Posted in: Uncategorized.

Thomas H. Davenport and Randy Bean | November 15, 2023 | Reading Time: 9 min

Many large companies today — most surveys suggest over 70% globally — have determined that artificial intelligence is important to their future and are building AI applications in various parts of their businesses. Most also realize that AI has an ethical dimension and that they need to ensure that the AI systems they build or implement are transparent, unbiased, and fair. 

Thus far, many companies pursuing ethical AI are still in the early stages of addressing it. They might have exhorted their employees to take an ethical approach to AI development and use or drafted a preliminary set of AI governance policies. Most have not done even that; in one recent survey, 73% of U.S. senior leaders said they believe that ethical AI guidelines are important, yet only 6% had developed them.

We see five stages in the AI ethics process: evangelism, where representatives of the company speak about the importance of AI ethics; development of policies, where the company deliberates on and then approves corporate policies around ethical approaches to AI; recording, where the company collects data on each AI use case or application (using approaches such as model cards); review, where the company performs a systematic analysis of each use case (or outsources it to a partner company) to determine whether the case meets the company’s criteria for AI ethics; and action, where the company either accepts the use case as it is, sends it back to the proposing owner for revision, or rejects it.
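
To make the recording stage concrete, here is a minimal sketch of what a model-card-style record might capture, written in Python. This is our own illustration, not Unilever’s actual schema; every field name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal, hypothetical model-card record for the "recording" stage.
# Field names are illustrative, not any company's actual schema.
@dataclass
class ModelCard:
    use_case_name: str             # e.g., "cash-flow forecasting"
    business_owner: str            # the accountable human owner
    purpose: str                   # decision or prediction the model supports
    model_type: str                # e.g., "gradient-boosted trees"
    training_data: str             # description/provenance of the data used
    fully_automated: bool          # True if no human makes the final call
    significant_life_impact: bool  # True if outputs materially affect people
    known_limitations: List[str] = field(default_factory=list)

    def needs_review_flag(self) -> bool:
        """Flag use cases that a policy might bar from full automation."""
        return self.fully_automated and self.significant_life_impact
```

A record like this gives the later review and action stages something systematic to evaluate, rather than relying on ad hoc descriptions of each application.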

It is only in the higher-level stages — review and action — that a company can actually determine whether its AI applications meet the transparency, bias, and fairness standards that it has established. To put those stages in place, a company needs a substantial number of AI projects, processes and systems for gathering information about them, and governance structures for making decisions about specific applications. Many companies do not yet have those preconditions in place, but these will become necessary as companies’ AI maturity and emphasis grow.

Early Policies at Unilever

Unilever, the British consumer packaged goods company whose brands include Dove, Seventh Generation, and Ben & Jerry’s, has long had a focus on corporate social responsibility and environmental sustainability. More recently, the company has embraced AI as a means of dramatically improving operations and decision-making across its global footprint. Unilever’s Enterprise Data Executive, a governance committee, recognized that the company could build on its robust privacy, security, and governance controls by embedding the responsible and ethical use of AI into the company’s data strategies. The goal was to take advantage of AI-driven digital innovation to both maximize the company’s capabilities and promote a fairer and more equitable society. A multifunctional team was created and tasked with exploring what this meant in practice and building an action program to operationalize the objective.

Unilever has now implemented all five of the stages described above, but, looking back, its first step was to create a set of policies. One policy, for example, specified that any decision that would have a significant life impact on an individual should not be fully automated and should instead ultimately be made by a human. Other AI-specific principles that were adopted include the edicts “We will never blame the system; there must be a Unilever owner accountable” and “We will use our best efforts to systematically monitor models and the performance of our AI to ensure that it maintains its efficacy.”

Committee members realized quickly that creating broad policies alone would not be sufficient to ensure the responsible development of AI. To build confidence in the adoption of AI and truly unlock its full potential, they needed to develop a strong ecosystem of tools, services, and people resources to ensure that AI systems would work as they were supposed to.

Committee members also knew that many of the AI and analytics systems at Unilever were being developed in collaboration with outside software and services vendors. The company’s advertising agencies, for example, often employed programmatic buying software that used AI to decide what digital ads to place on web and mobile sites. The team concluded that its approach to AI ethics needed to include attention to externally sourced capabilities.

Developing a Robust AI Assurance Process

Early on in Unilever’s use of AI, the company’s data and AI leaders noticed that some of the issues with the technology didn’t involve ethics at all — they involved systems that were ineffective at the tasks they were intended to accomplish. Giles Pavey, Unilever’s global director of data science, who had primary responsibility for AI ethics, knew that efficacy was an important component of evaluating an AI use case. “A system for forecasting cash flow, for example, might involve no fairness or bias risk but may have some risk of not being effective,” he said. “We decided that efficacy risk should be included along with the ethical risks we evaluate.” The company began to use the term AI assurance to broadly encompass its review of a tool’s effectiveness and ethics.

The basic idea behind the Unilever AI assurance compliance process is to examine each new AI application to determine how intrinsically risky it is, in terms of both effectiveness and ethics. The company already had a well-defined approach to information security and data privacy, and the goal was to employ a similar approach to ensure that no AI application was put into production without first being reviewed and approved. Integrating the new process into the compliance areas that Unilever already had in place, such as privacy risk assessment, information security, and procurement policies, would be the ultimate sign of success.

Debbie Cartledge, who took on the role of data and AI ethics strategy lead for the company, explained the process the team adopted: 

When a new AI solution is being planned, the Unilever employee or supplier proposes the outlined use case and method before developing it. This is reviewed internally, with more complex cases being manually assessed by external experts. The proposer is then informed of potential ethical and efficacy risks and mitigations to be considered. After the AI application has been developed, Unilever, or the external party, runs statistical tests to ascertain whether there is a bias or fairness issue and could examine the system for efficacy in achieving its objectives. Over time, we expect that a majority of cases can be fully assessed automatically based on information about the project supplied by the project proposer.
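
The article does not say which statistical tests Unilever or Holistic AI actually run. One widely used check of the kind Cartledge describes is a disparate-impact ratio across demographic groups, sketched below in Python under our own assumptions (the data and the 0.8 “four-fifths” threshold are illustrative, not Unilever’s actual criteria).

```python
import numpy as np

def selection_rates(outcomes: np.ndarray, groups: np.ndarray) -> dict:
    """Favorable-outcome rate (e.g., 'advance the candidate') per group."""
    return {g: float(outcomes[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(outcomes: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest; a common
    rule of thumb flags ratios below 0.8 (the 'four-fifths rule')."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative data: 1 = favorable model decision, 0 = unfavorable.
outcomes = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0])
groups = np.array(["A"] * 5 + ["B"] * 5)

ratio = disparate_impact_ratio(outcomes, groups)
print(f"disparate impact ratio = {ratio:.2f}")  # 0.50 here: flag for review
```

A test like this covers only one narrow notion of bias; a full review would combine several fairness metrics with the efficacy checks the quote mentions.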

Depending on where within the company the system will be employed, there might also be local regulations for the system to comply with. Resume screening, for example, is currently done by human reviewers; if a fully automated resume-screening system were proposed, the review might conclude that it needs a human in the loop to make final decisions about whether to move a candidate to interview. If there are serious risks that can’t be mitigated, the AI assurance process will reject the application on the grounds that Unilever’s values prohibit it. Final decisions on AI use cases are made by a senior executive board that includes representatives from the legal, HR, and data and technology departments.

Here’s an example: The company has areas in department stores where it sells its cosmetics brands. A project was developed to use computer vision AI to automatically register sales agents’ attendance through daily selfies, with a stretch objective to look at the appropriateness of agents’ appearance. Because of the AI assurance process, the project team broadened their thinking beyond regulations, legality, and efficacy to also consider the potential implications of a fully automated system. They identified the need for human oversight in checking photos flagged as noncompliant and taking responsibility for any consequent actions. 

Working With an Outside Partner, Holistic AI

Unilever’s external partner in the AI assurance process is Holistic AI, a London-based company. Founders Emre Kazim and Adriano Koshiyama have both worked with Unilever AI teams since 2020, and Holistic AI became a formal partner for AI risk assessment in 2021. 

Holistic AI has created a platform to manage the process of reviewing AI assurance. In this context, “AI” is a broad category that encompasses any type of prediction or automation; even an Excel spreadsheet used to score HR candidates would be included in the process. Unilever’s data ethics team uses the platform to review the status of AI projects and can see which new use cases have been submitted; whether the information is complete; and what risk-level assessment they have received, coded red, yellow (termed “amber” in the U.K.), or green. 

The traffic-light status is assessed at three points: at triage, after further analysis, and after final mitigation and assurance. At this final point, the ratings have the following interpretations: A red rating means the AI system does not comply with Unilever standards and should not be deployed; yellow means the AI system has some acceptable risks, which the business owner is responsible for being aware of and taking ownership of; and green means the AI system adds no risks to the process. Only a handful of the several hundred Unilever use cases have received red ratings thus far, including the cosmetics one described above. All of the submitters were able to resolve the issues with their use cases and move them up to a yellow rating.

For leaders of AI projects, the platform is the place to start the review process. They submit a proposed use case with details, including its purpose, the business case, the project’s ownership within Unilever, team composition, the data used, the type of AI technology employed, whether it is being developed internally or by an external vendor, the degree of autonomy, and so forth. The platform uses the information to score the application in terms of its potential risk. The risk domains include explainability, robustness, efficacy, bias, and privacy. Machine learning algorithms are automatically analyzed to determine whether they are biased against any particular group. 
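
The article does not disclose how the platform actually computes these scores. The Python sketch below shows the general shape of such a triage step, using the risk domains and red/amber/green interpretations named above; the thresholds and the worst-domain rule are hypothetical, not Holistic AI’s actual logic.

```python
# Hypothetical triage scoring over the risk domains named in the article.
# Thresholds are illustrative, not Holistic AI's actual values.
RISK_DOMAINS = ("explainability", "robustness", "efficacy", "bias", "privacy")

def triage_rating(domain_scores: dict) -> str:
    """Map per-domain risk scores (0 = no risk, 1 = maximum risk) to a
    red/amber/green status, letting the worst domain drive the rating."""
    worst = max(domain_scores[d] for d in RISK_DOMAINS)
    if worst >= 0.7:
        return "red"    # does not comply with standards; do not deploy
    if worst >= 0.3:
        return "amber"  # acceptable risks; business owner must own them
    return "green"      # adds no risk to the process

print(triage_rating({
    "explainability": 0.2, "robustness": 0.1, "efficacy": 0.4,
    "bias": 0.1, "privacy": 0.2,
}))  # -> "amber": the efficacy risk needs an accountable owner
```

In practice such a rating would be recomputed at each of the three assessment points described above (triage, further analysis, and final mitigation and assurance).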

An increasing percentage of the evaluations in the Holistic AI platform are based on the European Union’s proposed AI Act, which also ranks AI use cases into three categories of risk (unacceptable, high, and not high enough to be regulated). The act is being negotiated among EU countries with hopes for an agreement by the end of 2023. Kazim and Koshiyama said that even though the act will apply only to European businesses, Unilever and other companies are likely to adopt it globally, as they have with the EU’s General Data Protection Regulation.

Kazim and Koshiyama expect Holistic AI to be able to aggregate data across companies and benchmark across them in the future. The software could assess benefits versus costs, the efficacy of different external providers of the same use case, and the most effective approaches to AI procurement. Kazim and Koshiyama have also considered making risk ratings public in some cases and partnering with an insurance company to insure AI use cases against certain types of risks. 

We’re still in the early stages of ensuring that companies take ethical approaches to AI, but that doesn’t mean it’s enough to issue pronouncements and policies with no teeth. Whether AI is ethical or not will be determined use case by use case. Unilever’s AI assurance process, and its partnership with Holistic AI to evaluate the ethical risk level of each use case, is the only current way to ensure that AI systems are aligned with human interests and well-being.

Article link: https://sloanreview.mit.edu/article/ai-ethics-at-unilever-from-policy-to-process/
