In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models. The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.
Any model made available in the EU without first passing extensive – and expensive – licensing would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Open-source developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
If enacted, enforcement would be out of the hands of EU member states. Under the AI Act, third parties could sue national governments to compel fines. The act has extraterritorial jurisdiction. A European government could be compelled by third parties to seek conflict with American developers and businesses.
The Amended AI Act
The PDF of the actual text runs 144 pages, and its provisions follow a different formatting style from American statutes. This thing is a complicated pain to read. I’ve added the page numbers of the relevant sections in the linked PDF of the law.
Here are the main provisions:
Very Broad Jurisdiction: The act includes “providers and deployers of AI systems that have their place of establishment or are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union.” (pg 68-69).
You have to register your “high-risk” AI project or foundational model with the government. Projects will be required to register the anticipated functionality of their systems. Systems that exceed this functionality may be subject to recall. This will be a problem for many of the more anarchic open-source projects. Registration will also require disclosure of data sources used, computing resources (including time spent training), performance benchmarks, and red teaming. (pg 23-29).
Expensive Risk Testing Required. Apparently, the various EU states will carry out “third party” assessments in each country, on a sliding scale of fees depending on the size of the applying company. Testing must use benchmarks that have yet to be created. Post-release monitoring is required (presumably by the government). Recertification is required if models show unexpected abilities. Recertification is also required after any substantial training. (pg 14-15, see provision 4 a for clarity that this is government testing).
Risks Very Vaguely Defined: The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy? Could this act itself be a risk to democracy? (pg 26).
Open Source LLMs Not Exempt: Open source foundational models are not exempt from the act. The programmers and distributors of the software have legal liability. For other forms of open source AI software, liability shifts to the group employing the software or bringing it to market. (pg 70).
API Essentially Banned. APIs allow third parties to implement an AI model without running it on their own hardware. Some implementation examples include AutoGPT and LangChain. Under these rules, if a third party, using an API, figures out how to get a model to do something new, that third party must then get the new functionality certified.
The original provider is required, under the law, to give the third party what would otherwise be confidential technical information so that the third party can complete the licensing process. The ability to compel confidential disclosures means that startup businesses and other tinkerers are essentially banned from using an API, even if the tinkerer is in the US. The tinkerer might make their software available in Europe, which would give rise to a need to license it and compel disclosures. (pg 37).
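To make concrete how thin this access layer is, here is a minimal sketch of what third-party API use looks like. The endpoint URL, model name, and response schema below are invented placeholders rather than any particular provider’s real API; the point is simply that the model runs on the provider’s hardware and the tinkerer only sends text over HTTP.

```python
# Minimal sketch of third-party API access to a hosted generative model.
# The URL, model name, header format, and response schema are illustrative
# placeholders, not any specific provider's real API.
import os
import requests

API_URL = "https://api.example-provider.com/v1/generate"  # hypothetical endpoint
API_KEY = os.environ.get("PROVIDER_API_KEY", "sk-placeholder")

def summarize_contract(text: str) -> str:
    """Send a prompt to the hosted model and return its completion."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-large-model",  # placeholder model name
            "prompt": f"Summarize this contract in plain English:\n{text}",
            "max_tokens": 300,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # response field is also illustrative

if __name__ == "__main__":
    print(summarize_contract("The party of the first part agrees to..."))
```

Under the amended act, a wrapper like this that coaxes a “new functionality” out of the model and becomes available in the EU would itself need certification, and the provider could be compelled to hand over confidential technical details to support that process.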
Open Source Developers Liable. The act is poorly worded. It does not cover free and open-source AI components, but foundational models (LLMs) are considered separate from components. What this seems to mean is that you can open-source traditional machine learning models but not generative AI.
If an American open-source developer placed a model, or code using an API, on GitHub – and the code became available in the EU – the developer would be liable for releasing an unlicensed model. Further, GitHub would be liable for hosting an unlicensed model. (pg 37 and 39-40).
LoRA Essentially Banned. LoRA is a technique for cheaply adding new information and capabilities to a model a little at a time. Open-source projects use it because they cannot afford billion-dollar computing infrastructure. Major AI models are also rumored to use it, as such training is both cheaper and easier to safety-check than new versions of a model that introduce many new features at once. (pg 14).
If an open-source project could somehow get the required certificates, it would need to recertify every time LoRA was used to expand the model.
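For readers unfamiliar with the technique, here is a minimal sketch of the core LoRA idea in PyTorch, assuming nothing beyond the standard library and torch: the pretrained weights are frozen and only two small low-rank matrices are trained. It is an illustration of why the approach is cheap, not the implementation used by any particular model.

```python
# Minimal sketch of a LoRA-style adapter in PyTorch (illustrative only).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze the pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank update: only A and B are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the small trainable correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Wrapping one 4096x4096 layer: instead of updating ~16.8M weights,
# training touches only 2 * 8 * 4096 = 65,536 parameters.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

Merging such an adapter back into a released model is exactly the kind of “substantial training” that, on this reading of the act, would trigger recertification each time.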
Deployment Licensing. Deployers – people or entities using AI systems – are required to undergo a stringent permitting review process before launch. EU small businesses are exempt from this requirement. (pg 26).
Ability of Third Parties to Litigate. Concerned third parties have the right to litigate through a country’s AI regulator (established by the act). This means that the deployment of an AI system can be individually challenged in multiple member states. Third parties can litigate to force a national AI regulator to impose fines. (pg 71).
Very Large Fines. Fines for non-compliance range from 2% to 4% of a company’s gross worldwide revenue. For individuals, that can reach €20,000,000. European-based SMEs and startups get a break when it comes to fines. (pg 75).
R&D and Clean Energy Systems In The EU Are Exempt. AI can be used for R&D tasks or clean energy production without complying with this system. (pg 64-65).
AI Act and US Law
The broad grant of extraterritorial jurisdiction is going to be a problem. The AI Act would let any crank with a problem about AI – at least if they are EU citizens – force EU governments to take legal action if unlicensed models were somehow available in the EU. That goes very far beyond simply requiring companies doing business in the EU to comply with EU laws.
The top problem is the API restrictions. Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill. A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price. Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.
What the EU wants runs contrary to what the FTC is demanding. For an American company to actually impose such restrictions in the US would bring up a host of anti-trust problems. Model training costs already limit availability to highly capitalized actors. The FTC has been very frank that they do not want to see a repeat of the Amazon situation, where a larger company uses its position to secure the bulk of profits for itself – at the expense of smaller partners. Acting in the manner the AI Act seeks would bring up major anti-trust issues for American companies.
Outside of the anti-trust provisions, the AI Act’s punishment of innovation represents a conflict point. For American actors, finding a new way to use software to make money is a good thing. Under the EU act, finding a new way to use software voids the safety certification, requiring a new licensing process. Disincentives to innovation are likely to cause friction given the statute’s extraterritorial reach.
Finally, the open source provisions represent a major problem. The AI Act treats open-source developers working on or with foundational models as bad actors. Developers and, seemingly, distributors are liable for releasing unlicensed foundation models – or, apparently, foundation-model-enhancing code. For all other forms of open-source machine learning, the responsibility for licensing falls to whoever is deploying the system.
Trying to sanction parts of the tech ecosystem is a bad idea. Open-source developers are unlikely to respond well to being told by a government that they can’t program something – especially if the government isn’t their own. Additionally, what happens if GitHub and the various co-pilots simply say that Europe is too difficult to deal with and shut down access? That may have repercussions that have not been thoroughly thought through.
Defects of the Act
To top everything off, the AI Act appears to encourage unsafe AI. It seeks to encourage narrowly tailored systems. We know from experience – especially with social media – that such systems can be dangerous. Infamously, many social media algorithms only look at the engagement value of content. They are structurally incapable of judging the effect of the content. Large language models can at least be trained that pushing violent content is bad. From an experience standpoint, the foundational models that the EU is afraid of are safer than the narrow systems the act is pushing developers toward.
This is a deeply corrupt piece of legislation. If you are afraid of large language models, then you need to be afraid of them in all circumstances. Giving R&D models a pass shows that you are less than serious about your legislation. The most likely effect of such a policy is to create a society where the elite have access to R&D models, and nobody else – including small entrepreneurs – does.
I suspect this law will pass, and I suspect the EU will find that they have created many more problems than they anticipated. That’s unfortunate as some of the regulations, especially relating to algorithms used by large social networks, do need addressing.
I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.
Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.
At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.
The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I’m getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”
But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster.
Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”
That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.
“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”
Foundations
Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.
One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.
But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.
“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.
“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”
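To make “learning by changing the strengths of connections” concrete, here is a toy sketch in PyTorch of the loop that backpropagation enables: compute an error, compute how each connection strength contributed to it, and nudge the strengths accordingly. The task and numbers are invented for illustration.

```python
# Toy illustration: a tiny network "learns" by nudging its connection strengths.
import torch

torch.manual_seed(0)
x = torch.rand(100, 2)                               # 100 examples, 2 inputs each
y = (x.sum(dim=1, keepdim=True) > 1.0).float()       # target: is x1 + x2 > 1?

weights = torch.randn(2, 1, requires_grad=True)      # the "connection strengths"
bias = torch.zeros(1, requires_grad=True)

for step in range(500):
    pred = torch.sigmoid(x @ weights + bias)                      # forward pass
    loss = torch.nn.functional.binary_cross_entropy(pred, y)      # how wrong are we?
    loss.backward()                                   # backpropagation: gradients per weight
    with torch.no_grad():
        weights -= 0.5 * weights.grad                 # adjust connection strengths
        bias -= 0.5 * bias.grad
        weights.grad.zero_()
        bias.grad.zero_()

print(f"final loss: {loss.item():.3f}")               # small loss = the network has "learned"
```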
A new intelligence
For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”
Hinton’s fears will strike many as the stuff of science fiction. But here’s his case.
As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction as much energy as neural networks do.
“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”
Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.
Compare a pretrained large language model with a human in the speed of learning a task like that and the human’s edge vanishes, he says.
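In practice, few-shot learning with a pretrained model can be as simple as placing a handful of worked examples in the prompt. A minimal sketch follows; the prompt format is generic and `call_model` stands in for whatever hosted or local model is available, so treat it as an illustration rather than any specific API.

```python
# Few-shot prompting sketch: the "training" is just a few examples in the prompt.
EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
    ("Arrived on time, does what it says.", "positive"),
]

def build_few_shot_prompt(new_review: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("The screen cracked the first week.")
print(prompt)
# label = call_model(prompt)   # call_model is a placeholder for any LLM interface
```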
What about the fact that large language models make so much stuff up? Known as “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say.
Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”
The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice.
We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”
Of course, brains still do many things better than computers: drive a car, learn to walk, imagine the future. And brains do it on a cup of coffee and a slice of toast. “When biological intelligence was evolving, it didn’t have access to a nuclear power station,” he says.
But Hinton’s point is that if we are willing to pay the higher costs of computing, there are crucial ways in which neural networks might beat biology at learning. (And it’s worth pausing to consider what those costs entail in terms of energy and carbon.)
Learning is just the first string of Hinton’s argument. The second is communicating. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
What does all this add up to? Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”
That’s a huge claim. But AI is a polarized field: it would be easy to find people who would laugh in his face—and others who would nod in agreement.
People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”
Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.”
How it could all go wrong
Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”
He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.
“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”
Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?
“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”
There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”
Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”
But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”
“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”
Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”
Just look up
One of Hinton’s priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.
Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.
This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.
Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.
“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.
Hinton’s argument is sobering. I share his bleak assessment of people’s collective inability to act when faced with serious threats. It is also true that AI risks causing real harm—upending the job market, entrenching inequality, worsening sexism and racism, and more. We need to focus on those problems. But I still can’t make the jump from large language models to robot overlords. Perhaps I’m an optimist.
When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
This week’s big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.
But first, we need to talk about consent in AI.
Last week, OpenAI announced it is launching an “incognito” mode that does not save users’ conversation history or use it to improve its AI language model ChatGPT. The new feature lets users switch off chat history and training and allows them to export their data. This is a welcome move in giving people more control over how their data is used by a technology company.
OpenAI’s decision to allow people to opt out comes as the firm is under increasing pressure from European data protection regulators over how it uses and collects data. OpenAI had until yesterday, April 30, to accede to Italy’s requests that it comply with the GDPR, the EU’s strict data protection regime. Italy restored access to ChatGPT in the country after OpenAI introduced a user opt-out form and the ability to object to personal data being used in ChatGPT. The regulator had argued that OpenAI had hoovered up people’s personal data without their consent and hadn’t given them any control over how it is used.
In an interview last week with my colleague Will Douglas Heaven, OpenAI’s chief technology officer, Mira Murati, said the incognito mode was something that the company had been “taking steps toward iteratively” for a couple of months and had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were not related to the EU’s GDPR investigations.
“We want to put the users in the driver’s seat when it comes to how their data is used,” says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.
But despite what OpenAI says, Daniel Leufer, a senior policy analyst at the digital rights group Access Now, reckons that GDPR—and the EU’s pressure—has played a role in forcing the firm to comply with the law. In the process, it has made the product better for everyone around the world.
“Good data protection practices make products safer [and] better [and] give users real agency over their data,” he said on Twitter.
A lot of people dunk on the GDPR as an innovation-stifling bore. But as Leufer points out, the law shows companies how they can do things better when they are forced to do so. It’s also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.
Other experiments in AI to grant users more control show that there is clear demand for such features.
Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion.
Since December, around 5,000 people and several large online art and image platforms, such as Art Station and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means that their images are not going to be used in the next version of Stable Diffusion.
Dryhurst thinks people should have the right to know whether or not their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.
“Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist,” he says.
Deeper Learning
Geoffrey Hinton tells us why he’s now scared of the tech he helped build
Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. MIT Technology Review’s senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google.
Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.
And oh boy did he have a lot to say. “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he told Will. “How do we survive that?” Read more from Will Douglas Heaven here.
Even Deeper Learning
A chatbot that asks questions could help you spot when it makes no sense
AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts and have inconsistent logic that can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information.
Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions instead of presenting information as statements helped people notice when the AI’s logic didn’t add up. A system that asked questions also made people feel more in charge of decisions made with AI, and researchers say it can reduce the risk of overdependence on AI-generated information. Read more from me here.
Bits and Bytes
Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make stuff up, and they are ridiculously easy to hack into. Rolling these technologies out in one of the highest-stakes sectors is a disaster waiting to happen. (Vice)
Hugging Face launched an open-source alternative to ChatGPT
HuggingChat works in the same way as ChatGPT, but it is free to use and for people to build their own products on. Open-source versions of popular AI models are on a roll—earlier this month Stability.AI, creator of the image generator Stable Diffusion, also launched an open-source version of an AI chatbot, StableLM.
How Microsoft’s Bing chatbot came to be and where it’s going next
Here’s a nice behind-the-scenes look at Bing’s birth. I found it interesting that to generate answers, Bing does not always use OpenAI’s GPT-4 language model but Microsoft’s own models, which are cheaper to run. (Wired)
AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs copying the styles of popular artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, scraping data off the internet, and what constitutes fair use. (The Verge)
The National Institute of Standards and Technology is accepting comments on the revised document through July 14.
Updates to federal guidelines for protecting sensitive, unclassified information were unveiled yesterday, emphasizing clarifications in security requirements to better safeguard critical data.
Published by the National Institute of Standards and Technology, the revised draft’s changes impact NIST SP 800-171 Rev. 3, which is intended to help federal contractors understand how to protect Controlled Unclassified Information that they may handle when working with government entities.
Changes in the draft guidance for CUI include removing ambiguity and defining parameters in implementing cybersecurity protocols; increasing flexibility in selected security requirements; and assisting organizations to mitigate risk.
“Many of the newly added requirements specifically address threats to CUI, which recently has been a target of state-level espionage,” said Ron Ross, one of the publication’s authors and a NIST fellow. “We want to implement and maintain state-of-the-practice defenses because the threat space is changing constantly. We tried to express those requirements in a way that shows contractors what we do and why in federal cybersecurity. There’s more useful detail now with less ambiguity.”
Some of the digital assets NIST’s document covers include personal health information, critical energy infrastructure data and intellectual property. Safeguarding data that contributes to the U.S.’s critical infrastructures has been a chief priority for federal, state and local governments amid growing digital threats, with NIST a key player in helping fortify the nation’s digital security.
“Protecting CUI, including intellectual property, is critical to the nation’s ability to innovate—with far-reaching implications for our national and economic security,” Ross said. “We need to have safeguards that are sufficiently strong to do the job.”
NIST is accepting public feedback on the draft guidance until July 14. NIST stated that it anticipated introducing one more draft version of the SP 800-171 Rev. 3 before publishing a final version in 2024.
On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, DOD CIO John Sherman said.
The Pentagon’s chief information officer will take the reins of the department’s efforts to operationalize 5G communications technology for warfighters this fall, CIO John Sherman said Thursday.
For the last few years, the Office of the Undersecretary of Defense for Research and Engineering has been working to adopt 5G and future-generation wireless network technologies across the department. On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, Sherman announced at the DefenseTalks conference hosted by DefenseScoop.
“We’ve already been working left-seat and right-seat with research and engineering on this,” he said. “But we’ve got the lead as of Oct. 1 on the 5G pilots that are underway at the numerous DOD installations.”
In 2020, the Pentagon awarded contracts to multiple prime contractors to set up 5G and “FutureG” test bed projects at different military bases across the country. Each site experiments with a different way the department can utilize the technology, including creating smart warehouses enabled by 5G and bi-directional spectrum sharing.
But adopting open radio access network (O-RAN) technology could be key for the Pentagon’s 5G and FutureG efforts, as it would allow the department to “break open that black box into different components,” he said.
Not only would the Pentagon’s focus on O-RAN bring new competition to the market and incentivize new innovation, the open architecture approach would also allow the department to experiment with new features in wireless communications, such as zero-trust security features, he noted.
As we set out to build an insurmountable S&T lead over ambitious competitors, the DoD will lean on the core principles outlined in the National Defense Science and Technology Strategy and three lines-of-efforts as the cornerstones of our work. The LOEs – a focus on the joint mission, creating and fielding capabilities at speed and scale, and ensuring the foundations for research and development – will guide the capabilities we explore, our strategic investments, and our science and technology research priorities.
A Focus on the Joint Mission
The National Defense Strategy emphasizes that future military operations must be joint, and directs us to deliver appropriate, asymmetrical capabilities to the Joint Force. Recognizing that we operate in a resource-constrained environment, the DoD will take a rigorous, analytical approach to our investments, and avoid getting mired in wasteful technology races with our competitors. Consistent joint experimentation will accelerate our capacity to convert the joint warfighting concept to capabilities.
Create and Field Capabilities at Speed and Scale
Technological innovations and cutting-edge capabilities are useless to the Joint Force when they languish in research labs. We cannot let the perfect be the enemy of the good, and we cannot let outdated processes prevent collaboration. To succeed, the DoD will need to pursue new and novel ways to bridge the valleys of death in defense innovation. We will foster a more vibrant defense innovation ecosystem, strengthening collaboration and communication with allies and non-traditional partners in industry and academia. We will operate with a sense of urgency and pursue the continuous transition of capabilities to our warfighters.
Ensuring the Foundations for Research and Development
To thrive in the era of competition, we must provide those at the forefront of S&T with world-class tools and facilities, and create an environment in which the DoD is the first choice for the world’s brightest scientific minds. By enhancing our physical and digital infrastructure, creating robust professional development programs for the workforce of today, and investing in pipelines for the workforce of tomorrow, we will ensure America is ready for the challenges of decades to come.
NIST Director Laurie Locascio discussed her agency’s plans before a House hearing, revealing major focuses on critical and emerging technologies.
Emerging technologies feature heavily in the FY2024 budget request from the National Institute of Standards and Technology. Director Laurie Locascio discussed her agency’s budget plans before the House Committee on Science, Space and Technology Wednesday morning, covering a wide range of spending plans, including improving outdated research facilities, boosting domestic manufacturing efforts, investing in cybersecurity education and developing guidance for emerging tech systems.
A total of $995 million in science and technical funding was requested for the agency’s 2024 budget, including $68.7 million intended for new research programs. Among the emerging technologies NIST intends to focus on developing throughout the next year are artificial intelligence systems, quantum technologies and biotechnology.
“It is essential that we remain in a strong leadership position as these technologies are major drivers of economic growth,” Locascio testified.
Chief among the priority areas for AI will be developing standards and benchmarks to further aid in the technologies’ responsible development. Part of this will involve working with ally nations to foster a shared set of technical standards that promote common understanding of how AI systems should––and should not––be used. This process is crucial to keeping barriers in the international trade ecosystem low.
“The budget will provide new resources to expand the capabilities for benchmarking and evaluation of AI systems to ensure that the US can lead in AI innovation, while ensuring that we responsibly address the risks of this rapidly developing technology,” she said.
Increasing public involvement is another pillar in NIST’s strategy to help cultivate a roadmap surrounding trustworthy AI development. Locascio said that in 2024, NIST aims to apply the Artificial Intelligence Risk Management Framework’s provisions to gauge risks associated with generative AI systems like ChatGPT, as well as create a public working group to provide input specifically on the generative branch of AI technologies.
Locascio also talked about the agency’s plans for quantum information sciences. Of the FY2024 proposed budget, $220 million is intended for research specifically in the quantum technology field, focusing on fundamental measurement research and post-quantum cryptography development.
“We have a number of different activities that we’re doing but related to workforce development of the new cybersecurity framework and security for post quantum encryption algorithms,” she confirmed.
NIST also aims to work on developing similar standards for nascent biotechnologies. She highlighted gene editing as one of the standout topics within the biotechnology field, and discussed ongoing private sector engagement with NIST to guide the U.S. biotech sector on its product development, such as antibody-based treatments.
“Our goal is to make sure that when you’re editing the genome, you know what you did…and…you know how to anticipate the outcome,” Locascio said.
Emerging and critical technologies have steadily risen as a priority item within the Biden administration’s docket. Last week, the White House unveiled its inaugural National Standards Strategy––developed with the help of NIST––to promote its leadership in regulating how these technologies will be used in a domestic and global context.
“We’re really at a place where we need to be proactive in critical and emerging technologies, make sure that we are at the table promoting U.S. innovation and our competitive technologies and bring them to the table in the international standards forum, and represent leadership positions there as well,” Locascio said. “So NIST is really at the forefront of that.”
Next-generation technologies are poised to cause society-shaking shifts at unprecedented speed and scale. Generative AI, quantum computing, blockchain, and other technologies present novel ethical problems that “business as usual” just can’t handle. To meet these challenges, leaders need to do something different: They must talk about ethics in direct, clear terms, and they must not only define their ethical nightmares but also explain how they’re going to prevent them. To prepare for the ethical challenges ahead, companies need to ensure their senior leaders understand these technologies and are aligned on the ethical risks, perform a gap and feasibility analysis, build a strategy, and implement it. All of this requires an important shift from thinking of our digital ethical nightmares as a technology problem to a leadership problem.
Facebook, which was created in 2004, amassed 100 million users in just four and a half years. The speed and scale of its growth was unprecedented. Before anyone had a chance to understand the problems the social media network could cause, it had grown into an entrenched behemoth.
In 2015, the platform’s role in violating citizens’ privacy and its potential for political manipulation was exposed by the Cambridge Analytica scandal. Around the same time, in Myanmar, the social network amplified disinformation and calls for violence against the Rohingya, an ethnic minority in the country, which culminated in a genocide that began in 2016. In 2021, the Wall Street Journal reported that Instagram, which had been acquired by Facebook in 2012, had conducted research showing that the app was toxic to the mental health of teenage girls.
Defenders of Facebook say that these impacts were unintended and unforeseeable. Critics claim that, instead of moving fast and breaking things, social media companies should have proactively avoided ethical catastrophe. But both sides agree that new technologies can give rise to ethical nightmares, and that should make business leaders — and society — very, very nervous.
We are at the beginning of another technological revolution, this time with generative AI — models that can produce text, images, and more. It took just two months for OpenAI’s ChatGPT to pass 100 million users. Within six months of its launch, Microsoft released ChatGPT-powered Bing; Google demoed its latest large language model (LLM), Bard; and Meta released LLaMA. ChatGPT-5 will likely be here before we know it. And unlike social media, which remains largely centralized, this technology is already in the hands of thousands of people. Researchers at Stanford recreated ChatGPT for about $600 and made their model, called Alpaca, open-source. By early April, more than 2,400 people had made their own versions of it.
While generative AI has our attention right now, other technologies coming down the pike promise to be just as disruptive. Quantum computing will make today’s data crunching look like kindergarteners counting on their fingers. Blockchain technologies are being developed well beyond the narrow application of cryptocurrency. Augmented and virtual reality, robotics, gene editing, and too many others to discuss in detail also have the potential to reshape the world for good or ill.
If precedent serves, the companies ushering these technologies into the world will take a “let’s just see how this goes” approach. History also suggests this will be bad for the unsuspecting test subjects: the general public. It’s hard not to worry that, alongside the benefits they’ll offer, the leaps in technology will come with a raft of societal-level harm that we’ll spend the next 20-plus years trying to undo.
It’s time for a new approach. Companies that develop these technologies need to ask: “How do we develop, apply, and monitor them in ways that avoid worst-case scenarios?” And companies that procure these technologies and, in some cases, customize them (as businesses are doing now with ChatGPT) face an equally daunting challenge: “How do we design and deploy them in a way that keeps people (and our brand) safe?”
In this article, I will try to convince you of three things: First, that businesses need to explicitly identify the risks posed by these new technologies as ethical risks or, better still, as potential ethical nightmares. Ethical nightmares aren’t subjective. Systemic violations of privacy, the spread of democracy-undermining misinformation, and serving inappropriate content to children are on everyone’s “that’s terrible” list. I don’t care which end of the political spectrum your company falls on — if you’re Patagonia or Hobby Lobby — these are our ethical nightmares.
Second, that by virtue of how these technologies work — what makes them tick — the likelihood of realizing ethical and reputational risks has massively increased.
Third, that business leaders are ultimately responsible for this work, not technologists, data scientists, engineers, coders, or mathematicians. Senior executives are the ones who determine what gets created, how it gets created, and how carefully or recklessly it is deployed and monitored.
These technologies introduce daunting possibilities, but the challenge of facing them isn’t that complicated: Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them. The first step is to get comfortable talking about ethics.
Business Leaders Can’t Be Afraid to Say “Ethics”
After 20 years in academia, 10 of them spent researching, teaching, and publishing on ethics, I attended my first nonacademic conference in 2018. It was sponsored by a Fortune 50 financial services company, and the theme was “sustainability.” Having taught courses on environmental ethics, I thought it would be interesting to see how corporations think about their responsibilities vis-à-vis their environmental impacts. When I got there, I found presentations on educating women around the globe, lifting people out of poverty, and contributing to the mental and physical health of all. Few were talking about the environment.
It took me an embarrassingly long time to figure out that in the corporate and nonprofit worlds, “sustainability” doesn’t mean “practices that don’t destroy the environment for future generations.” Instead it means “practices in pursuit of ethical goals” and an assertion that those practices promote the bottom line. As for why businesses didn’t simply say “ethics,” I couldn’t understand.
This behavior — of replacing the word “ethics” with some other, less precise term — is widespread. There’s Environmental, Social, and Governance (ESG) investing, which boils down to investing in companies that avoid ethical risks (emissions, diversity, political actions, and the like) on the theory that those practices protect profits. Some companies claim to be “values driven,” “mission driven,” or “purpose driven,” but these monikers rarely have anything to do with ethics. “Customer obsessed” and “innovation” aren’t ethical values; a purpose or mission can be completely amoral (putting immoral to the side). So-called “stakeholder capitalism” is capitalism tempered by a vague commitment to the welfare of unidentified stakeholders (as though stakeholder interests do not conflict). Finally, the world of AI ethics has grown tremendously over the last five years or so. Corporations heard the call, “We want AI ethics!” Their distorted response is, “Yes, we, too, are for responsible AI!”
Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively. Does sustainability advise against using personal data for the purposes of targeted marketing? When does using a black box model violate ESG criteria? What happens if your mission of connecting people also happens to connect white nationalists?
Let’s focus on the move from “AI ethics” to “responsible AI” as a case study on the problematic impacts of shifting language. First, when business leaders talk about “responsible” and “trustworthy” AI, they focus on a broad set of issues that include cybersecurity, regulation, legal concerns, and technical or engineering risks. These are important, but the end result is that technologists, general counsels, risk officers, and cybersecurity engineers focus on areas they are already experts on, which is to say, everything except ethics.
Second, when it comes to ethics, leaders get stuck at very high-level and abstract principles or values — on concepts such as fairness and respect for autonomy. Since this is only a small part of the overall “responsible AI” picture, companies often fail to drill down into the very real, concrete ways these questions play out in their products. Ethical nightmares that outstrip outdated regulations and laws are left unidentified, and just as probable as they were before a “responsible AI” framework is deployed.
Third, the focus on identifying and pursuing “responsible AI” gives companies a vague goal with vague milestones. AI statements from organizations say things like, “We are for transparency, explainability, and equity.” But no company is transparent about everything with everyone (nor should it be); not every AI model needs to be explainable; and what counts as equitable is highly contentious. No wonder, then, that the companies that “commit” to these values quickly abandon them. There are no goals here. No milestones. No requirements. And there’s no articulation of what failure looks like.
But when AI ethics fail, the results are specific. Ethical nightmares are vivid: “We discriminated against tens of thousands of people.” “We tricked people into giving up all that money.” “We systematically engaged in violating people’s privacy.” In short, if you know what your ethical nightmares are then you know what ethical failure looks like.
Where Digital Nightmares Come From
Understanding how emerging technologies work — what makes them tick — will help explain why the likelihood of realizing ethical and reputational risks has massively increased. I’ll focus on three of the most important ones.
Artificial intelligence.
Let’s start with a technology that has taken over the headlines: artificial intelligence, or AI. The vast majority of AI out there is machine learning (ML).
“Machine learning” is, at its simplest, software that learns by example. And just as people learn to discriminate on the basis of race, gender, ethnicity, or other protected attributes by following examples around them, software does, too.
Say you want to train your photo recognition software to recognize pictures of your dog, Zeb. You give that software lots of examples and tell it, “That’s Zeb.” The software “learns” from those examples, and when you take a new picture of your dog, it recognizes it as a picture of Zeb and labels the photo “Zeb.” If it’s not a photo of Zeb, it will label the file “not Zeb.” The process is the same if you give your software examples of what “interview-worthy” résumés look like. It will learn from those examples and label new résumés as being “interview-worthy” or “not interview-worthy.” The same goes for applications to university, or for a mortgage, or for parole.
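The learning loop is the same whether the labels are “Zeb / not Zeb” or “interview-worthy / not interview-worthy.” Here is a minimal sketch using scikit-learn; the toy résumés and historical labels are invented purely for illustration.

```python
# Toy sketch: a classifier learns whatever pattern the labeled examples contain,
# including any bias baked into who was labeled "interview-worthy" in the past.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "10 years backend engineering, led platform team",
    "captain of chess club, self-taught python, bootcamp graduate",
    "women's coding society president, ML research internship",
    "varsity lacrosse, ivy league, consulting internship",
]
labels = [1, 0, 0, 1]   # 1 = interviewed in the past, 0 = rejected (invented data)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

new_resume = ["chess club treasurer, ML research internship, python"]
prediction = model.predict(vectorizer.transform(new_resume))
print("interview-worthy" if prediction[0] == 1 else "not interview-worthy")
# The model never sees anyone's intentions, only the historical labels.
```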
In each case, the software is recognizing and replicating patterns. The problem is that sometimes those patterns are ethically objectionable. For instance, if the examples of “interview-worthy” résumés reflect historical or contemporary biases against certain races, ethnicities, or genders, then the software will pick up on it. Amazon once built a résumé-screening AI that learned to penalize résumés from women, because its training examples reflected a male-dominated hiring history. And to determine parole, the U.S. criminal justice system has used prediction algorithms that replicated historical biases against Black defendants.
It’s crucial to note that the discriminatory pattern can be identified and replicated independently of the intentions of the data scientists and engineers programming the software. In fact, data scientists at Amazon identified the problem with their AI mentioned above and tried to fix it, but they couldn’t. Amazon decided, rightly, to scrap the project. But had it been deployed, an unwitting hiring manager would have used a tool with ethically discriminatory operations, regardless of that person’s intentions or the organization’s stated values.
Discriminatory impacts are just one ethical nightmare to avoid with AI. There are also privacy concerns, the danger of AI models (especially large language models like ChatGPT) being used to manipulate people, the environmental cost of the massive computing power required, and countless other use-case-specific risks.
Quantum computing.
The details of quantum computers are exceedingly complicated, but for our purposes, we need to know only that they are computers that can process a tremendous amount of data. They can perform calculations in minutes or even seconds that would take today’s best supercomputers thousands of years. Companies like IBM and Google are pouring billions of dollars into this hardware revolution, and we’re poised to see increased quantum computer integration into classical computer operations every year.
Quantum computers throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI. Essentially, in many cases, we don’t know why an AI tool makes the predictions that it does. When the photo software looks at all those pictures of Zeb, it’s analyzing those pictures at the pixel level. More specifically, it’s identifying all those pixels and the thousands of mathematical relations among those pixels that constitute “the Zeb pattern.” Those mathematical Zeb patterns are phenomenally complex — too complex for mere mortals to understand — which means that we don’t understand why it (correctly or incorrectly) labeled this new photo “Zeb.” And while we might not care about getting explanations in the case of Zeb, if the software says to deny someone an interview (or a mortgage, or insurance, or admittance) then we might care quite a bit.
Quantum computing makes black box models truly impenetrable. Right now, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. And because quantum computers can process trillions of data points, boiling that process down to an explanation we can understand — while retaining confidence that the explanation is more or less true — becomes exceedingly difficult.
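To see why those explanations are simplifications, consider one common approach: train a simple “surrogate” model to mimic a complex one, then inspect the surrogate instead. The sketch below uses scikit-learn on synthetic data; the models and settings are stand-ins chosen for illustration, not a recipe for explaining any particular system.

```python
# A sketch of a global surrogate explanation: a small, readable decision tree
# is trained to imitate a more complex "black box" model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
black_box_labels = black_box.predict(X)

# The surrogate is deliberately simple (a depth-3 tree a person can read),
# and it is trained to reproduce the black box's predictions, not the truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_labels)

# "Fidelity": how often the readable explanation agrees with the real model.
fidelity = accuracy_score(black_box_labels, surrogate.predict(X))
print(f"Surrogate agrees with the black box {fidelity:.1%} of the time")
```

Whatever the surrogate gets wrong is the gap between the explanation and the model, and the more complex the underlying system, the wider that gap tends to become.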
That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What are the appropriate benchmarks for performance? What do we do if the system appears to be broken or is acting very strangely? Do we acquiesce to the inscrutable outputs of the machine that has proven reliable previously? Or do we eschew those outputs in favor of our comparatively limited but intelligible human reasoning?
Blockchain.
Suppose you and I and a few thousand of our friends each have a magical notebook with the following features: When someone writes on a page, that writing simultaneously appears in everyone else’s notebook. Nothing written on a page can ever be erased. The information on the pages and the order of the pages is immutable; no one can remove or rearrange the pages. A private, passphrase-protected page lists your assets — money, art, land titles — and when you transfer an asset to someone, both your page and theirs are simultaneously and automatically updated.
At a very high level, this is how blockchain works. Each blockchain follows a specific set of rules that are written into its code, and changes to those rules are decided by whoever runs the blockchain. But just like any other kind of management, the quality of a blockchain’s governance depends on answering a string of important questions. For example: What data belongs on the blockchain, and what doesn’t? Who decides what goes on it? What are the criteria for inclusion? Who does the monitoring? What’s the protocol if an error is found in the blockchain’s code? Who decides whether a structural change should be made to the blockchain? How are voting rights and power distributed?
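To make the immutability idea concrete, here is a toy Python sketch of a hash-linked ledger. The record format and helper functions are invented for illustration and bear little resemblance to how any production blockchain actually works.

```python
# A toy hash-linked ledger: each entry stores a hash of the previous entry,
# so quietly editing any past record breaks every link that comes after it.
import hashlib
import json

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    prev = entry_hash(chain[-1]) if chain else "genesis"
    chain.append({"record": record, "prev_hash": prev})

def is_intact(chain):
    return all(
        chain[i]["prev_hash"] == entry_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append(ledger, "Alice transfers 5 coins to Bob")
append(ledger, "Bob transfers 2 coins to Carol")
print(is_intact(ledger))   # True: every entry still links to the one before it

ledger[0]["record"] = "Alice transfers 500 coins to Bob"  # tamper with history
print(is_intact(ledger))   # False: the next entry's recorded hash no longer matches
```

Notice that the code enforces only one thing, that history can be appended to but not quietly rewritten; nothing in it answers the governance questions above, which is exactly why people have to.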
Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their will, or having false information loaded onto their asset pages in a way that enables deception and fraud.
Blockchain is most often associated with financial services, but every industry stands to integrate some kind of blockchain solution, each of which comes with particular pitfalls. For instance, we might use blockchain to store, access, and distribute patient data, the inappropriate handling of which could lead to the ethical nightmare of widespread privacy violations. Things seem even more perilous when we recognize that there isn’t just one type of blockchain, and that there are different ways of governing one. And because the basic rules of a given blockchain are very hard to change, early decisions about which blockchain to develop and how to maintain it are extremely important.
These Are Business, Not (Only) Technology, Problems
Companies’ ability to adopt and use these technologies as they evolve will be essential to staying competitive. As such, leaders will have to ask and answer questions such as:
What constitutes an unfair or unjust or discriminatory distribution of goods and services?
Is using a black box model acceptable in this context?
Is the chatbot engaging in ethically unacceptable manipulation of users?
Is the governance of this blockchain fair, reasonable, and robust?
Is this augmented reality content appropriate for the intended audience?
Is this our organization’s responsibility or is it the user’s or the government’s?
Does this place an undue burden on users?
Is this inhumane?
Might this erode confidence in democracy when used or abused at scale?
Why does this responsibility fall to business leaders as opposed to, say, the technologists who are tasked with deploying the new tools and systems? After all, most leaders aren’t fluent in the coding and the math behind software that learns by example, the quantum physics behind quantum computers, and the cryptography that underlies blockchain. Shouldn’t the experts be in charge of weighty decisions like these?
The thing is, these aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that business leaders — guided by relevant subject matter experts — are charged with answering. Off-loading that responsibility to coders, engineers, and IT departments is unfair to the people in those roles and unwise for the organization. It’s understandable that leaders might find this task daunting, but there’s no question that they’re the ones responsible.
The Ethical Nightmare Challenge
I’ve tried to convince you of three claims. First, that leaders and organizations need to explicitly identify their ethical nightmares springing from new technologies. Second, that a significant source of risk lies in how these technologies work. And third, that it’s the job of senior executives to guide their respective organizations on ethics.
These claims support a conclusion: Organizations that leverage digital technologies need to address ethical nightmares before they hurt people and brands. I call this the “ethical nightmare challenge.” To overcome it, companies need to create an enterprise-wide digital ethical risk program. The first part of the program — what I call the content side — asks: What are the ethical nightmares we’re trying to avoid, and what are their potential sources? The second part of the program — what I call the structure side — answers the question: How do we systematically and comprehensively ensure those nightmares don’t become a reality?
Content.
Ethical nightmares can be articulated with varying levels of detail and customization. Your ethical nightmares are partly informed by the industry you’re in, the kind of organization you are, and the kinds of relationships you need to have with your clients, customers, and other stakeholders for things to go well. For instance, if you’re a health care provider that has clinicians using ChatGPT or another LLM to make diagnoses and treatment recommendations, then your ethical nightmare might include widespread false recommendations that your people lack the training to spot. Or if your chatbot is undertrained on information related to particular races and ethnicities, and neither the developers of the chatbot nor the clinicians know this, then your ethical nightmare would be systematically giving false diagnoses and bad treatments to those who have already been discriminated against. If you’re a financial services company that uses blockchain to transact on behalf of clients, then one ethical nightmare might be having no ability to correct mistakes in the code — a function of ill-defined governance of the blockchain. That could mean, for instance, being unable to reverse fraudulent transfers.
Notice that articulating nightmares means naming details and consequences. The more specific you can get — which is a function of your knowledge of the technologies, your industry, your understanding of the various contexts in which your technologies will be deployed, your moral imagination, and your ability to think through the ethical implications of business operations — the easier it will be to build the appropriate structure to control for these things.
Structure.
While the methods for identifying the nightmares hold across organizations, the strategies for creating appropriate controls vary depending on the size of the organization, existing governance structures, risk appetites, management culture, and more. Companies’ forays into this realm can be classified as either formal or informal. In an ideal world, every organization would take the formal approach. However, factors like limited time and resources, how quickly a company (rightly or wrongly) believes it will be affected by digital technologies, and business necessities in an unpredictable market sometimes make it reasonable to choose the informal approach. In those cases, the informal approach should be seen as a first step, one that is better than nothing at all.
The formal approach is systematic and comprehensive, and it takes a good deal of time and resources to build. In short, it centers on the ability to create and execute an enterprise-wide digital ethical risk strategy. Broadly speaking, it involves four steps.
Education and alignment. First, all senior leaders need to understand the technologies enough that they can agree on what constitutes the ethical nightmares of the organization. Knowledge and the alignment of leaders are prerequisites for building and implementing a robust digital ethical risk strategy.
This education can be achieved by executive briefings, workshops, and seminars. But it should not require — or try to teach — math or coding. This process is for non-technologists and technologists alike to wrap their heads around what risks their company may face. Moreover, it must be about the ethical nightmares of the organization, not sustainability or ESG criteria or “company values.”
Gap and feasibility analyses. Before building a strategy, leaders need to know what their organization looks like and how likely their nightmares actually are to happen. As such, the second step consists of performing gap and feasibility analyses of where your organization is now; how far away it is from sufficiently safeguarding itself against an ethical nightmare unfolding; and what it will take in terms of people, processes, and technology to close those gaps.
To do this, leaders must identify where digital technologies currently sit within their organization and where they are likely to be designed or procured in the future. After all, if you don’t know how the technologies work, how they’re used, or where they’re headed, you’ll have no hope of avoiding the nightmares.
Then a variety of questions present themselves:
What policies are in place that address or fail to address your ethical nightmares?
What processes are in place to identify ethical nightmares? Do they need to be augmented? Are new processes required?
What level of awareness do employees have of these digital ethical risks? Are they capable of detecting signs of problems early? Does the culture make it safe for them to speak up about possible red flags?
When an alarm is sounded, who responds, and on what grounds do they decide how to move forward?
How do you operationalize digital ethical risk assessment and harmonize it with existing enterprise-risk categories and operations?
The answers to questions like these will vary wildly across organizations. It’s one reason why digital ethical risk strategies are difficult to create and implement: They must be customized to integrate with existing governance structures, policies, processes, workflows, tools, and personnel. It’s easy to say “everyone needs a digital ethical risk board,” on the model of the institutional review boards that arose in medicine to mitigate the ethical risks around research on human subjects. But it’s not possible to continue with “and every one of them should look like this, act like this, and interact with other groups in the business like this.” Here, good strategy does not come from a one-size-fits-all solution.
Strategy creation. The third step in the formal approach is building a corporate strategy in light of the gap and feasibility analyses. This includes, among other things, refining goals and objectives, deciding on an approach to metrics and KPIs (for measuring both compliance with the digital ethical risk program and its impact), designing a communications plan, and identifying key drivers of success for implementation.
Cross-functional involvement is needed. Leaders from technology, risk, compliance, general counsel, and cybersecurity should all be involved. Just as important, direction should come from the board and the CEO. Without their robust buy-in and encouragement, the program will get watered down.
Implementation. The fourth and final step is implementation of the strategy, which entails reconfiguring workflows, training, support, and ongoing monitoring, including quality assurance and quality improvement.
For example, new procedures should be customized by business domain or by roles to harmonize them with existing procedures and workflows. These procedures should clearly define the roles and responsibilities of different departments and individuals and establish clear processes for identifying, reporting, and addressing ethical issues. Additionally, new workflows need to strike an appropriate balance between human and automated work, which will depend on the kinds of tasks and the relative risks involved, and they need to establish human oversight of automated flows.
The informal approach, by contrast, usually involves the following: leaders providing education on, and alignment around, the organization’s ethical nightmares; entrusting executives in distinct units of the business (such as HR, marketing, product lines, or R&D) with identifying the processes needed to complete an ethical nightmare check; and creating or leveraging an existing (ethical) risk board to advise various personnel, whether on individual projects or at a more institutional scale, when ethical risks are detected.