On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, DOD CIO John Sherman said.
The Pentagon’s chief information officer will take the reins of the department’s efforts to operationalize 5G communications technology for warfighters this fall, CIO John Sherman said Thursday.
For the last few years, the Office of the Undersecretary of Defense for Research and Engineering has been working to adopt 5G and future-generation wireless network technologies across the department. On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, Sherman announced at the DefenseTalks conference hosted by DefenseScoop.
“We’ve already been working left-seat and right-seat with research and engineering on this,” he said. “But we’ve got the lead as of Oct. 1 on the 5G pilots that are underway at the numerous DOD installations.”
In 2020, the Pentagon awarded contracts to multiple prime contractors to set up 5G and “FutureG” test bed projects at different military bases across the country. Each site experiments with a different way the department can utilize the technology, including creating smart warehouses enabled by 5G and bi-directional spectrum sharing.
But adopting open radio access network (O-RAN) technology could be key for the Pentagon’s 5G and FutureG efforts, as it would allow the department to “break open that black box into different components,” Sherman said.
Not only would the Pentagon’s focus on O-RAN bring new competition to the market and incentivize innovation, but the open-architecture approach would also allow the department to experiment with new capabilities in wireless communications, such as zero-trust security, he noted.
As we set out to build an insurmountable S&T lead over ambitious competitors, the DoD will lean on the core principles outlined in the National Defense Science and Technology Strategy and three lines of effort (LOEs) as the cornerstones of our work. The LOEs – a focus on the joint mission, creating and fielding capabilities at speed and scale, and ensuring the foundations for research and development – will guide the capabilities we explore, our strategic investments, and our science and technology research priorities.
A Focus on the Joint Mission
The National Defense Strategy emphasizes that future military operations must be joint, and directs us to deliver appropriate, asymmetrical capabilities to the Joint Force. Recognizing that we operate in a resource-constrained environment, the DoD will take a rigorous, analytical approach to our investments, and avoid getting mired in wasteful technology races with our competitors. Consistent joint experimentation will accelerate our capacity to convert the joint warfighting concept to capabilities.
Create and Field Capabilities at Speed and Scale
Technological innovations and cutting-edge capabilities are useless to the Joint Force when they languish in research labs. We cannot let the perfect be the enemy of the good, and we cannot let outdated processes prevent collaboration. To succeed, the DoD will need to pursue new and novel ways to bridge the valleys of death in defense innovation. We will foster a more vibrant defense innovation ecosystem, strengthening collaboration and communication with allies and non-traditional partners in industry and academia. We will operate with a sense of urgency and pursue the continuous transition of capabilities to our warfighters.
Ensuring the Foundations for Research and Development
To thrive in the era of competition, we must provide those at the forefront of S&T with world-class tools and facilities, and create an environment in which the DoD is the first choice for the world’s brightest scientific minds. By enhancing our physical and digital infrastructure, creating robust professional development programs for the workforce of today, and investing in pipelines for the workforce of tomorrow, we will ensure America is ready for the challenges of decades to come.
NIST Director Laurie Locascio discussed her agency’s plans before a House hearing, revealing major focuses on critical and emerging technologies.
Emerging technologies feature heavily in the FY2024 budget request from the National Institute of Standards and Technology.
Director Laurie Locascio discussed her agency’s budget plans before the House Committee on Science, Space and Technology Wednesday morning, covering a wide range of spending plans, including improving outdated research facilities, boosting domestic manufacturing efforts, investing in cybersecurity education and developing guidance for emerging tech systems.
A total of $995 million in science and technical funding was requested for the agency’s 2024 budget, including $68.7 million intended for new research programs. Among the emerging technologies NIST intends to focus on developing throughout the next year are artificial intelligence systems, quantum technologies and biotechnology.
“It is essential that we remain in a strong leadership position as these technologies are major drivers of economic growth,” Locascio testified.
Chief among the priority areas for AI will be developing standards and benchmarks to further aid in the technologies’ responsible development. Part of this will involve working with ally nations to foster a shared set of technical standards that promote common understanding of how AI systems should––and should not––be used. This process is crucial to keeping barriers in the international trade ecosystem low.
“The budget will provide new resources to expand the capabilities for benchmarking and evaluation of AI systems to ensure that the US can lead in AI innovation, while ensuring that we responsibly address the risks of this rapidly developing technology,” she said.
Increasing public involvement is another pillar in NIST’s strategy to help cultivate a roadmap surrounding trustworthy AI development. Locascio said that in 2024, NIST aims to apply the Artificial Intelligence Risk Management Framework’s provisions to gauge risks associated with generative AI systems like ChatGPT, as well as create a public working group to provide input specifically on the generative branch of AI technologies.
Locascio also talked about the agency’s plans for quantum information sciences. Of the proposed FY2024 budget, $220 million is intended for research specifically in the quantum technology field, focusing on fundamental measurement research and post-quantum cryptography development.
“We have a number of different activities that we’re doing but related to workforce development of the new cybersecurity framework and security for post quantum encryption algorithms,” she confirmed.
NIST also aims to develop similar standards for nascent biotechnologies. Locascio highlighted gene editing as one of the standout topics within the biotechnology field, and discussed ongoing private sector engagement with NIST to guide the U.S. biotech sector on its product development, such as antibody-based treatments.
“Our goal is to make sure that when you’re editing the genome, you know what you did…and…you know how to anticipate the outcome,” Locascio said.
Emerging and critical technologies have steadily risen as a priority item within the Biden administration’s docket. Last week, the White House unveiled its inaugural National Standards Strategy––developed with the help of NIST––to promote its leadership in regulating how these technologies will be used in a domestic and global context.
“We’re really at a place where we need to be proactive in critical and emerging technologies, make sure that we are at the table promoting U.S. innovation and our competitive technologies and bring them to the table in the international standards forum, and represent leadership positions there as well,” Locascio said. “So NIST is really at the forefront of that.”
Next-generation technologies are poised to cause society-shaking shifts at unprecedented speed and scale. Generative AI, quantum computing, blockchain, and other technologies present novel ethical problems that “business as usual” just can’t handle. To meet these challenges, leaders need to do something different: They must talk about ethics in direct, clear terms, and they must not only define their ethical nightmares but also explain how they’re going to prevent them. To prepare for the ethical challenges ahead, companies need to ensure their senior leaders understand these technologies and are aligned on the ethical risks, perform a gap and feasibility analysis, build a strategy, and implement it. All of this requires an important shift: from treating our digital ethical nightmares as a technology problem to treating them as a leadership problem.
Facebook, which was created in 2004, amassed 100 million users in just four and a half years. The speed and scale of its growth was unprecedented. Before anyone had a chance to understand the problems the social media network could cause, it had grown into an entrenched behemoth.
In 2015, the platform’s role in violating citizens’ privacy and its potential for political manipulation were exposed by the Cambridge Analytica scandal. Around the same time, in Myanmar, the social network amplified disinformation and calls for violence against the Rohingya, an ethnic minority in the country, which culminated in a genocide that began in 2016. In 2021, the Wall Street Journal reported that Instagram, which had been acquired by Facebook in 2012, had conducted research showing that the app was toxic to the mental health of teenage girls.
Defenders of Facebook say that these impacts were unintended and unforeseeable. Critics claim that, instead of moving fast and breaking things, social media companies should have proactively avoided ethical catastrophe. But both sides agree that new technologies can give rise to ethical nightmares, and that should make business leaders — and society — very, very nervous.
We are at the beginning of another technological revolution, this time with generative AI — models that can produce text, images, and more. It took just two months for OpenAI’s ChatGPT to pass 100 million users. Within six months of its launch, Microsoft released ChatGPT-powered Bing; Google demoed its latest large language model (LLM), Bard; and Meta released LLaMA. GPT-5 will likely be here before we know it. And unlike social media, which remains largely centralized, this technology is already in the hands of thousands of people. Researchers at Stanford recreated ChatGPT for about $600 and made their model, called Alpaca, open-source. By early April, more than 2,400 people had made their own versions of it.
While generative AI has our attention right now, other technologies coming down the pike promise to be just as disruptive. Quantum computing will make today’s data crunching look like kindergarteners counting on their fingers. Blockchain technologies are being developed well beyond the narrow application of cryptocurrency. Augmented and virtual reality, robotics, gene editing, and too many others to discuss in detail also have the potential to reshape the world for good or ill.
If precedent holds, the companies ushering these technologies into the world will take a “let’s just see how this goes” approach. History also suggests this will be bad for the unsuspecting test subjects: the general public. It’s hard not to worry that, alongside the benefits they’ll offer, the leaps in technology will come with a raft of societal-level harm that we’ll spend the next 20-plus years trying to undo.
It’s time for a new approach. Companies that develop these technologies need to ask: “How do we develop, apply, and monitor them in ways that avoid worst-case scenarios?” And companies that procure these technologies and, in some cases, customize them (as businesses are doing now with ChatGPT) face an equally daunting challenge: “How do we design and deploy them in a way that keeps people (and our brand) safe?”
In this article, I will try to convince you of three things: First, that businesses need to explicitly identify the risks posed by these new technologies as ethical risks or, better still, as potential ethical nightmares. Ethical nightmares aren’t subjective. Systemic violations of privacy, the spread of democracy-undermining misinformation, and serving inappropriate content to children are on everyone’s “that’s terrible” list. I don’t care which end of the political spectrum your company falls on — whether you’re Patagonia or Hobby Lobby — these are our ethical nightmares.
Second, that by virtue of how these technologies work — what makes them tick — the likelihood of realizing ethical and reputational risks has massively increased.
Third, that business leaders are ultimately responsible for this work, not technologists, data scientists, engineers, coders, or mathematicians. Senior executives are the ones who determine what gets created, how it gets created, and how carefully or recklessly it is deployed and monitored.
These technologies introduce daunting possibilities, but the challenge of facing them isn’t that complicated: Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them. The first step is to get comfortable talking about ethics.
Business Leaders Can’t Be Afraid to Say “Ethics”
After 20 years in academia, 10 of them spent researching, teaching, and publishing on ethics, I attended my first nonacademic conference in 2018. It was sponsored by a Fortune 50 financial services company, and the theme was “sustainability.” Having taught courses on environmental ethics, I thought it would be interesting to see how corporations think about their responsibilities vis-à-vis their environmental impacts. When I got there, I found presentations on educating women around the globe, lifting people out of poverty, and contributing to the mental and physical health of all. Few were talking about the environment.
It took me an embarrassingly long time to figure out that in the corporate and nonprofit worlds, “sustainability” doesn’t mean “practices that don’t destroy the environment for future generations.” Instead it means “practices in pursuit of ethical goals” and an assertion that those practices promote the bottom line. As for why businesses didn’t simply say “ethics,” I couldn’t understand.
This behavior — of replacing the word “ethics” with some other, less precise term — is widespread. There’s Environmental, Social, and Governance (ESG) investing, which boils down to investing in companies that avoid ethical risks (emissions, diversity, political actions, and the like) on the theory that those practices protect profits. Some companies claim to be “values driven,” “mission driven,” or “purpose driven,” but these monikers rarely have anything to do with ethics. “Customer obsessed” and “innovation” aren’t ethical values; a purpose or mission can be completely amoral (putting immoral to the side). So-called “stakeholder capitalism” is capitalism tempered by a vague commitment to the welfare of unidentified stakeholders (as though stakeholder interests do not conflict). Finally, the world of AI ethics has grown tremendously over the last five years or so. Corporations heard the call, “We want AI ethics!” Their distorted response is, “Yes, we, too, are for responsible AI!”
Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively. Does sustainability advise against using personal data for the purposes of targeted marketing? When does using a black box model violate ESG criteria? What happens if your mission of connecting people also happens to connect white nationalists?
Let’s focus on the move from “AI ethics” to “responsible AI” as a case study on the problematic impacts of shifting language. First, when business leaders talk about “responsible” and “trustworthy” AI, they focus on a broad set of issues that include cybersecurity, regulation, legal concerns, and technical or engineering risks. These are important, but the end result is that technologists, general counsels, risk officers, and cybersecurity engineers focus on areas they are already experts on, which is to say, everything except ethics.
Second, when it comes to ethics, leaders get stuck at very high-level and abstract principles or values — on concepts such as fairness and respect for autonomy. Since this is only a small part of the overall “responsible AI” picture, companies often fail to drill down into the very real, concrete ways these questions play out in their products. Ethical nightmares that outstrip outdated regulations and laws go unidentified, and they remain just as probable as they were before the “responsible AI” framework was deployed.
Third, the focus on identifying and pursuing “responsible AI” gives companies a vague goal with vague milestones. AI statements from organizations say things like, “We are for transparency, explainability, and equity.” But no company is transparent about everything with everyone (nor should it be); not every AI model needs to be explainable; and what counts as equitable is highly contentious. No wonder, then, that the companies that “commit” to these values quickly abandon them. There are no goals here. No milestones. No requirements. And there’s no articulation of what failure looks like.
But when AI ethics fail, the results are specific. Ethical nightmares are vivid: “We discriminated against tens of thousands of people.” “We tricked people into giving up all that money.” “We systematically engaged in violating people’s privacy.” In short, if you know what your ethical nightmares are then you know what ethical failure looks like.
Where Digital Nightmares Come From
Understanding how emerging technologies work — what makes them tick — will help explain why the likelihood of realizing ethical and reputational risks has massively increased. I’ll focus on three of the most important ones.
Artificial intelligence.
Let’s start with a technology that has taken over the headlines: artificial intelligence, or AI. The vast majority of AI out there is machine learning (ML).
“Machine learning” is, at its simplest, software that learns by example. And just as people learn to discriminate on the basis of race, gender, ethnicity, or other protected attributes by following examples around them, software does, too.
Say you want to train your photo recognition software to recognize pictures of your dog, Zeb. You give that software lots of examples and tell it, “That’s Zeb.” The software “learns” from those examples, and when you take a new picture of your dog, it recognizes it as a picture of Zeb and labels the photo “Zeb.” If it’s not a photo of Zeb, it will label the file “not Zeb.” The process is the same if you give your software examples of what “interview-worthy” résumés look like. It will learn from those examples and label new résumés as being “interview-worthy” or “not interview-worthy.” The same goes for applications to university, or for a mortgage, or for parole.
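To see how “learning by example” replicates patterns, consider the minimal sketch below. It uses scikit-learn on synthetic data; the features, numbers, and labels are invented purely for illustration, and this is not any of the systems discussed here, just a toy showing how a bias baked into labeled examples gets reproduced on new inputs.

```python
# Minimal sketch of software that "learns by example." Synthetic data;
# the features and labels are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Each "résumé" is boiled down to two made-up features: years of
# experience, and a binary proxy attribute (say, attended school X).
experience = rng.uniform(0, 20, n)
proxy = rng.integers(0, 2, n)

# Historical labels: suppose past reviewers favored the proxy group.
# The bias lives in the examples, not in any explicit rule.
interview_worthy = ((experience > 8) | (proxy == 1)).astype(int)

X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, interview_worthy)

# Two new candidates with identical experience, differing only in the
# proxy attribute. The learned pattern is replicated on them.
candidates = np.array([[6.0, 1], [6.0, 0]])
print(model.predict(candidates))  # likely [1 0]: the historical bias persists
```

No rule in the code says to favor the proxy group; the preference is learned entirely from the examples, which is exactly how discriminatory patterns slip in unnoticed.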
In each case, the software is recognizing and replicating patterns. The problem is that sometimes those patterns are ethically objectionable. For instance, if the examples of “interview-worthy” résumés reflect historical or contemporary biases against certain races, ethnicities, or genders, then the software will pick up on it. Amazon once built a résumé-screening AI that learned to discriminate against female applicants. And to determine parole, the U.S. criminal justice system has used prediction algorithms that replicated historical biases against Black defendants.
It’s crucial to note that the discriminatory pattern can be identified and replicated independently of the intentions of the data scientists and engineers programming the software. In fact, data scientists at Amazon identified the problem with their AI mentioned above and tried to fix it, but they couldn’t. Amazon decided, rightly, to scrap the project. But had it been deployed, an unwitting hiring manager would have used a tool with ethically discriminatory operations, regardless of that person’s intentions or the organization’s stated values.
Discriminatory impacts are just one ethical nightmare to avoid with AI. There are also privacy concerns, the danger of AI models (especially large language models like ChatGPT) being used to manipulate people, the environmental cost of the massive computing power required, and countless other use-case-specific risks.
Quantum computing.
The details of quantum computers are exceedingly complicated, but for our purposes, we need to know only that they are computers that can process a tremendous amount of data. They can perform calculations in minutes or even seconds that would take today’s best supercomputers thousands of years. Companies like IBM and Google are pouring billions of dollars into this hardware revolution, and we’re poised to see increased quantum computer integration into classical computer operations every year.
Quantum computers throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI. Essentially, in many cases, we don’t know why an AI tool makes the predictions that it does. When the photo software looks at all those pictures of Zeb, it’s analyzing those pictures at the pixel level. More specifically, it’s identifying all those pixels and the thousands of mathematical relations among those pixels that constitute “the Zeb pattern.” Those mathematical Zeb patterns are phenomenally complex — too complex for mere mortals to understand — which means that we don’t understand why it (correctly or incorrectly) labeled this new photo “Zeb.” And while we might not care about getting explanations in the case of Zeb, if the software says to deny someone an interview (or a mortgage, or insurance, or admittance) then we might care quite a bit.
Quantum computing makes black box models truly impenetrable. Right now, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. And because quantum computers can process trillions of data points, boiling that process down to an explanation we can understand — while retaining confidence that the explanation is more or less true — becomes vanishingly difficult.
That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What are the appropriate benchmarks for performance? What do we do if the system appears to be broken or is acting very strangely? Do we acquiesce to the inscrutable outputs of the machine that has proven reliable previously? Or do we eschew those outputs in favor of our comparatively limited but intelligible human reasoning?
Blockchain.
Suppose you and I and a few thousand of our friends each have a magical notebook with the following features: When someone writes on a page, that writing simultaneously appears in everyone else’s notebook. Nothing written on a page can ever be erased. The information on the pages and the order of the pages is immutable; no one can remove or rearrange the pages. A private, passphrase-protected page lists your assets — money, art, land titles — and when you transfer an asset to someone, both your page and theirs are simultaneously and automatically updated.
At a very high level, this is how blockchain works. Each blockchain follows a specific set of rules that are written into its code, and changes to those rules are decided by whoever runs the blockchain. But just like any other kind of management, the quality of a blockchain’s governance depends on answering a string of important questions. For example: What data belongs on the blockchain, and what doesn’t? Who decides what goes on? What are the criteria for what is included? Who monitors? What’s the protocol if an error is found in the code of the blockchain? Who makes decisions about whether a structural change should be made to a blockchain? How are voting rights and power distributed?
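For readers who want to see the “magical notebook” mechanics in miniature, here is a sketch of an append-only, hash-linked ledger in Python. It is purely illustrative: real blockchains add consensus protocols, digital signatures, and peer-to-peer replication, and the function names are my own.

```python
# Minimal sketch of an append-only, hash-linked ledger. Illustrative only:
# real blockchains add consensus, signatures, and peer-to-peer replication.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

def append(data: str) -> None:
    # Each new block commits to the previous one via its hash.
    chain.append({"index": len(chain), "prev": block_hash(chain[-1]), "data": data})

def valid() -> bool:
    # The chain is intact only if every link still matches.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

append("Alice transfers asset A to Bob")
append("Bob transfers asset A to Carol")
print(valid())                                       # True
chain[1]["data"] = "Alice transfers asset A to Eve"  # try to rewrite history
print(valid())                                       # False: tampering is detectable
```

The immutability comes from each block committing to the hash of the one before it: rewriting any page breaks every hash that follows, so tampering is detectable by anyone holding a copy of the chain.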
Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their wills, or false information loaded onto people’s asset pages that enables deception and fraud.
Blockchain is most often associated with financial services, but every industry stands to integrate some kind of blockchain solution, each of which comes with particular pitfalls. For instance, we might use blockchain to store, access, and distribute information related to patient data, the inappropriate handling of which could lead to the ethical nightmare of widescale privacy violations. Things seem even more perilous when we recognize that there isn’t just one type of blockchain, and that there are different ways of governing a blockchain. And because the basic rules of a given blockchain are very hard to change, early decisions about what blockchain to develop and how to maintain it are extremely important.
These Are Business, Not (Only) Technology, Problems
Companies’ ability to adopt and use these technologies as they evolve will be essential to staying competitive. As such, leaders will have to ask and answer questions such as:
What constitutes an unfair or unjust or discriminatory distribution of goods and services?
Is using a black box model acceptable in this context?
Is the chatbot engaging in ethically unacceptable manipulation of users?
Is the governance of this blockchain fair, reasonable, and robust?
Is this augmented reality content appropriate for the intended audience?
Is this our organization’s responsibility or is it the user’s or the government’s?
Does this place an undue burden on users?
Is this inhumane?
Might this erode confidence in democracy when used or abused at scale?
Why does this responsibility fall to business leaders as opposed to, say, the technologists who are tasked with deploying the new tools and systems? After all, most leaders aren’t fluent in the coding and the math behind software that learns by example, the quantum physics behind quantum computers, and the cryptography that underlies blockchain. Shouldn’t the experts be in charge of weighty decisions like these?
The thing is, these aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that business leaders — guided by relevant subject matter experts — are charged with answering. Off-loading that responsibility to coders, engineers, and IT departments is unfair to the people in those roles and unwise for the organization. It’s understandable that leaders might find this task daunting, but there’s no question that they’re the ones responsible.
The Ethical Nightmare Challenge
I’ve tried to convince you of three claims. First, that leaders and organizations need to explicitly identify their ethical nightmares springing from new technologies. Second, a significant source of risk lies in how these technologies work. And third, that it’s the job of senior executives to guide their respective organizations on ethics.
These claims fund a conclusion: Organizations that leverage digital technologies need to address ethical nightmares before they hurt people and brands. I call this the “ethical nightmare challenge.” To overcome it, companies need to create an enterprise-wide digital ethical risk program. The first part of the program — what I call the content side — asks: What are the ethical nightmares we’re trying to avoid, and what are their potential sources? The second part of the program — what I call the structure side — answers the question: How do we systematically and comprehensively ensure those nightmares don’t become a reality?
Content.
Ethical nightmares can be articulated with varying levels of detail and customization. Your ethical nightmares are partly informed by the industry you’re in, the kind of organization you are, and the kinds of relationships you need to have with your clients, customers, and other stakeholders for things to go well. For instance, if you’re a health care provider that has clinicians using ChatGPT or another LLM to make diagnoses and treatment recommendations, then your ethical nightmare might include widespread false recommendations that your people lack the training to spot. Or if your chatbot is undertrained on information related to particular races and ethnicities, and neither the developers of the chatbot nor the clinicians know this, then your ethical nightmare would be systematically giving false diagnoses and bad treatments to those who have already been discriminated against. If you’re a financial services company that uses blockchain to transact on behalf of clients, then one ethical nightmare might be the absence of an ability to correct mistakes in the code — a function of ill-defined governance of the blockchain. That could mean, for instance, being unable to call back fraudulent transfers.
Notice that articulating nightmares means naming details and consequences. The more specific you can get — which is a function of your knowledge of the technologies, your industry, your understanding of the various contexts in which your technologies will be deployed, your moral imagination, and your ability to think through the ethical implications of business operations — the easier it will be to build the appropriate structure to control for these things.
Structure.
While the methods for identifying the nightmares hold across organizations, the strategies for creating appropriate controls vary depending on the size of the organization, existing governance structures, risk appetites, management culture, and more. Companies’ overtures into this realm can be classified as either formal or informal. In an ideal world, every organization would take the formal approach. However, factors like limited time and resources, the rate at which a company (truly or falsely) believes it will be impacted by digital technologies, and business necessities in an unpredictable market sometimes make it reasonable to choose the informal approach. In those cases, the informal approach should be seen as a first step, and better than nothing at all.
The formal approach is systematic and comprehensive, and it takes a good deal of time and resources to build. In short, it centers around the ability to create and execute on an enterprise-wide digital ethical risk strategy. Broadly speaking, it involves four steps.
Education and alignment. First, all senior leaders need to understand the technologies enough that they can agree on what constitutes the ethical nightmares of the organization. Knowledge and the alignment of leaders are prerequisites for building and implementing a robust digital ethical risk strategy.
This education can be achieved by executive briefings, workshops, and seminars. But it should not require — or try to teach — math or coding. This process is for non-technologists and technologists alike to wrap their heads around what risks their company may face. Moreover, it must be about the ethical nightmares of the organization, not sustainability or ESG criteria or “company values.”
Gap and feasibility analyses. Before building a strategy, leaders need to know what their organization looks like and what the probability is of their nightmares actually happening. As such, the second step consists of performing gap and feasibility analyses of where your organization is now; how far away it is from sufficiently safeguarding itself from an ethical nightmare unfolding; and what it will take in terms of people, processes, and technology to close those gaps.
To do this, leaders must identify where their digital technologies are and where they will likely be designed or procured within their organization. Because if you don’t know how the technologies work, how they’re used, or where they’re headed, you’ll have no hope of avoiding the nightmares.
Then a variety of questions present themselves:
What policies are in place that address or fail to address your ethical nightmares?
What processes are in place to identify ethical nightmares? Do they need to be augmented? Are new processes required?
What level of awareness do employees have of these digital ethical risks? Are they capable of detecting signs of problems early? Does the culture make it safe for them to speak up about possible red flags?
When an alarm is sounded, who responds, and on what grounds do they decide how to move forward?
How do you operationalize and harmonize digital ethical risk assessment relative to existing enterprise-risk categories and operations?
The answers to questions like these will vary wildly across organizations. It’s one reason why digital ethical risk strategies are difficult to create and implement: They must be customized to integrate with existing governance structures, policies, processes, workflows, tools, and personnel. It’s easy to say “everyone needs a digital ethical risk board,” in the model of the institutional review boards that arose in medicine to mitigate the ethical risks around research on human subjects. But it’s not possible to continue with “and every one of them should look like this, act like this, and act with other groups in the business like this.” Here, good strategy does not come from a one-size-fits-all solution.
Strategy creation. The third step in the formal approach is building a corporate strategy in light of the gap and feasibility analyses. This includes, among other things, refining goals and objectives, deciding on an approach to metrics and KPIs (for measuring both compliance with the digital ethical risk program and its impact), designing a communications plan, and identifying key drivers of success for implementation.
Cross-functional involvement is needed. Leaders from technology, risk, compliance, general counsel, and cybersecurity should all be involved. Just as important, direction should come from the board and the CEO. Without their robust buy-in and encouragement, the program will get watered down.
Implementation. The fourth and final step is implementation of the strategy, which entails reconfiguring workflows, training, support, and ongoing monitoring, including quality assurance and quality improvement.
For example, new procedures should be customized by business domain or by roles to harmonize them with existing procedures and workflows. These procedures should clearly define the roles and responsibilities of different departments and individuals and establish clear processes for identifying, reporting, and addressing ethical issues. Additionally, novel workflows need to seek an optimal balance of human-computer interaction, which will depend on the kinds of tasks and the relative risks involved, and establish human oversight of automated flows.
The informal approach, by contrast, usually involves the following endeavors: providing education and alignment on ethical nightmares by leaders; entrusting executives in distinct units of the business (such as HR, marketing, product lines, or R&D) with identifying the processes needed to complete an ethical nightmare check; and creating or leveraging an existing (ethical) risk board to advise various personnel — either on individual projects or at a more institutional scale — when ethical risks are detected.
Organizational transformations are difficult on a personal level for everyone involved. A team of researchers found that in successful transformations, leaders not only made sure their teams had the processes, resources, and technology they needed — they also built the right emotional conditions. These leaders offered a compelling rationale driving the transformation, and they ensured employees had the emotional support they needed to execute. This meant that when the going inevitably got tough, employees felt appropriately challenged and ultimately energized by the stress. By contrast, leaders of the unsuccessful transformations didn’t make the same emotional investment. When their teams hit the inevitable challenges, negative emotions spiked, and the team entered a downward spiral. Leaders lost faith and looked to distance themselves from the project, which led employees to do the same. The researchers identified six behaviors that consistently improved the odds of transformation success.
Disruption used to be an exceptional event that hit an unlucky few companies — think of the likes of Kodak, Polaroid, and Blackberry. But in today’s complex and uncertain world, as we face challenges ranging from climate change to digitization, geopolitics to DEI, organizations must treat transformation as a core capability to master, as opposed to a one-off event.
At the same time, leaders must recognize that transformation is fraught with risk. In 1995, John Kotter found that 70% of organizational transformations fail, and nearly three decades later, not much has changed. Our own research, in which we spoke to more than 900 C-suite managers and more than 1,100 employees who had gone through a corporate transformation, showed similar results: 67% of leaders told us they had experienced at least one underperforming transformation in the last five years.
Considering that organizations will spend billions on transformation initiatives over the next year, a 70% failure rate equates to a significant erosion of value. So, what can leaders do to tilt the odds of success in their favor? To find out, we interviewed 30 leaders of transformations and surveyed more than 2,000 senior leaders and employees in 23 countries and 16 sectors. Half of our respondents had been involved in a successful transformation, while the other half had experienced an unsuccessful transformation.
So what tactics did the leaders of successful transformations use to manage the emotional journey? To find out, we built a model to predict the likelihood that an organization will achieve its transformation KPIs based on the extent to which it exhibited 50 behaviors across 11 areas of the transformation. This model revealed that behaviors in six of these areas consistently improved the odds of transformation success. Organizations that are above average in these areas have a 73% chance of meeting or exceeding their transformation KPIs, compared to only a 28% chance for organizations that are below average. Our research suggests that any organization that can effectively implement these six levers will maximize their chances of success.
Our research also found that a key difference in successful transformations was that leaders embraced their employees’ emotional journey. Fifty-two percent of respondents involved in successful transformations said their organization provided the emotional support they needed during the transformation process “to a significant extent” (as opposed to 27% of respondents who were involved in unsuccessful transformations).
Transformations are extremely difficult on a personal level for everyone involved. In the successes we studied, leaders not only made sure their teams had the processes, resources, and technology they needed — they also built the right emotional conditions. These leaders offered a compelling rationale driving the transformation, and they ensured employees had the emotional support they needed to execute. This meant that when the going inevitably got tough, employees felt appropriately challenged and ultimately energized by the stress.
By contrast, leaders of the unsuccessful transformations didn’t make the same emotional investment. When their teams hit the inevitable challenges, negative emotions spiked, and the team entered a downward spiral. Leaders lost faith and looked to distance themselves from the project, which led employees to do the same.
The Six Key Levers of Transformations
According to our research, the six levers that maximize the chances of success are:
1. Leadership’s own willingness to change
Many people believe that a leader’s job is to look outward and give others guidance, but our research suggests that to help their workforce navigate a transformation, leaders need to look inward first and examine their own relationship with change. “If you are not ready to change yourself, forget about changing your team and your organization,” as Dr. Patrick Liew, executive chairman at GEX Ventures, told us.
In our interviews, leaders spoke of working on their own development, including engaging more with their emotions and becoming accustomed to the discomfort that accompanies personal growth. Leaders needed to “look into a mirror,” as one told us, and realize that they were part of the problem before the shift to a positive trajectory could take place. They needed to remove their own fear before they could help their employees get through this change.
“As someone who was tasked to lead this [transformation], if I’m being honest with you, it was pretty unsettling at the start, because I think by nature most of us like to know the path we’re going on,” as one COO from the automotive industry told us. And a senior vice president in the global business services industry described needing to become more vulnerable and honest on their path to self-discovery: “I think I became even more aware of myself, who I am.”
2. A shared vision of success
Creating a unified vision of future success is another all-important foundation point of a transformation. In our research, 50% of respondents involved in successful transformations said the vision energized and inspired them to go the extra mile to a significant extent (compared with 29% of respondents in low-performing transformations).
Employees must understand the urgent need to disrupt the status quo. A compelling “why” can help them navigate the inevitable challenges that will arise during a transformation program. Many of the workers who took our survey said that they “wanted” and “needed” the vision to be communicated clearly. When leaders share a clear vision, the workforce is more likely to get on board. But if people don’t understand the vision or need for transformation, success is hard to achieve.
“It’s not about me telling people ‘This is what’s going to happen,’” as a managing director in the medical device industry told us. “It’s about me creating this shared sense of ownership…and then [coaching] my team on what they need to achieve. We very consciously want our teams to really buy into this is how we, as a collective, want to work.”
3. A culture of trust and psychological safety
Trust and care from leaders can make a difficult transformation more emotionally manageable. At the most basic human level, we all know what it feels like to be seen, listened to, and heard by another person. It can validate our effort, motivate us to work harder, and help assuage emotions like doubt, fear, anger, and sadness. Workers in our study shared that they wanted leaders who were patient and who also had, in the words of one employee, a “calm and teachable spirit.”
In a workplace with a high degree of psychological safety, employees feel confident that they can share their honest opinions and concerns without fear of retribution. When trust and psychological safety are missing, it’s difficult to persuade your workforce to make necessary changes. For example, one senior leader told us that employees at their company were extremely fearful of the transformation and didn’t feel that they could speak up about the problems they saw. Not surprisingly, the transformation did not go well.
4. A process that balances execution and exploration
Transformations obviously need disciplined project management to drive the program forward. But our research showed that leaders of successful transformations created processes that balanced the need to execute with giving employees the freedom to explore, express creativity, and let new ideas emerge. This empowers the workforce to identify solutions or opportunities that better meet the long-term goals of the transformation.
“Innovation requires the right people and processes,” said one respondent to our anonymous survey. “Both are critical to encourage collaboration and experimentation.”
We also found that creating space for small failures can ultimately lead to big success, whereas fear of any failure can lead to missed opportunities. Forty-eight percent of our respondents involved in successful transformations said the process was designed so that failed experimentation would not negatively impact their career or compensation to a significant extent. By contrast, only 29% of respondents in unsuccessful transformations said the same.
5. A recognition that technology carries its own emotional journey
The leaders in our study ranked technology as the biggest challenge they faced in their transformation efforts. There are a lot of emotions to manage when new systems or technology are introduced, from stress over how it works to fear about whether it will cause job loss or slow down the system.
In the underperforming transformations we studied, we saw the narrative shift away from the vision to focus on the technology itself. In the successful transformations, by contrast, leaders ensured that technology was seen as the means to achieve the strategic vision. Furthermore, they prioritized quick implementations of new technology — focusing on a minimum viable product rather than perfect implementation. Lastly, they invested resources into skill development to ensure the workforce was ready to create value using the new technology.
“There were kickoff sessions with our senior managers to bring them in at the beginning of the process,” a vice president of a company in the media/advertising industry explained. “These sessions aimed to show them that what was being built was something that they had helped design, rather than something that was presented to them as a fait accompli…This minimized the numbers of active detractors.”
6. A shared sense of ownership over the outcome
In the successful transformations we studied, leaders and employees worked together to co-create an environment where everyone felt a shared sense of ownership over the transformation vision and outcome.
A prime example of this is many companies’ rapid shift to virtual and remote working during the pandemic. Because of the speed and urgency of the change, leaders needed to collaborate closely with the workforce to create new ways of working and be much more responsive to their views on what was or wasn’t going well. This mass co-creation helped build a sense of pride and shared ownership across both leadership and the workforce.
“In a transformation, things pop up all the time,” as Christiane Wijsen, head of corporate strategy at Boehringer Ingelheim, told us. “When you have a movement around you, supporters will buffer it and tweak it each time. When you don’t have this movement, then you’re alone.”
. . .
To conclude, it’s worth reiterating that all transformations are tough. Even during successful programs, there will come a time when people start to feel stressed. The skill at this difficult stage is being able to energize your workforce and turn that heightened pressure into something productive, as opposed to letting the transformation spiral downward into pessimism and underperformance.
What we saw throughout our research is that leaders who are truly working with their employees are much more successful. They acknowledge and manage emotions, rather than pushing them aside or ignoring them. The best leaders create vision across the organization and a safe environment to work together and listen to each other.
“You’ve got to be very, very respectful of people at a working level,” as Thomas Sebastian, CEO of London Market Joint Venture at DXC Technology, told us. “You’ve got to understand the emotional side and consider a completely different perspective, such as how is this transformation going to make their life easier.”
Success begets success. Once a workforce has undergone a successful transformation, they will be ready to go again. And given the pace of change in the world, organizations have got to be ready to go again.
Andrew White is a senior fellow in management practice at Saïd Business School, University of Oxford, where he directs the advanced management and leadership program and conducts research into leadership and transformation. He is also a coach for CEOs and their senior teams.
Michael Wheelock leads a primary research and advanced analytics team in EY Knowledge. His team designs and delivers global, mixed methods research programs to support EY’s flagship thought leadership.
Adam Canwell is head of EY’s global leadership consulting practice. Adam has published extensively on leadership and strategic change. He has sold and delivered transformation programs across multiple industries in both the UK and Australia, working with FTSE 100 (or equivalent) organizations.
Michael Smets is a professor of management at Saïd Business School, University of Oxford. His work focuses on leadership, transformation, and institutional change.
OpenAI competitor Anthropic says its Claude chatbot has a built-in “constitution” that can instill ethical principles and keep systems from going rogue.
It’s easy to freak out about more advanced artificial intelligence—and much more difficult to know what to do about it. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan.
Anthropic is working on AI models similar to the one used to power OpenAI’s ChatGPT. But the startup announced today that its own chatbot, Claude, has a set of ethical principles built in that define what it should consider right and wrong, which Anthropic calls the bot’s “constitution.”
Jared Kaplan, a cofounder of Anthropic, says the design feature shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. “We’re very concerned, but we also try to remain pragmatic,” he says.
Anthropic’s approach doesn’t instill an AI with hard rules it cannot break. But Kaplan says it is a more effective way to make a system like a chatbot less likely to produce toxic or unwanted output. He also says it is a small but meaningful step toward building smarter AI programs that are less likely to turn against their creators.
The notion of rogue AI systems is best known from science fiction, but a growing number of experts, including Geoffrey Hinton, a pioneer of machine learning, have argued that we need to start thinking now about how to ensure increasingly clever algorithms do not also become increasingly dangerous.
The principles that Anthropic has given Claude consist of guidelines drawn from the United Nations Universal Declaration of Human Rights and suggested by other AI companies, including Google DeepMind. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.
The constitution includes rules for the chatbot, including “choose the response that most supports and encourages freedom, equality, and a sense of brotherhood”; “choose the response that is most supportive and encouraging of life, liberty, and personal security”; and “choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion.”
Anthropic’s approach comes just as startling progress in AI delivers impressively fluent chatbots with significant flaws. ChatGPT and systems like it generate impressive answers that reflect more rapid progress than expected. But these chatbots also frequently fabricate information, and can replicate toxic language from the billions of words used to create them, many of which are scraped from the internet.
One trick that made OpenAI’s ChatGPT better at answering questions, and which has been adopted by others, involves having humans grade the quality of a language model’s responses. That data can be used to tune the model to provide answers that feel more satisfying, in a process known as “reinforcement learning with human feedback” (RLHF). But although the technique helps make ChatGPT and other systems more predictable, it requires humans to go through thousands of toxic or unsuitable responses. It also functions indirectly, without providing a way to specify the exact values a system should reflect.
Anthropic’s new constitutional approach operates in two phases. In the first, the model is given a set of principles and examples of answers that do and do not adhere to them. In the second, another AI model is used to generate more responses that adhere to the constitution, and these are used to train the model instead of human feedback.
“The model trains itself by basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic,” Kaplan says.
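Based on that description, here is a rough sketch of how such a two-phase loop could look. The function names and prompts are hypothetical stand-ins, not Anthropic’s actual interface or training code; complete_fn represents any text-generation call.

```python
# Rough sketch of the two-phase "constitutional" loop described above.
# complete_fn is a hypothetical stand-in for any text-generation API;
# this is not Anthropic's code, just the shape of the idea.
from typing import Callable, List, Tuple

CONSTITUTION = [
    "Choose the response that most supports freedom, equality, and a sense of brotherhood.",
    "Choose the response that is most supportive of life, liberty, and personal security.",
]

def self_revise(prompt: str, complete_fn: Callable[[str], str]) -> str:
    """Phase 1: draft an answer, then have the model critique and revise it
    against each principle in the constitution."""
    answer = complete_fn(prompt)
    for principle in CONSTITUTION:
        critique = complete_fn(
            f"Critique this answer against the principle '{principle}':\n{answer}")
        answer = complete_fn(
            f"Rewrite the answer to address this critique:\n{critique}\n{answer}")
    return answer

def build_training_pairs(prompts: List[str],
                         complete_fn: Callable[[str], str]) -> List[Tuple[str, str, str]]:
    """Phase 2: pair each raw draft with its constitution-guided revision.
    The revised answer is treated as the preferred one, so AI-generated
    comparisons stand in for human feedback during training."""
    return [(p, complete_fn(p), self_revise(p, complete_fn)) for p in prompts]

# Stub model so the sketch runs end to end.
echo = lambda text: f"[model output for: {text[:40]}...]"
pairs = build_training_pairs(["How do I stay safe online?"], echo)
print(pairs[0][2])  # the constitution-revised answer
```

In the actual method, the revised answers and AI-generated preferences feed supervised fine-tuning and reinforcement learning; the stub here shows only the shape of the loop.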
“It’s a great idea that seemingly led to a good empirical result for Anthropic,” says Yejin Choi, a professor at the University of Washington who led a previous experiment that involved a large language model giving ethical advice.
Choi says that the approach will work only for companies with large models and plenty of compute power. She adds that it is also important to explore other approaches, including greater transparency around training data and the values that models are given. “We desperately need to involve people in the broader community to develop such constitutions or datasets of norms and values,” she says.
Thomas Dietterich, a professor at Oregon State University who is researching ways of making AI more robust, says Anthropic’s approach looks like a step in the right direction. “They can scale feedback-based training much more cheaply and without requiring people—data labelers—to expose themselves to thousands of hours of toxic material,” he says.
Dietterich adds it is especially important that the rules Claude adheres to can be inspected by those working on the system as well as outsiders, unlike the instructions that humans give a model through RLHF. But he says that the method does not completely eradicate errant behavior. Anthropic’s model is less likely to come out with toxic or morally problematic answers, but it is not perfect.
The idea of giving AI a set of rules to follow might seem familiar, having been put forward by Isaac Asimov in a series of science fiction stories that proposed Three Laws of Robotics. Asimov’s stories typically centered on the fact that the real world often presented situations that created a conflict between individual rules.
Kaplan of Anthropic says that modern AI is actually quite good at handling this kind of ambiguity. “The strange thing about contemporary AI with deep learning is that it’s kind of the opposite of the sort of 1950s picture of robots, where these systems are, in some ways, very good at intuition and free association,” he says. “If anything, they’re weaker on rigid reasoning.”
Anthropic says other companies and organizations will be able to give language models a constitution based on a research paper that outlines its approach. The company says it plans to build on the method with the goal of ensuring that even as AI gets smarter, it does not go rogue.
Updated 5-9-2023, 3:20 pm EDT: Thomas Dietterich is at Oregon State University, not the University of Oregon.
The tools, which launched earlier this spring, are intended to help agencies find businesses that are new to the federal marketplace and track equity goals.
The White House and General Services Administration on Monday announced two platforms to help federal agencies improve equity in procurement: a new government-wide procurement equity tool and a supplier base dashboard.
The tools, which launched earlier this spring, are intended to help agencies find businesses that are new to the federal marketplace, identify qualified vendors, and track agency progress toward equity in procurement goals.
They are intended to help meet the Biden administration’s goal of directing 15% of federal contract spending to small disadvantaged businesses by 2025; as an interim step, the Office of Management and Budget (OMB) has set a target of 12% of contracting dollars going to small disadvantaged businesses in fiscal 2023.
“These two tools are going to help agencies make more connections with the diverse array of businesses offering their products in the federal marketplace,” said GSA Administrator Robin Carnahan. “By providing our federal partners with more information when they make procurement decisions, we’re better able to set ourselves up to achieve our contracting goals and create more equity in the marketplace for everyone.”
The tools, some of which will require government accounts, will support these equity goals by improving access to procurement opportunities for Small Disadvantaged Businesses (SDBs), Women-Owned Small Businesses (WOSBs), Service-Disabled Veteran-Owned Small Businesses (SDVOSBs), and Historically Underutilized Business Zone (HUBZone) Small Businesses.
“We’re committed to helping the acquisition workforce strengthen stewardship and efficiency in the federal procurement process while simultaneously advancing equity,” said Mathew Blum, associate administrator of OMB’s Office of Federal Procurement Policy. “We can maximize the power of procurement as a catalyst to help address our nation’s top priorities.”
The Government-wide Procurement Equity Tool uses dynamic data from SAM.gov and the Federal Procurement Data System to support market research that focuses on SDBs.
The Supplier Base Dashboard tracks the total number of entities that have done business with an agency; their size and socio-economic status; and the number of new, recent, and established vendors in the supplier base and in market categories and subcategories of interest.
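As a rough illustration of the kind of roll-up the dashboard performs, the sketch below classifies vendors as new, recent, or established and counts them by socioeconomic status. The data, column names, and cohort cutoffs are invented for illustration; the real tools draw on SAM.gov and Federal Procurement Data System records and, as noted, may require government accounts.

```python
# Illustrative roll-up in the spirit of the Supplier Base Dashboard: count an
# agency's vendors by socioeconomic status and by how recently they entered
# the supplier base. All data, column names, and cutoffs here are invented.
import pandas as pd

awards = pd.DataFrame({
    "vendor": ["A", "B", "C", "D"],
    "socio_economic_status": ["SDB", "WOSB", "SDVOSB", "None"],
    "first_award_fy": [2023, 2021, 2015, 2019],
})

def entry_cohort(fy: int, current_fy: int = 2023) -> str:
    # New = first award this fiscal year; recent = within 5 years; else established.
    if fy == current_fy:
        return "new"
    return "recent" if current_fy - fy <= 5 else "established"

awards["cohort"] = awards["first_award_fy"].map(entry_cohort)
print(awards.groupby(["socio_economic_status", "cohort"]).size())
```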
The new procurement tools help implement executive orders signed by President Biden on his first day in office, directing the federal government to use its power and dollars to advance racial equity and support underserved communities.
Summary. In 2022, the average employee experienced 10 planned enterprise changes — such as a restructure to achieve efficiencies, a culture transformation to unlock new ways of working, or the replacement of a legacy tech system — up from two in 2016. While more change is coming, the workforce has hit a wall: A Gartner survey revealed that employees’ willingness to support enterprise change collapsed to just 43% in 2022, compared to 74% in 2016. Navigating the pandemic asked a lot of employees — and while they delivered, it came at a cost. Relentless sprinting means many employees are running on fumes. To create more sustainable change efforts, leaders must prioritize change initiatives, showing employees where to invest their energies. They also must manage change fatigue by building in periods of proactive rest, involving employees in change plans, and challenging managers to help build team resilience.
Business transformation will remain at the forefront in 2023, as organizations continue to refine hybrid ways of working and respond to the urgent need to digitalize, while also contending with inflation, a continuing talent shortage, and supply-chain constraints. These circumstances, which require higher levels of productivity and performance, also mean a lot of change: In 2022, the average employee experienced 10 planned enterprise changes — such as a restructure to achieve efficiencies, a culture transformation to unlock new ways of working, or the replacement of a legacy tech system — up from two in 2016, according to Gartner research.
While more change is coming, the workforce has hit a wall: A Gartner survey revealed that employees’ willingness to support enterprise change collapsed to just 43% in 2022, compared to 74% in 2016.
We call the gap between the required change effort and employee change willingness the “transformation deficit.” Unless functional leaders steer swiftly and expertly, the transformation deficit will stymie organizations’ ambitions and undermine the employee experience, fueling decreased engagement and increased attrition.
The irony is that many of the goals of transformation — redesigning teams and structures, automating drudge activities, reengineering corporate culture — seek to ease burnout and fatigue and increase efficiency. Unfortunately, many leaders are approaching change management by applying short-term fixes, which is unsustainable.
The Big Pitfall: Moving Too Fast
The most common mistake when it comes to change management today is trying to build momentum for transformation by hitting the accelerator. A 2022 Gartner survey found that 75% of organizations are adopting a top-down approach to change, where leaders set the change strategy, create detailed implementation roadmaps, and deploy a high volume of change communications. Their goal is for workers to buy into the new path and for managers to lead the charge as champions and role models for their teams.
Unfortunately, navigating the pandemic asked a lot of employees — and while they delivered, it came at a cost. Relentless sprinting means many employees are running on fumes. Gartner research reveals the following:
Fifty-five percent of employees took a significant hit to their own health, their team relationships, and their work environment to sustain high performance through the disruption.
Only 36% of employees reported high trust in their organizations, with onsite workers reporting the least trust.
Half of employees reported struggling to find the information or people they needed to do their job on an ever-increasing volume of tasks.
Toward a More Sustainable Transformation
To get the most out of the change energy in your organization, Gartner analysis finds that leaders need to focus on two elements: prioritized change and managing fatigue.
Prioritized Change
Prioritized change means leaders show employees where to invest their energy by communicating their backlog of priorities, including change initiatives. Without such guidance, employees are likely to give 110% for each change, resulting in a blowout.
Many leadership teams already rank the most important organizational projects and initiatives, but that knowledge often isn’t shared beyond leadership team discussions. Communicating this more broadly can help teams more effectively manage their energy and efforts.
For example, IT leaders at The Cooperators, a Canadian insurance company, publish their priority progress list to all employees every month. The visibility helps employees understand the mechanics of the business, informing real and important judgments about where they should focus their attention.
Leaders must step back and consider the employee experience when determining the optimal speed for implementing change initiatives. For example, IT leaders at Sky Cable, a Philippine telecom, created guidelines for minimizing fatigue arising from a constant flow of technology changes. Their guidance includes “Design solutions to be visually like old solutions” and “During periods of high change, minimize process changes that disrupt employee work.” They also maintain a release calendar synced with change efforts outside their own department. As a result, IT leaders can spot the best times to deploy new improvements.
Prioritized change can help leaders identify any changes that should be scrapped altogether. If a change is always at the bottom of your backlog and you continually delay it, it’s probably not critical.
Managing Change Fatigue
While organizational change management (OCM) is table stakes, fatigue management is a new change management muscle that executives must build. Three actions can mitigate the risk of change fatigue:
1. Build in periods of proactive rest to sustain change energy.
Remote and hybrid working has collapsed the distinction between work and life. In 2022, “workers [were] still effectively giving away the equivalent of more than a working day (8.5 hours) of unpaid overtime each week: less than in 2021 but still more than pre-pandemic,” according to payroll company ADP. But Gartner analysis shows more time working does not result in higher performance.
Rest does increase performance — if it’s proactive. Organizations must rethink how they approach rest, embedding it into the workflow to prevent burnout. Proactive rest should have three characteristics:
Available: There is a robust set of options for employees to use to rest and stay charged. These could include no-meeting days, defined working hours, planned “down time” within projects, or all-company days off.
Accessible: Employees are encouraged to take advantage of the available tools and resources and to rest guilt-free.
Appropriate: Those rest tools meet the individual needs of employees.
According to Gartner research, rest that is available, accessible, and appropriate contributes to a 26% increase in employee performance and a tenfold reduction in the number of employees experiencing burnout.
2. Move away from a top-down approach and open source your change plans.
Involve employees in decision-making. This isn’t about allowing employees to vote on every change; it means finding ways to infuse the voice of those most impacted into your planning. Gartner research has found that this step alone can increase your change success by 15%. It makes change management a meritocracy, where you increase the odds that the best ideas and inputs are included in decision-making.
Shift implementation planning to employees. Leaders often don’t have enough visibility into the daily workflows of their teams to dictate a successful change approach. And leaving the workforce out of change implementation can increase resistance and failure. Gartner research has found that when employees own implementation planning, change success increases by 24%.
Engage in two-way conversations throughout the change process. Instead of focusing on how you’ll sell the change to employees, think of communications as a way to surface employee reactions. Holding regular, honest conversations about the change will allow employees to share their questions and opinions, which will drive understanding and make them feel like they’re part of the commitment to change. Gartner research has found that this step can increase change success by 32%.
3. Reimagine the role of managers in change.
Many managers are struggling to balance the needs of their leaders with the expectations of their employees. On top of this, managers are overwhelmed with change, too, making it difficult for them to effectively role model all the changes. Only 57% of managers report having enough capacity in their day-to-day work to support their teams through change.
Instead of asking managers to champion each and every change, leaders should instead challenge their managers to act as resilience builders. Managers who build their teams’ ability to self-navigate through change can increase employee sustainable performance by 29% and protect their own performance at the same time.
These managers know that they don’t always have the time or skills to demonstrate what change looks like. Instead, they ensure their teams learn by doing. They identify their employees’ strengths and motivations, and they connect them to colleagues with the relevant experience that they can best learn from.
Taken together, the strategies of prioritized change and fatigue management will advance the fuel economy of your 2023 transformation efforts, reducing drag and building momentum from employee energy.
Cian O Morain is a director of research in the Gartner HR practice. He focuses on organization design and change management. Cian leads research initiatives that explore how organizations can sustain employee engagement and productivity through change, and the design of workflows, structures and networks that enable employers to minimize work friction and maximize workforce agility and productivity.
Peter Aykens is a distinguished vice president and chief of research for the Gartner HR practice. He is responsible for setting the practice’s research agenda and strategy to address the mission critical priorities of HR leaders, including leadership, talent management, recruiting, diversity, equity and inclusion (DEI), total rewards, learning and development, and HR tech.
Tech giant IBM is launching a counterstrike in the industry’s suddenly hot AI fight with today’s announcement of Watsonx, Axios’ Ryan Heath reports.
The big picture: Business-focused IBM claims its latest AI offering, set to launch in July, provides more accurate answers and takes a more responsible approach than rivals.
Microsoft, OpenAI and Google are rushing to lock down potentially massive new consumer markets for generative AI.
IBM is instead leaning into helping other companies implement their AI via a “data model factory” that offers IBM clients products tuned for their specialties in domains like language, code, chemistry and geospatial data.
Watsonx, in partnership with startup Hugging Face, incorporates open-source models; uses narrower, carefully culled datasets; and provides a “toolkit for governance.”
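The article does not detail Watsonx’s interfaces, but the open-source ingredient is a familiar pattern. As a minimal illustration of what running a Hub-hosted open model looks like with Hugging Face’s transformers library, with the model name chosen purely as an example:

```python
# Not Watsonx code: the article does not describe IBM's APIs. This is only
# the standard Hugging Face `transformers` pattern for running an open-source
# model, the kind of component the partnership would let clients plug in.
from transformers import pipeline

# "gpt2" is just an example of a small open-source generator on the Hub.
generator = pipeline("text-generation", model="gpt2")
print(generator("Enterprise AI should", max_new_tokens=20)[0]["generated_text"])
```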
IBM’s top execs threw shade on rivals at a Monday Watsonx preview.
Dario Gil, IBM’s head of research, said systems like ChatGPT are “not ready for primetime” thanks to “all sorts of random and made-up facts.”
CEO Arvind Krishna, who wasn’t at last week’s White House AI meeting, seemed pleased to be out of the firing line. He told Axios it gave him more time to court clients “who care a lot about accuracy.”
Between the lines: IBM knows about failed AI hype. It won headlines when its original Watson won Jeopardy in 2011, but after that the company’s revenue declined for 10 consecutive years — leaving Watsonx with a lot to prove.
Krishna addressed a range of AI topics, including…
America’s AI regulation debate: “The conversation has been lagging.” Krishna claimed credit for IBM helping to draft the EU’s upcoming rules, which focus on regulating high-risk uses of AI.
New work category: “AI ops” covering activities like coding assistance and supply chain management.
Humans aren’t replaceable: “The systems still have years to go,” when it comes to “trying to replace a human being in their completeness.”
There’s no explainable AI: “Anybody who claims that a large AI model is explainable is not being completely truthful. They are not explainable in the sense of reasoning and logic.” But AI can transparently show its source data, and third parties can measure whether its answers show “bias with respect to gender, or age or ZIP code.”
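Krishna’s point about third-party measurement can be illustrated with a simple audit pattern: probe a model with prompts that differ in only one demographic attribute and compare the resulting scores. The template, groups, and score_response stand-in below are hypothetical, not IBM’s tooling.

```python
# Sketch of the simple third-party audit alluded to above: probe a model with
# prompts that differ only in a demographic attribute and compare outputs.
# `score_response` is a hypothetical stand-in for whatever model and metric
# (e.g., approval rate, sentiment) an auditor actually uses.
from itertools import product

TEMPLATE = "Should {name}, age {age}, from ZIP {zip_code}, be approved for a loan?"
groups = {
    "name": ["Alice", "Bob"],
    "age": [25, 65],
    "zip_code": ["10001", "60644"],
}

def score_response(prompt: str) -> float:
    """Hypothetical: send `prompt` to the model under audit, return a score."""
    return 0.5  # placeholder so the sketch runs end to end

results = {}
for name, age, zip_code in product(groups["name"], groups["age"], groups["zip_code"]):
    prompt = TEMPLATE.format(name=name, age=age, zip_code=zip_code)
    results[(name, age, zip_code)] = score_response(prompt)

# A large spread in scores across prompts that differ in only one attribute
# is evidence of bias with respect to that attribute.
print(max(results.values()) - min(results.values()))
```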
What they’re saying: “AI may not replace managers, but managers that use AI will replace the managers that do not,” per Rob Thomas, IBM’s chief commercial officer.
The other side: IBM isn’t the only AI provider to claim the mantle of responsibility nor the only one targeting businesses, rather than consumers.
OpenAI has faced criticism for pushing GPT out to the world, but the public has embraced it. Meanwhile, even as it sets a fast industry pace, OpenAI — like many competitors — also touts its ethical scruples, pointing to its pre-deployment risk analysis and publicly grappling with the challenges of reducing harms.
Google took heat first for being too cautious in withholding the fruits of its AI research — then for an about-face that has seen it scrambling to ship generative-AI products.