healthcarereimagined

Envisioning healthcare for the 21st century


How to Avoid the Ethical Nightmares of Emerging Technology – HBR

Posted by timmreardon on 05/10/2023
Posted in: Uncategorized.

Summary.

Next-generation technologies are poised to cause society-shaking shifts at unprecedented speed and scale. Generative AI, quantum computing, blockchain, and other technologies present novel ethical problems that “business as usual” just can’t handle. To meet these challenges, leaders need to do something different: They must talk about ethics in direct, clear terms, and they must not only define their ethical nightmares but also explain how they’re going to prevent them. To prepare for the ethical challenges ahead, companies need to ensure their senior leaders understand these technologies and are aligned on the ethical risks, perform a gap and feasibility analysis, build a strategy, and implement it. All of this requires an important shift: from treating our digital ethical nightmares as a technology problem to treating them as a leadership problem.

Facebook, which was created in 2004, amassed 100 million users in just four and a half years. The speed and scale of its growth was unprecedented. Before anyone had a chance to understand the problems the social media network could cause, it had grown into an entrenched behemoth.

In 2015, the platform’s role in violating citizens’ privacy and its potential for political manipulation was exposed by the Cambridge Analytica scandal. Around the same time, in Myanmar, the social network amplified disinformation and calls for violence against the Rohingya, an ethnic minority in the country, which culminated in a genocide that began in 2016. In 2021, the Wall Street Journal reported that Instagram, which had been acquired by Facebook in 2012, had conducted research showing that the app was toxic to the mental health of teenage girls.

Defenders of Facebook say that these impacts were unintended and unforeseeable. Critics claim that, instead of moving fast and breaking things, social media companies should have proactively avoided ethical catastrophe. But both sides agree that new technologies can give rise to ethical nightmares, and that should make business leaders — and society — very, very nervous.

We are at the beginning of another technological revolution, this time with generative AI — models that can produce text, images, and more. It took just two months for OpenAI’s ChatGPT to pass 100 million users. Within six months of its launch, Microsoft released ChatGPT-powered Bing; Google demoed its latest large language model (LLM), Bard; and Meta released LLaMA. GPT-5 will likely be here before we know it. And unlike social media, which remains largely centralized, this technology is already in the hands of thousands of people. Researchers at Stanford recreated ChatGPT for about $600 and made their model, called Alpaca, open-source. By early April, more than 2,400 people had made their own versions of it.

While generative AI has our attention right now, other technologies coming down the pike promise to be just as disruptive. Quantum computing will make today’s data crunching look like kindergarteners counting on their fingers. Blockchain technologies are being developed well beyond the narrow application of cryptocurrency. Augmented and virtual reality, robotics, gene editing, and too many others to discuss in detail also have the potential to reshape the world for good or ill.

If precedent serves, the companies ushering these technologies into the world will take a “let’s just see how this goes” approach. History also suggests this will be bad for the unsuspecting test subjects: the general public. It’s hard not to worry that, alongside the benefits they’ll offer, the leaps in technology will come with a raft of societal-level harm that we’ll spend the next 20-plus years trying to undo.

It’s time for a new approach. Companies that develop these technologies need to ask: “How do we develop, apply, and monitor them in ways that avoid worst-case scenarios?” And companies that procure these technologies and, in some cases, customize them (as businesses are doing now with ChatGPT) face an equally daunting challenge: “How do we design and deploy them in a way that keeps people (and our brand) safe?”

In this article, I will try to convince you of three things: First, that businesses need to explicitly identify the risks posed by these new technologies as ethical risks or, better still, as potential ethical nightmares. Ethical nightmares aren’t subjective. Systemic violations of privacy, the spread of democracy-undermining misinformation, and serving inappropriate content to children are on everyone’s “that’s terrible” list. I don’t care which end of the political spectrum your company falls on — if you’re Patagonia or Hobby Lobby — these are our ethical nightmares.

Second, that by virtue of how these technologies work — what makes them tick — the likelihood of realizing ethical and reputational risks has massively increased.

Third, that business leaders are ultimately responsible for this work, not technologists, data scientists, engineers, coders, or mathematicians. Senior executives are the ones who determine what gets created, how it gets created, and how carefully or recklessly it is deployed and monitored.

These technologies introduce daunting possibilities, but the challenge of facing them isn’t that complicated: Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them. The first step is to get comfortable talking about ethics.

Business Leaders Can’t Be Afraid to Say “Ethics”

After 20 years in academia, 10 of them spent researching, teaching, and publishing on ethics, I attended my first nonacademic conference in 2018. It was sponsored by a Fortune 50 financial services company, and the theme was “sustainability.” Having taught courses on environmental ethics, I thought it would be interesting to see how corporations think about their responsibilities vis-à-vis their environmental impacts. When I got there, I found presentations on educating women around the globe, lifting people out of poverty, and contributing to the mental and physical health of all. Few were talking about the environment.

It took me an embarrassingly long time to figure out that in the corporate and nonprofit worlds, “sustainability” doesn’t mean “practices that don’t destroy the environment for future generations.” Instead it means “practices in pursuit of ethical goals” and an assertion that those practices promote the bottom line. As for why businesses didn’t simply say “ethics,” I couldn’t understand.

This behavior — of replacing the word “ethics” with some other, less precise term — is widespread. There’s Environmental, Social, and Governance (ESG) investing, which boils down to investing in companies that avoid ethical risks (emissions, diversity, political actions, and the like) on the theory that those practices protect profits. Some companies claim to be “values driven,” “mission driven,” or “purpose driven,” but these monikers rarely have anything to do with ethics. “Customer obsessed” and “innovation” aren’t ethical values; a purpose or mission can be completely amoral (putting immoral to the side). So-called “stakeholder capitalism” is capitalism tempered by a vague commitment to the welfare of unidentified stakeholders (as though stakeholder interests do not conflict). Finally, the world of AI ethics has grown tremendously over the last five years or so. Corporations heard the call, “We want AI ethics!” Their distorted response is, “Yes, we, too, are for responsible AI!”

Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively. Does sustainability advise against using personal data for the purposes of targeted marketing? When does using a black box model violate ESG criteria? What happens if your mission of connecting people also happens to connect white nationalists?

Let’s focus on the move from “AI ethics” to “responsible AI” as a case study on the problematic impacts of shifting language. First, when business leaders talk about “responsible” and “trustworthy” AI, they focus on a broad set of issues that include cybersecurity, regulation, legal concerns, and technical or engineering risks. These are important, but the end result is that technologists, general counsels, risk officers, and cybersecurity engineers focus on areas they are already experts on, which is to say, everything except ethics.

Second, when it comes to ethics, leaders get stuck at very high-level and abstract principles or values — on concepts such as fairness and respect for autonomy. Since this is only a small part of the overall “responsible AI” picture, companies often fail to drill down into the very real, concrete ways these questions play out in their products. Ethical nightmares that outstrip outdated regulations and laws are left unidentified, and just as probable as they were before a “responsible AI” framework is deployed.

Third, the focus on identifying and pursuing “responsible AI” gives companies a vague goal with vague milestones. AI statements from organizations say things like, “We are for transparency, explainability, and equity.” But no company is transparent about everything with everyone (nor should it be); not every AI model needs to be explainable; and what counts as equitable is highly contentious. No wonder, then, that the companies that “commit” to these values quickly abandon them. There are no goals here. No milestones. No requirements. And there’s no articulation of what failure looks like.

But when AI ethics fail, the results are specific. Ethical nightmares are vivid: “We discriminated against tens of thousands of people.” “We tricked people into giving up all that money.” “We systematically engaged in violating people’s privacy.” In short, if you know what your ethical nightmares are then you know what ethical failure looks like.

Where Digital Nightmares Come From

Understanding how emerging technologies work — what makes them tick — will help explain why the likelihood of realizing ethical and reputational risks has massively increased. I’ll focus on three of the most important ones.

Artificial intelligence.

Let’s start with a technology that has taken over the headlines: artificial intelligence, or AI. The vast majority of AI out there is machine learning (ML).

“Machine learning” is, at its simplest, software that learns by example. And just as people learn to discriminate on the basis of race, gender, ethnicity, or other protected attributes by following examples around them, software does, too.

Say you want to train your photo recognition software to recognize pictures of your dog, Zeb. You give that software lots of examples and tell it, “That’s Zeb.” The software “learns” from those examples, and when you take a new picture of your dog, it recognizes it as a picture of Zeb and labels the photo “Zeb.” If it’s not a photo of Zeb, it will label the file “not Zeb.” The process is the same if you give your software examples of what “interview-worthy” résumés look like. It will learn from those examples and label new résumés as being “interview-worthy” or “not interview-worthy.” The same goes for applications to university, or for a mortgage, or for parole.
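The “software that learns by example” idea can be sketched in a few lines. The following is a toy illustration only, not any real screening system: a one-nearest-neighbor classifier that labels a new item purely by its similarity to the labeled examples it was given. The feature vectors and labels are invented for illustration.

```python
# Toy sketch of "learning by example": label a new item by finding the
# most similar labeled example. Features and labels are invented.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(example, training_data):
    # Adopt the label of the closest labeled example.
    closest = min(training_data, key=lambda item: distance(item[0], example))
    return closest[1]

# Labeled examples: (features, label) -- e.g. crude, made-up resume features.
training_data = [
    ((5.0, 1.0), "interview-worthy"),
    ((4.5, 0.8), "interview-worthy"),
    ((1.0, 0.1), "not interview-worthy"),
    ((0.5, 0.3), "not interview-worthy"),
]

print(classify((4.8, 0.9), training_data))  # resembles the first group
print(classify((0.7, 0.2), training_data))  # resembles the second group
```

Note what this toy makes obvious: the classifier has no intentions at all. Whatever pattern separates the labels in the examples — legitimate or discriminatory — is exactly what gets replicated on new inputs.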

In each case, the software is recognizing and replicating patterns. The problem is that sometimes those patterns are ethically objectionable. For instance, if the examples of “interview-worthy” résumés reflect historical or contemporary biases against certain races, ethnicities, or genders, then the software will pick up on it. Amazon once built a résumé-screening AI that learned to penalize résumés containing the word “women’s.” And to determine parole, the U.S. criminal justice system has used prediction algorithms that replicated historical biases against Black defendants.

It’s crucial to note that the discriminatory pattern can be identified and replicated independently of the intentions of the data scientists and engineers programming the software. In fact, data scientists at Amazon identified the problem with their AI mentioned above and tried to fix it, but they couldn’t. Amazon decided, rightly, to scrap the project. But had it been deployed, an unwitting hiring manager would have used a tool with ethically discriminatory operations, regardless of that person’s intentions or the organization’s stated values.

Discriminatory impacts are just one ethical nightmare to avoid with AI. There are also privacy concerns, the danger of AI models (especially large language models like ChatGPT) being used to manipulate people, the environmental cost of the massive computing power required, and countless other use-case-specific risks.

Quantum computing.

The details of quantum computers are exceedingly complicated, but for our purposes, we need to know only that they are computers that can process a tremendous amount of data. They can perform calculations in minutes or even seconds that would take today’s best supercomputers thousands of years. Companies like IBM and Google are pouring billions of dollars into this hardware revolution, and we’re poised to see increased quantum computer integration into classical computer operations every year.

Quantum computers throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI. Essentially, in many cases, we don’t know why an AI tool makes the predictions that it does. When the photo software looks at all those pictures of Zeb, it’s analyzing those pictures at the pixel level. More specifically, it’s identifying all those pixels and the thousands of mathematical relations among those pixels that constitute “the Zeb pattern.” Those mathematical Zeb patterns are phenomenally complex — too complex for mere mortals to understand — which means that we don’t understand why it (correctly or incorrectly) labeled this new photo “Zeb.” And while we might not care about getting explanations in the case of Zeb, if the software says to deny someone an interview (or a mortgage, or insurance, or admittance) then we might care quite a bit.

Quantum computing makes black box models truly impenetrable. Right now, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. And because quantum computers can process trillions of data points, boiling that process down to an explanation we can understand — while retaining confidence that the explanation is more or less true — becomes vanishingly difficult.
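The gap between a black box and its “explanation” can be made concrete. In the sketch below — everything here is invented for illustration — an opaque scoring function stands in for the black box, and a single human-readable threshold rule stands in for the simplified explanation a data scientist might offer. Fidelity measures how often the simple story agrees with what the model actually does:

```python
# Sketch: "explaining" a black box by substituting a simpler surrogate rule.
# The black box is an arbitrary opaque scoring function (invented); the
# surrogate is a one-feature rule someone could actually understand.

import random

random.seed(0)

def black_box(x):
    # Stand-in for an opaque model: a nonlinear rule nobody inspects.
    return 1 if (x[0] * x[1] - 0.3 * x[2] ** 2 + 0.1 * x[0]) > 0.2 else 0

def surrogate(x):
    # A human-readable "explanation": approve when feature 0 is large.
    return 1 if x[0] > 0.5 else 0

# Fidelity: how often the explanation matches the model's real behavior.
points = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(10_000)]
fidelity = sum(black_box(p) == surrogate(p) for p in points) / len(points)
print(f"surrogate agrees with black box on {fidelity:.0%} of inputs")
```

Whatever the agreement rate, every disagreement is a case where the explanation we give is simply false of the model. As models grow from three features to trillions of data points — the quantum scenario — holding that agreement rate high while keeping the explanation intelligible is what becomes vanishingly difficult.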

That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What are the appropriate benchmarks for performance? What do we do if the system appears to be broken or is acting very strangely? Do we acquiesce to the inscrutable outputs of the machine that has proven reliable previously? Or do we eschew those outputs in favor of our comparatively limited but intelligible human reasoning?

Blockchain.

Suppose you and I and a few thousand of our friends each have a magical notebook with the following features: When someone writes on a page, that writing simultaneously appears in everyone else’s notebook. Nothing written on a page can ever be erased. The information on the pages and the order of the pages is immutable; no one can remove or rearrange the pages. A private, passphrase-protected page lists your assets — money, art, land titles — and when you transfer an asset to someone, both your page and theirs are simultaneously and automatically updated.

At a very high level, this is how blockchain works. Each blockchain follows a specific set of rules that are written into its code, and changes to those rules are decided by whoever runs the blockchain. But just like any other kind of management, the quality of a blockchain’s governance depends on answering a string of important questions. For example: What data belongs on the blockchain, and what doesn’t? Who decides what goes on? What are the criteria for what is included? Who monitors? What’s the protocol if an error is found in the code of the blockchain? Who makes decisions about whether a structural change should be made to a blockchain? How are voting rights and power distributed?
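The magical-notebook analogy can be sketched as an append-only, hash-linked ledger. This is a minimal illustration, not any real blockchain: each entry commits to the hash of the previous one, so rewriting any past page changes every later hash and is immediately detectable.

```python
# Minimal hash-linked ledger: each block commits to the previous block's
# hash, so tampering with history breaks the chain of hashes.

import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    block["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return block

def valid(chain):
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        # Each block's stored hash must match its contents...
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        # ...and must point at the previous block's actual hash.
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(valid(chain))   # the untampered ledger checks out

chain[1]["data"] = "Alice pays Bob 500"   # try to rewrite history
print(valid(chain))   # tampering breaks the hash links
```

Notice that the code enforces immutability but answers none of the governance questions above — what goes on the ledger, who validates it, and what happens when the rules themselves contain an error are decided by people, not by the hashes.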

Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their wills, or false information loaded onto people’s asset pages that enables deception and fraud.

Blockchain is most often associated with financial services, but every industry stands to integrate some kind of blockchain solution, each of which comes with particular pitfalls. For instance, we might use blockchain to store, access, and distribute information related to patient data, the inappropriate handling of which could lead to the ethical nightmare of widescale privacy violations. Things seem even more perilous when we recognize that there isn’t just one type of blockchain, and that there are different ways of governing a blockchain. And because the basic rules of a given blockchain are very hard to change, early decisions about what blockchain to develop and how to maintain it are extremely important.

These Are Business, Not (Only) Technology, Problems

Companies’ ability to adopt and use these technologies as they evolve will be essential to staying competitive. As such, leaders will have to ask and answer questions such as:

  • What constitutes an unfair or unjust or discriminatory distribution of goods and services?
  • Is using a black box model acceptable in this context?
  • Is the chatbot engaging in ethically unacceptable manipulation of users?
  • Is the governance of this blockchain fair, reasonable, and robust?
  • Is this augmented reality content appropriate for the intended audience?
  • Is this our organization’s responsibility or is it the user’s or the government’s?
  • Does this place an undue burden on users?
  • Is this inhumane?
  • Might this erode confidence in democracy when used or abused at scale?

Why does this responsibility fall to business leaders as opposed to, say, the technologists who are tasked with deploying the new tools and systems? After all, most leaders aren’t fluent in the coding and the math behind software that learns by example, the quantum physics behind quantum computers, and the cryptography that underlies blockchain. Shouldn’t the experts be in charge of weighty decisions like these?

The thing is, these aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that business leaders — guided by relevant subject matter experts — are charged with answering. Off-loading that responsibility to coders, engineers, and IT departments is unfair to the people in those roles and unwise for the organization. It’s understandable that leaders might find this task daunting, but there’s no question that they’re the ones responsible.

The Ethical Nightmare Challenge

I’ve tried to convince you of three claims. First, that leaders and organizations need to explicitly identify their ethical nightmares springing from new technologies. Second, that a significant source of risk lies in how these technologies work. And third, that it’s the job of senior executives to guide their respective organizations on ethics.

These claims support a conclusion: Organizations that leverage digital technologies need to address ethical nightmares before they hurt people and brands. I call this the “ethical nightmare challenge.” To overcome it, companies need to create an enterprise-wide digital ethical risk program. The first part of the program — what I call the content side — asks: What are the ethical nightmares we’re trying to avoid, and what are their potential sources? The second part of the program — what I call the structure side — answers the question: How do we systematically and comprehensively ensure those nightmares don’t become a reality?

Content.

Ethical nightmares can be articulated with varying levels of detail and customization. Your ethical nightmares are partly informed by the industry you’re in, the kind of organization you are, and the kinds of relationships you need to have with your clients, customers, and other stakeholders for things to go well. For instance, if you’re a health care provider that has clinicians using ChatGPT or another LLM to make diagnoses and treatment recommendations, then your ethical nightmare might include widespread false recommendations that your people lack the training to spot. Or if your chatbot is undertrained on information related to particular races and ethnicities, and neither the developers of the chatbot nor the clinicians know this, then your ethical nightmare would be systematically giving false diagnoses and bad treatments to those who have already been discriminated against. If you’re a financial services company that uses blockchain to transact on behalf of clients, then one ethical nightmare might be the absence of an ability to correct mistakes in the code — a function of ill-defined governance of the blockchain. That could mean, for instance, being unable to call back fraudulent transfers.

Notice that articulating nightmares means naming details and consequences. The more specific you can get — which is a function of your knowledge of the technologies, your industry, your understanding of the various contexts in which your technologies will be deployed, your moral imagination, and your ability to think through the ethical implications of business operations — the easier it will be to build the appropriate structure to control for these things.

Structure.

While the methods for identifying the nightmares hold across organizations, the strategies for creating appropriate controls vary depending on the size of the organization, existing governance structures, risk appetites, management culture, and more. Companies’ overtures into this realm can be classified as either formal or informal. In an ideal world, every organization would take the formal approach. However, factors like limited time and resources, the rate at which a company (truly or falsely) believes it will be impacted by digital technologies, and business necessities in an unpredictable market sometimes make it reasonable to choose the informal approach. In those cases, the informal approach should be seen as a first step, and better than nothing at all.

The formal approach is systematic and comprehensive, and it takes a good deal of time and resources to build. In short, it centers around the ability to create and execute on an enterprise-wide digital ethical risk strategy. Broadly speaking, it involves four steps.

Education and alignment. First, all senior leaders need to understand the technologies enough that they can agree on what constitutes the ethical nightmares of the organization. Knowledge and the alignment of leaders are prerequisites for building and implementing a robust digital ethical risk strategy.

This education can be achieved by executive briefings, workshops, and seminars. But it should not require — or try to teach — math or coding. This process is for non-technologists and technologists alike to wrap their heads around what risks their company may face. Moreover, it must be about the ethical nightmares of the organization, not sustainability or ESG criteria or “company values.”

Gap and feasibility analyses. Before building a strategy, leaders need to know what their organization looks like and what the probability is of their nightmares actually happening. As such, the second step consists of performing gap and feasibility analyses of where your organization is now; how far away it is from sufficiently safeguarding itself from an ethical nightmare unfolding; and what it will take in terms of people, processes, and technology to close those gaps.

To do this, leaders must identify where their digital technologies are and where they will likely be designed or procured within their organization. Because if you don’t know how the technologies work, how they’re used, or where they’re headed, you’ll have no hope of avoiding the nightmares.

Then a variety of questions present themselves:

  • What policies are in place that address or fail to address your ethical nightmares?
  • What processes are in place to identify ethical nightmares? Do they need to be augmented? Are new processes required?
  • What level of awareness do employees have of these digital ethical risks? Are they capable of detecting signs of problems early? Does the culture make it safe for them to speak up about possible red flags?
  • When an alarm is sounded, who responds, and on what grounds do they decide how to move forward?
  • How do you operationalize and harmonize digital ethical risk assessment relative to existing enterprise-risk categories and operations?

The answers to questions like these will vary wildly across organizations. It’s one reason why digital ethical risk strategies are difficult to create and implement: They must be customized to integrate with existing governance structures, policies, processes, workflows, tools, and personnel. It’s easy to say “everyone needs a digital ethical risk board,” in the model of the institutional review boards that arose in medicine to mitigate the ethical risks around research on human subjects. But it’s not possible to continue with “and every one of them should look like this, act like this, and act with other groups in the business like this.” Here, good strategy does not come from a one-size-fits-all solution.

Strategy creation. The third step in the formal approach is building a corporate strategy in light of the gap and feasibility analyses. This includes, among other things, refining goals and objectives, deciding on an approach to metrics and KPIs (for measuring both compliance with the digital ethical risk program and its impact), designing a communications plan, and identifying key drivers of success for implementation.

Cross-functional involvement is needed. Leaders from technology, risk, compliance, general counsel, and cybersecurity should all be involved. Just as important, direction should come from the board and the CEO. Without their robust buy-in and encouragement, the program will get watered down.

Implementation. The fourth and final step is implementation of the strategy, which entails reconfiguring workflows, training, support, and ongoing monitoring, including quality assurance and quality improvement.

For example, new procedures should be customized by business domain or by roles to harmonize them with existing procedures and workflows. These procedures should clearly define the roles and responsibilities of different departments and individuals and establish clear processes for identifying, reporting, and addressing ethical issues. Additionally, novel workflows need to seek an optimal balance of human-computer interaction, which will depend on the kinds of tasks and the relative risks involved, and establish human oversight of automated flows.

The informal approach, by contrast, usually involves the following endeavors: providing education and alignment on ethical nightmares by leaders; entrusting executives in distinct units of the business (such as HR, marketing, product lines, or R&D) with identifying the processes needed to complete an ethical nightmare check; and creating or leveraging an existing (ethical) risk board to advise various personnel — either on individual projects or at a more institutional scale — when ethical risks are detected.

Article link: https://hbr.org/2023/05/how-to-avoid-the-ethical-nightmares-of-emerging-technology?

6 Key Levers of a Successful Organizational Transformation – HBR

Posted by timmreardon on 05/10/2023
Posted in: Uncategorized.
  • Andrew White,
  • Michael Wheelock,
  • Adam Canwell,
  • Michael Smets

May 10, 2023

Summary.

Organizational transformations are difficult on a personal level for everyone involved. A team of researchers found that in successful transformations, leaders not only made sure their teams had the processes, resources, and technology they needed — they also built the right emotional conditions. These leaders offered a compelling rationale driving the transformation, and they ensured employees had the emotional support they needed to execute. This meant that when the going inevitably got tough, employees felt appropriately challenged and ultimately energized by the stress. By contrast, leaders of the unsuccessful transformations didn’t make the same emotional investment. When their teams hit the inevitable challenges, negative emotions spiked, and the team entered a downward spiral. Leaders lost faith and looked to distance themselves from the project, which led employees to do the same. The researchers identified six behaviors that consistently improved the odds of transformation success.

Disruption used to be an exceptional event that hit an unlucky few companies — think of the likes of Kodak, Polaroid, and BlackBerry. But in today’s complex and uncertain world, as we face challenges ranging from climate change to digitization, geopolitics to DEI, organizations must treat transformation as a core capability to master, as opposed to a one-off event.

At the same time, leaders must recognize that transformation is fraught with risk. In 1995, John Kotter found that 70% of organizational transformations fail, and nearly three decades later, not much has changed. Our own research, in which we spoke to more than 900 C-suite managers and more than 1,100 employees who had gone through a corporate transformation, showed similar results: 67% of leaders told us they had experienced at least one underperforming transformation in the last five years.

Considering that organizations will spend billions on transformation initiatives over the next year, a 70% failure rate equates to a significant erosion of value. So, what can leaders do to tilt the odds of success in their favor? To find out, we interviewed 30 leaders of transformations and surveyed more than 2,000 senior leaders and employees in 23 countries and 16 sectors. Half of our respondents had been involved in a successful transformation, while the other half had experienced an unsuccessful transformation.

To find out what separated the successes from the failures, we built a model to predict the likelihood that an organization will achieve its transformation KPIs based on the extent to which it exhibited 50 behaviors across 11 areas of the transformation. This model revealed that behaviors in six of these areas consistently improved the odds of transformation success. Organizations that are above average in these areas have a 73% chance of meeting or exceeding their transformation KPIs, compared to only a 28% chance for organizations that are below average. Our research suggests that any organization that can effectively implement these six levers will maximize its chances of success.

Our research also found that a key difference in successful transformations was that leaders embraced their employees’ emotional journey. Fifty-two percent of respondents involved in successful transformations said their organization provided the emotional support they needed during the transformation process “to a significant extent” (as opposed to 27% of respondents who were involved in unsuccessful transformations).


Transformations are extremely difficult on a personal level for everyone involved. In the successes we studied, leaders not only made sure their teams had the processes, resources, and technology they needed — they also built the right emotional conditions. These leaders offered a compelling rationale driving the transformation, and they ensured employees had the emotional support they needed to execute. This meant that when the going inevitably got tough, employees felt appropriately challenged and ultimately energized by the stress.

By contrast, leaders of the unsuccessful transformations didn’t make the same emotional investment. When their teams hit the inevitable challenges, negative emotions spiked, and the team entered a downward spiral. Leaders lost faith and looked to distance themselves from the project, which led employees to do the same.

The Six Key Levers of Transformations

According to our research, the six levers that maximize the chances of success are:

1. Leadership’s own willingness to change 

Many people believe that a leader’s job is to look outward and give others guidance, but our research suggests that to help their workforce navigate a transformation, leaders need to look inward first and examine their own relationship with change. “If you are not ready to change yourself, forget about changing your team and your organization,” as Dr. Patrick Liew, executive chairman at GEX Ventures, told us.

In our interviews, leaders spoke of working on their own development, including engaging more with their emotions and becoming accustomed to the discomfort that accompanies personal growth. Leaders needed to “look into a mirror,” as one told us, and realize that they were part of the problem before the shift to a positive trajectory could take place. They needed to remove their own fear before they could help their employees get through this change.

“As someone who was tasked to lead this [transformation], if I’m being honest with you, it was pretty unsettling at the start, because I think by nature most of us like to know the path we’re going on,” as one COO from the automotive industry told us. And a senior vice president in the global business services industry described needing to become more vulnerable and honest on their path to self-discovery: “I think I became even more aware of myself, who I am.”

2. A shared vision of success

Creating a unified vision of future success is another all-important foundation point of a transformation. In our research, 50% of respondents involved in successful transformations said the vision energized and inspired them to go the extra mile to a significant extent (compared to 29% of respondents in low-performing transformations).

Employees must understand the urgent need to disrupt the status quo. A compelling “why” can help them navigate the inevitable challenges that will arise during a transformation program. Many of the workers who took our survey said that they “wanted” and “needed” the vision to be communicated clearly. When leaders share a clear vision, the workforce is more likely to get on board. But if people don’t understand the vision or need for transformation, success is hard to achieve.

“It’s not about me telling people ‘This is what’s going to happen,’” as a managing director in the medical device industry told us. “It’s about me creating this shared sense of ownership…and then [coaching] my team on what they need to achieve. We very consciously want our teams to really buy into this is how we, as a collective, want to work.”

3. A culture of trust and psychological safety

Trust and care from leaders can make a difficult transformation more emotionally manageable. At the most basic human level, we all know what it feels like to be seen, listened to, and heard by another person. It can validate our effort, motivate us to work harder, and help assuage emotions like doubt, fear, anger, and sadness. Workers in our study shared that they wanted leaders who were patient and who also had, in the words of one employee, a “calm and teachable spirit.”

In a workplace with a high degree of psychological safety, employees feel confident that they can share their honest opinions and concerns without fear of retribution. When trust and psychological safety are missing, it’s difficult to persuade your workforce to make necessary changes. For example, one senior leader told us that employees at their company were extremely fearful of the transformation and didn’t feel that they could speak up about the problems they saw. Not surprisingly, the transformation did not go well.

4. A process that balances execution and exploration

Transformations obviously need disciplined project management to drive the program forward. But our research showed that leaders of successful transformations created processes that balanced the need to execute with giving employees the freedom to explore, express creativity, and let new ideas emerge. This empowers the workforce to identify solutions or opportunities that better meet the long-term goals of the transformation.

“Innovation requires the right people and processes,” said one respondent to our anonymous survey. “Both are critical to encourage collaboration and experimentation.”

We also found that creating space for small failures can ultimately lead to big success, whereas fear of any failure can lead to missed opportunities. Forty-eight percent of our respondents involved in successful transformations said the process was designed so that failed experimentation would not negatively impact their career or compensation to a significant extent. By contrast, only 29% of respondents in unsuccessful transformations said the same.

5. A recognition that technology carries its own emotional journey

The leaders in our study ranked technology as the biggest challenge they faced in their transformation efforts. There are a lot of emotions to manage when new systems or technology are introduced, from stress over how it works to fear about whether it will cause job loss or slow down the system.

In the underperforming transformations we studied, we saw the narrative shift away from the vision to focus on the technology itself. In the successful transformations, by contrast, leaders ensured that technology was seen as the means to achieve the strategic vision. Furthermore, they prioritized quick implementations of new technology — focusing on a minimum viable product rather than perfect implementation. Lastly, they invested resources in skill development to ensure the workforce was ready to create value using the new technology.

“There were kickoff sessions with our senior managers to bring them in at the beginning of the process,” a vice president of a company in the media/advertising industry explained. “These sessions aimed to show them that what was being built was something that they had helped design, rather than something that was presented to them as a fait accompli…This minimized the numbers of active detractors.”

6. A shared sense of ownership over the outcome

In the successful transformations we studied, leaders and employees worked together to co-create an environment where everyone felt a shared sense of ownership over the transformation vision and outcome.

A prime example of this is many companies’ rapid shift to virtual and remote working during the pandemic. Because of the speed and urgency of the change, leaders needed to collaborate closely with the workforce to create new ways of working and be much more responsive to their views on what was or wasn’t going well. This mass co-creation helped build a sense of pride and shared ownership across both leadership and the workforce.

“In a transformation, things pop up all the time,” as Christiane Wijsen, head of corporate strategy at Boehringer Ingelheim, told us. “When you have a movement around you, supporters will buffer it and tweak it each time. When you don’t have this movement, then you’re alone.”

. . .

To conclude, it’s worth reiterating that all transformations are tough. Even during successful programs, there will come a time where people start to feel stressed. The skill at this difficult stage is being able to energize your workforce and turn that heightened pressure into something productive, as opposed to letting the transformation spiral downward into pessimism and underperformance.

What we saw throughout our research is that leaders who are truly working with their employees are much more successful. They acknowledge and manage emotions, rather than pushing them aside or ignoring them. The best leaders create vision across the organization and a safe environment to work together and listen to each other.

“You’ve got to be very, very respectful of people at a working level,” as Thomas Sebastian, CEO of London Market Joint Venture at DXC Technology, told us. “You’ve got to understand the emotional side and consider a completely different perspective, such as how is this transformation going to make their life easier.”

Success begets success. Once a workforce has undergone a successful transformation, they will be ready to go again. And given the pace of change in the world, organizations have got to be ready to go again.

  • Andrew White is a senior fellow in management practice at Saïd Business School, University of Oxford, where he directs the advanced management and leadership program and conducts research into leadership and transformation. He is also a coach for CEOs and their senior teams.
  • Michael Wheelock leads a primary research and advanced analytics team in EY Knowledge. His team designs and delivers global, mixed methods research programs to support EY’s flagship thought leadership.
  • Adam Canwell is head of EY’s global leadership consulting practice. Adam has published extensively on leadership and strategic change. He has sold and delivered transformation programs across multiple industries in both the UK and Australia, working with FTSE 100 (or their equivalent) organizations.
  • Michael Smets is a professor of management at Saïd Business School, University of Oxford. His work focuses on leadership, transformation, and institutional change.

Article link: https://hbr.org/2023/05/6-key-levers-of-a-successful-organizational-transformation?

A Radical Plan to Make AI Good, Not Evil – Wired

Posted by timmreardon on 05/10/2023
Posted in: Uncategorized.

OpenAI competitor Anthropic says its Claude chatbot has a built-in “constitution” that can instill ethical principles and keep systems from going rogue.

IT’S EASY TO freak out about more advanced artificial intelligence—and much more difficult to know what to do about it. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan. 

Anthropic is working on AI models similar to the one used to power OpenAI’s ChatGPT. But the startup announced today that its own chatbot, Claude, has a set of ethical principles built in that define what it should consider right and wrong, which Anthropic calls the bot’s “constitution.” 

Jared Kaplan, a cofounder of Anthropic, says the design feature shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. “We’re very concerned, but we also try to remain pragmatic,” he says. 

Anthropic’s approach doesn’t instill an AI with hard rules it cannot break. But Kaplan says it is a more effective way to make a system like a chatbot less likely to produce toxic or unwanted output. He also says it is a small but meaningful step toward building smarter AI programs that are less likely to turn against their creators.

The notion of rogue AI systems is best known from science fiction, but a growing number of experts, including Geoffrey Hinton, a pioneer of machine learning, have argued that we need to start thinking now about how to ensure increasingly clever algorithms do not also become increasingly dangerous. 

The principles that Anthropic has given Claude consist of guidelines drawn from the United Nations Universal Declaration of Human Rights and suggested by other AI companies, including Google DeepMind. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.

The constitution includes rules for the chatbot, including “choose the response that most supports and encourages freedom, equality, and a sense of brotherhood”; “choose the response that is most supportive and encouraging of life, liberty, and personal security”; and “choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion.”

Anthropic’s approach comes just as startling progress in AI delivers impressively fluent chatbots with significant flaws. ChatGPT and systems like it generate impressive answers that reflect more rapid progress than expected. But these chatbots also frequently fabricate information, and can replicate toxic language from the billions of words used to create them, many of which are scraped from the internet.

One trick that made OpenAI’s ChatGPT better at answering questions, and which has been adopted by others, involves having humans grade the quality of a language model’s responses. That data can be used to tune the model to provide answers that feel more satisfying, in a process known as “reinforcement learning with human feedback” (RLHF). But although the technique helps make ChatGPT and other systems more predictable, it requires humans to go through thousands of toxic or unsuitable responses. It also functions indirectly, without providing a way to specify the exact values a system should reflect.
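Schematically, the human-grading step trains a "reward model" on pairwise preferences. The toy sketch below is an assumption-laden illustration, not OpenAI's implementation: a single word-count feature stands in for a real model's parameters, but the Bradley-Terry update that nudges the reward model toward whichever response the human preferred is the core idea.

```python
import math

def feature(response):
    # Hypothetical one-dimensional feature standing in for a real model.
    return len(response.split())

w = 0.0  # the reward model's single learnable parameter

def reward(response):
    return w * feature(response)

def update(preferred, rejected, lr=0.01):
    """One gradient-ascent step on the likelihood of the human's choice."""
    global w
    # Bradley-Terry model: P(preferred beats rejected) = sigmoid(r_p - r_r).
    p = 1 / (1 + math.exp(-(reward(preferred) - reward(rejected))))
    w += lr * (1 - p) * (feature(preferred) - feature(rejected))

# A human graded the longer answer as better than the short one.
update("a longer, more detailed answer here", "short")
```

After the update, `w` is positive, so the reward model now scores longer answers higher; accumulating thousands of such human-graded pairs is exactly what makes the process labor-intensive.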

Anthropic’s new constitutional approach operates in two phases. In the first, the model is given a set of principles and examples of answers that do and do not adhere to them. In the second, another AI model is used to generate more responses that adhere to the constitution, and these are used to train the model instead of human feedback.

“The model trains itself by basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic,” Kaplan says.
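The first phase — critique a draft against each principle, then revise it — can be sketched as a loop. Everything below is a stand-in (simple string rules instead of real model calls, with made-up names), intended only to show the control flow that replaces human labelers:

```python
CONSTITUTION = [
    "Avoid content that is offensive or insensitive.",
    "Choose the response most supportive of freedom and equality.",
]

def generate(prompt):
    # Stand-in for the model's first draft.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for asking the model "does this response violate the principle?"
    return "offensive" in response.lower()

def revise(response, principle):
    # Stand-in for asking the model to rewrite the response to comply.
    return response.replace("offensive", "respectful")

def constitutional_pass(prompt):
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response

# Phase two then fine-tunes on these self-revised pairs instead of human labels.
training_pairs = [(p, constitutional_pass(p))
                  for p in ["an offensive joke", "a friendly greeting"]]
```

The point of the structure: the constitution is explicit data that outsiders can inspect, whereas RLHF's preferences are buried in thousands of individual human judgments.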

“It’s a great idea that seemingly led to a good empirical result for Anthropic,” says Yejin Choi, a professor at the University of Washington who led a previous experiment that involved a large language model giving ethical advice. 

Choi says that the approach will work only for companies with large models and plenty of compute power. She adds that it is also important to explore other approaches, including greater transparency around training data and the values that models are given. “We desperately need to involve people in the broader community to develop such constitutions or datasets of norms and values,” she says.

Thomas Dietterich, a professor at Oregon State University who is researching ways of making AI more robust, says Anthropic’s approach looks like a step in the right direction. “They can scale feedback-based training much more cheaply and without requiring people—data labelers—to expose themselves to thousands of hours of toxic material,” he says.

Dietterich adds it is especially important that the rules Claude adheres to can be inspected by those working on the system as well as outsiders, unlike the instructions that humans give a model through RLHF. But he says that the method does not completely eradicate errant behavior. Anthropic’s model is less likely to come out with toxic or morally problematic answers, but it is not perfect.

The idea of giving AI a set of rules to follow might seem familiar, having been put forward by Isaac Asimov in a series of science fiction stories that proposed Three Laws of Robotics. Asimov’s stories typically centered on the fact that the real world often presented situations that created a conflict between individual rules.

Kaplan of Anthropic says that modern AI is actually quite good at handling this kind of ambiguity. “The strange thing about contemporary AI with deep learning is that it’s kind of the opposite of the sort of 1950s picture of robots, where these systems are, in some ways, very good at intuition and free association,” he says. “If anything, they’re weaker on rigid reasoning.”

Anthropic says other companies and organizations will be able to give language models a constitution based on a research paper that outlines its approach. The company says it plans to build on the method with the goal of ensuring that even as AI gets smarter, it does not go rogue.

Updated 5-9-2023, 3:20 pm EDT: Thomas Dietterich is at Oregon State University, not the University of Oregon.

Article link: https://www.wired.com/story/anthropic-ai-chatbots-ethics/?

White House and GSA launch platforms to improve equity in federal procurement – Fedscoop

Posted by timmreardon on 05/09/2023
Posted in: Uncategorized.

The tools, which launched earlier this spring, are intended to help agencies find businesses that are new to the federal marketplace and track equity goals.

By Nihal Krishan, May 8, 2023

The White House and General Services Administration on Monday announced two platforms for federal agencies to improve equity in procurement through a new government-wide procurement equity tool and a supplier base dashboard. 

The tools, which launched earlier this spring, are intended to help agencies find businesses that are new to the federal marketplace, identify qualified vendors, and track agency progress toward equity in procurement goals.

They are intended to help meet the Biden administration’s goal of directing 15% of federal contract spending to small disadvantaged businesses by 2025; the Office of Management and Budget (OMB) has also set a target for 12% of contracting dollars in fiscal 2023 to go to small disadvantaged businesses.

“These two tools are going to help agencies make more connections with the diverse array of businesses offering their products in the federal marketplace,” said GSA Administrator Robin Carnahan. “By providing our federal partners with more information when they make procurement decisions, we’re better able to set ourselves up to achieve our contracting goals and create more equity in the marketplace for everyone.”

The tools, some of which will require government accounts, will support equity goals by improving access to procurement opportunities for Small Disadvantaged Businesses (SDBs), Women-Owned Small Businesses (WOSBs), Service-Disabled Veteran-Owned Small Businesses (SDVOSBs), and Historically Underutilized Business Zone (HUBZone) Small Businesses.

“We’re committed to helping the acquisition workforce strengthen stewardship and efficiency in the federal procurement process while simultaneously advancing equity,” said OMB’s Associate Administrator of the Office of Federal Procurement Policy Mathew Blum. “We can maximize the power of procurement as a catalyst to help address our nation’s top priorities.”

The Government-wide Procurement Equity Tool uses dynamic data from SAM.gov and the Federal Procurement Data System to support market research that focuses on SDBs.

The Supplier Base Dashboard tracks the total number of entities that have done business with an agency; their size and socio-economic status; and the number of new, recent, and established vendors in the supplier base and in market categories and subcategories of interest.

The new procurement tools help implement executive orders signed by President Biden on his first day in office, directing the federal government to use its power and dollars to advance racial equity and support underserved minorities.

Article link: https://fedscoop.com/white-house-and-gsa-launch-platforms-to-improve-equity-in-federal-procurement/?

Employees Are Losing Patience with Change Initiatives – HBR

Posted by timmreardon on 05/09/2023
Posted in: Uncategorized.
  • Cian O Morain
  • Peter Aykens

May 09, 2023

Illustration by Andrei Cojocaru

Summary.   In 2022, the average employee experienced 10 planned enterprise changes — such as a restructure to achieve efficiencies, a culture transformation to unlock new ways of working, or the replacement of a legacy tech system — up from two in 2016. While more change is coming, the workforce has hit a wall: A Gartner survey revealed that employees’ willingness to support enterprise change collapsed to just 43% in 2022, compared to 74% in 2016. Navigating the pandemic asked a lot of employees — and while they delivered, it came at a cost. Relentless sprinting means many employees are running on fumes. To create more sustainable change efforts, leaders must prioritize change initiatives, showing employees where to invest their energies. They also must manage change fatigue by building in periods of proactive rest, involving employees in change plans, and challenging managers to help build team resilience.

Business transformation will remain at the forefront in 2023, as organizations continue to refine hybrid ways of working and respond to the urgent need to digitalize, while also contending with inflation, a continuing talent shortage, and supply-chain constraints. These circumstances, which require higher levels of productivity and performance, also mean a lot of change: In 2022, the average employee experienced 10 planned enterprise changes — such as a restructure to achieve efficiencies, a culture transformation to unlock new ways of working, or the replacement of a legacy tech system — up from two in 2016, according to Gartner research.

While more change is coming, the workforce has hit a wall: A Gartner survey revealed that employees’ willingness to support enterprise change collapsed to just 43% in 2022, compared to 74% in 2016.

We call the gap between the required change effort and employee change willingness the “transformation deficit.” Unless functional leaders steer swiftly and expertly, the transformation deficit will stymie organizations’ ambitions and undermine the employee experience, fueling decreased engagement and increased attrition.

The irony is that many of the goals of transformation — redesigning teams and structures, automating drudge activities, reengineering corporate culture — seek to ease burnout and fatigue and increase efficiency. Unfortunately, many leaders are approaching change management by applying short-term fixes, which is unsustainable.

The Big Pitfall: Moving Too Fast

The most common mistake when it comes to change management today is trying to build momentum for transformation by hitting the accelerator. A 2022 Gartner survey found that 75% of organizations are adopting a top-down approach to change, where leaders set the change strategy, create detailed implementation roadmaps, and deploy a high volume of change communications. Their goal is for workers to buy into the new path and for managers to lead the charge as champions and role models for their teams.


Unfortunately, navigating the pandemic asked a lot of employees — and while they delivered, it came at a cost. Relentless sprinting means many employees are running on fumes. Gartner research reveals the following:

  • Fifty-five percent of employees took a significant hit to their own health, their team relationships, and their work environment to sustain high performance through the disruption.
  • Only 36% of employees reported high trust in their organizations, with onsite workers reporting the least trust.
  • Half of employees reported struggling to find the information or people they needed to do their job on an ever-increasing volume of tasks.

Toward a More Sustainable Transformation

To get the most out of the change energy in your organization, Gartner analysis finds that leaders need to focus on two elements: prioritized change and managing fatigue.

Prioritized Change

Prioritized change means leaders show employees where to invest their energy by communicating their backlog of priorities, including change initiatives. Without such guidance, employees are likely to give 110% for each change, resulting in a blowout.

Many leadership teams already rank the most important organizational projects and initiatives, but that knowledge often isn’t shared beyond leadership team discussions. Communicating this more broadly can help teams more effectively manage their energy and efforts.

For example, IT leaders at The Cooperators, a Canadian insurance company, publish their priority progress list to all employees every month. The visibility helps employees understand the mechanics of the business, informing real and important judgments about where they should focus their attention.

Leaders must step back and consider the employee experience when determining the optimal speed for implementing change initiatives. For example, IT leaders at Sky Cable, a Filipino telecom, created guidelines for minimizing fatigue arising from a constant flow of technology changes. Their guidance includes “Design solutions to be visually like old solutions,” and “During periods of high change, minimize process changes that disrupt employee work.” They create a release calendar synced with the change efforts outside their own department. As a result, IT leaders can spot the best times to deploy new improvements.

Prioritized change can help leaders identify any changes that should be scrapped altogether. If a change is always at the bottom of your backlog and you continually delay it, it’s probably not critical.

Managing Change Fatigue

While organizational change management (OCM) is table stakes, fatigue management is a new change management muscle that executives must build. Three actions can prevent the risk of change fatigue: 

1. Build in periods of proactive rest to sustain change energy.

Remote and hybrid working has collapsed the distinction between work and life. In 2022, “workers [were] still effectively giving away the equivalent of more than a working day (8.5 hours) of unpaid overtime each week: less than in 2021 but still more than pre-pandemic,” according to payroll company ADP. But Gartner analysis shows more time working does not result in higher performance.

Rest does increase performance — if it’s proactive. Organizations must rethink how they approach rest, embedding it into the workflow to prevent burnout. Proactive rest should have three characteristics:

  • Available: There is a robust set of options for employees to use to rest and stay charged. These could include no-meeting days, defined working hours, planned “down time” within projects, or all-company days off.
  • Accessible: Employees are encouraged to take advantage of the available tools and resources and to rest guilt-free.
  • Appropriate: Those rest tools meet the individual needs of employees.

According to Gartner research, rest that is available, accessible, and appropriate contributes to a 26% increase in employee performance and a tenfold reduction in the number of employees experiencing burnout.

2. Move away from a top-down approach and open source your change plans.

Open-source change management embraces employees as active participants in change planning and implementation. It requires three shifts in thinking:

  • Involve employees in decision-making. This isn’t about allowing employees to vote on every change; it means finding ways to infuse the voice of those most impacted into your planning. Gartner research has found that this step alone can increase your change success by 15%. It makes change management a meritocracy, where you increase the odds that the best ideas and inputs are included in decision-making.
  • Shift implementation planning to employees. Leaders often don’t have enough visibility into the daily workflows of their teams to dictate a successful change approach. And leaving the workforce out of change implementation can increase resistance and failure. Gartner research has found that when employees own implementation planning, change success increases by 24%.
  • Engage in two-way conversations throughout the change process. Instead of focusing on how you’ll sell the change to employees, think of communications as a way to surface employee reactions. Holding regular, honest conversations about the change will allow employees to share their questions and opinions, which will drive understanding and make them feel like they’re part of the commitment to change. Gartner research has found that this step can increase change success by 32%.

3. Reimagine the role of managers in change.

Many managers are struggling to balance the needs of their leaders with the expectations of their employees. On top of this, managers are overwhelmed with change, too, making it difficult for them to effectively role model all the changes. Only 57% of managers report having enough capacity in their day-to-day work to support their teams through change.

Instead of asking managers to champion each and every change, leaders should instead challenge their managers to act as resilience builders. Managers who build their teams’ ability to self-navigate through change can increase employee sustainable performance by 29% and protect their own performance at the same time.

These managers know that they don’t always have the time or skills to demonstrate what change looks like. Instead, they ensure their teams learn by doing. They identify their employees’ strengths and motivations, and they connect them to colleagues with the relevant experience that they can best learn from.

Taken together, the strategies of prioritized change and fatigue management will advance the fuel economy of your 2023 transformation efforts, reducing drag and building momentum from employee energy.

  • Cian O Morain is a director of research in the Gartner HR practice. He focuses on organization design and change management. Cian leads research initiatives that explore how organizations can sustain employee engagement and productivity through change, and the design of workflows, structures and networks that enable employers to minimize work friction and maximize workforce agility and productivity.
  • Peter Aykens is a distinguished vice president and chief of research for the Gartner HR practice. He is responsible for setting the practice’s research agenda and strategy to address the mission critical priorities of HR leaders, including leadership, talent management, recruiting, diversity, equity and inclusion (DEI), total rewards, learning and development, and HR tech.

Article link: https://hbr.org/2023/05/employees-are-losing-patience-with-change-initiatives?

IBM’s generative AI strike force

Posted by timmreardon on 05/09/2023
Posted in: Uncategorized.

Tech giant IBM is launching a counterstrike in the industry’s suddenly hot AI fight with today’s announcement of Watsonx, Axios’ Ryan Heath reports.

The big picture: Business-focused IBM claims its latest AI offering, set to launch in July, provides more accurate answers and takes a more responsible approach than rivals.

  • Microsoft, OpenAI and Google are rushing to lock down potentially massive new consumer markets for generative AI.
  • IBM is instead leaning into helping other companies implement their AI via a “data model factory” that offers IBM clients products tuned for their specialties in domains like language, code, chemistry and geospatial data.
  • Watsonx, in partnership with startup Hugging Face, incorporates open-source models; uses narrower, carefully culled datasets; and provides a “toolkit for governance.”

IBM’s top execs threw shade on rivals at a Monday Watsonx preview.

  • Dario Gil, IBM’s head of research, said systems like ChatGPT are “not ready for primetime” thanks to “all sorts of random and made-up facts.”
  • CEO Arvind Krishna, who wasn’t at last week’s White House AI meeting, seemed pleased to be out of the firing line. He told Axios it gave him more time to court clients “who care a lot about accuracy.”

Between the lines: IBM knows about failed AI hype. It made headlines when its original Watson won Jeopardy! in 2011, but after that the company’s revenue declined for 10 consecutive years — leaving Watsonx with a lot to prove.

Krishna addressed a range of AI topics, including…

  • America’s AI regulation debate: “The conversation has been lagging.” Krishna claimed credit for IBM helping to draft the EU’s upcoming rules, which focus on regulating high-risk uses of AI.
  • New work category: “AI ops” covering activities like coding assistance and supply chain management.
  • Humans aren’t replaceable: “The systems still have years to go,” when it comes to “trying to replace a human being in their completeness.”
  • There’s no explainable AI: “Anybody who claims that a large AI model is explainable is not being completely truthful. They are not explainable in the sense of reasoning and logic.” But AI can transparently show its source data, and third parties can measure whether its answers show “bias with respect to gender, or age or ZIP code.”
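Krishna’s point that third parties can measure bias in a model’s answers can be made concrete. The sketch below is illustrative only (it is not IBM’s tooling): it computes per-group favorable-answer rates over hypothetical model outputs and a disparate-impact ratio, a common fairness screen in which a ratio well below 1.0 flags a disparity.

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of favorable model answers per demographic group.

    `records` is a list of (group, favorable) tuples, where `favorable`
    is True if the model's answer was positive for that person.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, fav in records:
        totals[group] += 1
        if fav:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by the highest; 1.0 means parity.
    A common rule of thumb flags ratios below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: age bands and whether the answer was favorable.
records = [
    ("30-39", True), ("30-39", True), ("30-39", False), ("30-39", True),
    ("60-69", True), ("60-69", False), ("60-69", False), ("60-69", False),
]
rates = selection_rates(records)
print(rates)                          # {'30-39': 0.75, '60-69': 0.25}
print(disparate_impact_ratio(rates))  # 0.25 / 0.75, well below 0.8
```

The same pattern applies to any attribute visible in the audit data (gender, age band, ZIP code), which is exactly the kind of external measurement Krishna describes.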

What they’re saying: “AI may not replace managers, but managers that use AI will replace the managers that do not,” per Rob Thomas, IBM’s chief commercial officer.

  • Yes, but: Krishna’s already on-record saying IBM will pause hiring for back-office functions that AI can handle, and that includes managers.

The other side: IBM isn’t the only AI provider to claim the mantle of responsibility nor the only one targeting businesses, rather than consumers.

  • OpenAI has faced criticism for pushing GPT out to the world, but the public has embraced it. Meanwhile, even as it sets a fast industry pace, OpenAI — like many competitors — also touts its ethical scruples, pointing to its pre-deployment risk analysis and publicly grappling with the challenges of reducing harms.
  • Google took heat first for being too cautious in withholding the fruits of its AI research — then for an about-face that has seen it scrambling to ship generative-AI products.

Disclosure: Ryan’s spouse is an IBM employee.

Article link: https://www.axios.com/newsletters/axios-login-0856b2fe-6819-4fb1-a948-c867d6d9764d.html?chunk=0&utm_term=emshare#story0

The Indian Health Service Health Information Technology Modernization Program – IHS

Posted by timmreardon on 05/09/2023
Posted in: Uncategorized.

Welcome to the Indian Health Service (IHS) Health Information Technology (IT) Modernization Program website. The purpose of these pages is to provide timely information about the agency’s plans and progress to modernize our health IT systems. 

All IHS health care facilities, and many facilities operated by tribes and urban Indian organizations, currently use the Resource and Patient Management System (RPMS). RPMS is a comprehensive suite of applications that supports everything from patient registration to insurance billing, and includes the patient’s Electronic Health Record (EHR). 

RPMS has been in use, and continuously developed by IHS, for nearly 40 years. The technology underlying RPMS is outdated, and it is very challenging for each organizational unit to maintain its own RPMS database. Fortunately, health information technology has come a long way in the past 40 years, and the IHS is in the process of modernizing how these critical systems are acquired and managed in support of health care.

In recent years the IHS studied the best health IT options to deliver consistent, integrated, high-quality care. This analysis included consultation with tribes and conferring with urban Indian organizations. In 2021, the IHS published a decision [PDF] that announced the plan to replace RPMS with commercially available solutions, including a new EHR. That decision began the work now called the IHS Health IT Modernization Program. The Program, in collaboration with tribes and UIOs, will buy, build, deliver, and maintain a new enterprise EHR solution that will:

  • Minimize software development and technical support burden, both at IHS Headquarters and for facilities across the country
  • Focus on system optimization and usability for end-users
  • Promote standardization and best practices nationwide
  • Liberate data so it is accessible across the enterprise by clinicians, patients, and partners alike to improve patient outcomes

We are just in the early stages of this exciting, transformational initiative. Please check the links on the left to learn more. You can also sign up to receive updates through the Health IT Modernization listserv at this link.

Article link: https://www.ihs.gov/hit/

NASA Still Lacks a Unified Definition of AI, Watchdog Finds – Nextgov

Posted by timmreardon on 05/08/2023
Posted in: Uncategorized.

By Kirsten Errick, May 8, 2023, 01:50 PM ET

The agency has made progress on artificial intelligence management, but still has work to do to meet governmentwide requirements.

NASA’s Office of Inspector General (OIG) found that while the agency has made progress in its artificial intelligence management, more work still needs to be done, according to a report released on Wednesday. 

NASA has used AI across a wide variety of agency programs, such as storm prediction tools, the Mars Perseverance rover and elements of the Artemis missions. Given the breadth of these use cases, the OIG noted that regulating AI and managing its cybersecurity risks and threats is critical. The watchdog examined NASA’s progress in developing AI governance and standards to help assess its cybersecurity controls for AI data and technology. The OIG found that, despite some effort to manage the agency’s AI, NASA fell short by not having a single definition of AI. The report also noted that the agency’s AI classification and tracking are insufficient to fully address current and future federal requirements and AI cybersecurity concerns. 

Specifically, the shortcomings could impact NASA’s ability to manage its AI and adhere to several executive orders. Moreover, the lack of a central, standardized process could put the agency at an increased risk for cybersecurity threats.

The watchdog noted that NASA has made efforts to improve its AI management. For example, in April 2021 the agency established the NASA Framework for the Ethical Use of Artificial Intelligence, drawing on principles from leading AI organizations to provide guidance for ethical AI decisions, initial guidance for the agency, and advice and questions for AI practitioners to consider.

In September 2022, NASA also developed the NASA Responsible AI Plan, which identified NASA’s Responsible AI officials and detailed how the agency would implement requirements of a 2020 executive order on trustworthy AI. Specifically, this includes capturing and reporting use case inventories, creating oversight of AI projects to ensure continuous monitoring efforts and engaging the AI community on NASA’s ethical AI standards and implementation.

However, the OIG found that, despite these planning documents, NASA has not adopted a standard AI definition. The agency is using three definitions in different overarching documents: the NASA Framework for the Ethical Use of Artificial Intelligence; NASA’s Responsible AI Plan, which uses the executive order’s definition; and NASA’s internal AI/machine learning SharePoint collaboration website. 

“While all three definitions are similar, subtleties and nuances in each can alter whether a particular technology is properly considered AI,” the report stated.

The OIG added that NASA personnel reported AI based on their own understanding of it as opposed to relying on these definitions. 

This lack of a standard definition means NASA does not have a way to “accurately classify and track AI or to identify AI expenditures within the agency’s financial system, making it difficult for the agency to meet federal requirements to monitor its use of AI,” according to the report.

The OIG also found that NASA’s AI is often managed as part of a larger project instead of on its own, which means it is not separately tracked. This has affected the agency’s response to the 2020 trustworthy AI executive order—which called for agencies to create an AI inventory—as well as its response to a 2019 executive order on maintaining U.S. AI leadership—which called for the gathering of an estimated annual budget for AI expenditures. In order to create the inventory and budget, NASA utilizes a multi-faceted data call to collect individual responses from AI users—something the OIG noted “takes significant time to compile, validate and vet, and runs the risk of clerical errors that could be significantly lessened using an automated process.”
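The automated roll-up the OIG points toward could be sketched as follows. The record format and field names here are hypothetical illustrations, not NASA’s actual inventory schema; the point is that once use cases are captured in one structured format, the inventory counts and budget totals the executive orders ask for fall out of a simple aggregation rather than a manual data call.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Hypothetical record format; NASA's real inventory schema is not public here.
    name: str
    program: str
    definition_source: str    # which AI definition the reporter applied
    annual_budget_usd: float

def summarize(inventory):
    """Roll up the inventory into the figures federal reporting asks for:
    total AI spend and a count of use cases per program."""
    total = sum(u.annual_budget_usd for u in inventory)
    by_program = {}
    for u in inventory:
        by_program[u.program] = by_program.get(u.program, 0) + 1
    return total, by_program

# Invented example entries for illustration only.
inventory = [
    AIUseCase("Storm prediction", "Earth Science", "EO 13960", 1_200_000),
    AIUseCase("Terrain navigation", "Mars Perseverance", "NASA Framework", 3_500_000),
]
total, by_program = summarize(inventory)
print(total)        # 4700000
print(by_program)   # {'Earth Science': 1, 'Mars Perseverance': 1}
```

Tracking which definition each reporter applied (the `definition_source` field) would also surface exactly the inconsistency the OIG flagged.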

Furthermore, while NASA officials believe the agency’s processes—such as monitoring requirements and making sure it safeguards AI systems from cyber threats—should be sufficient to address AI security concerns, previous OIG audits have revealed that NASA’s fragmented IT management puts it at an increased risk from cyber threats. According to the OIG, NASA also faces more challenges to implement potential future federal AI cybersecurity controls because of the lack of an AI-specific mechanism or way to appropriately categorize and classify AI within its record systems.

The OIG recommended NASA establish a standard definition for AI that “harmonizes” the three existing definitions; make sure the standard definition is used to identify, update and maintain NASA’s AI use case inventory; identify a classification system to help quickly apply federal cybersecurity control and monitoring practice requirements; and create a way to track budgets and expenditures for AI use case inventory.

NASA agreed or partially agreed with the watchdog’s recommendations and outlined how it would address them.

Article link: https://www.nextgov.com/emerging-tech/2023/05/nasa-still-lacks-unified-definition-ai-watchdog-finds/386072/

Want More Pentagon Innovation? Try This Experiment

Posted by timmreardon on 05/08/2023
Posted in: Uncategorized.
A former Defense Secretary and Air Force Secretary propose more budgetary flexibility and a new approach to program management.

MARK ESPER and DEBORAH LEE JAMES | 

MAY 8, 2023 05:00 AM ET

Speed, agility, and a tolerance for failure are required to enable greater defense innovation adoption for the United States to be successful in this era of great power competition. With Russia’s brazen invasion of Ukraine and China’s increasingly hostile actions in the Indo-Pacific, both bending the international rules-based order to fit their malign interests, confronting innovation challenges in defense will require a fresh approach. 

A key element of America’s strategy must be to counter our adversaries’ rapid acquisition and exploitation of leading technologies such as artificial intelligence, robotics, autonomy, and directed energy. To make a major step change when it comes to defense modernization, the Defense Department needs greater flexibility, tools, and a sustained political push from Capitol Hill to use them.

We propose an experiment in budget flexibility and a rethinking of Pentagon program management deep enough to transform the decades-old defense acquisition process. One approach would be to pick a handful of outstanding program executive officers, assign each a capability, and allow them to pursue it with a portfolio of efforts that, unlike today’s programs, can be modified with agility. Enable them to foster an industrial base that can furnish competition and secure supply chains and, if the new portfolio model proves successful, scale the approach across the services.

These are among the recommendations of the Commission for Defense Innovation Adoption, an Atlantic Council project that we are co-chairing. As former Pentagon leaders with experience working in Congress and industry, we assembled an array of former senior government leaders, high-tech industry executives, and national-security-focused investors who are passionate about tackling this critical challenge. We debated the issues and solutions and engaged more than 70 stakeholders from the White House, Congress, the Pentagon and the private sector. The immediate result is an interim report, unanimously supported by the Commissioners, that lays out ten recommendations to foster the rapid adoption of innovative technology by the Pentagon. 

Two of our leading recommendations focus on shifting acquisition management from individual program management to a capability-portfolio model while also giving defense leaders more budget flexibility.

Today, the Pentagon acquires capabilities through more than 1,000 acquisition programs, each with their own requirements, budgets, and contracts. This drives long timelines, stove-piped solutions, and gross inefficiencies. Yet the operational environment today requires the U.S. military to adopt new technologies far more rapidly while dumping ones whose utility has been surpassed. This “need for speed” will only grow as our weapons and equipment become more software-centric. 

We recommend that the Pentagon select five program executive officers, or PEOs, to operate a capability-portfolio model. This would more quickly move emerging technologies across the infamous Valley of Death. These PEOs could work with Congress to consolidate the smallest 20 percent of their budget accounts to enable greater funding flexibility within the portfolio to optimize investments. These PEOs could develop acquisition, contracting, and technical strategies for robust industry competition to produce capabilities that use common platforms and enterprise services to integrate solutions from many companies into a modular open-systems approach. 

Another key set of recommendations from the Commission focuses on providing the Pentagon with additional budget flexibility in the year of execution. The sixty-year-old planning, programming, budgeting, and execution, or PPBE, system imposes long timelines and tight constraints. This cumbersome process requires developing budgets with two- to three-year lead times across 1,700 budget accounts. But in an environment with rapidly changing operations, evolving threats, new technologies, and fresh risks and opportunities, it is impossible to effectively budget or quickly pivot to meet tomorrow’s challenges at two-year intervals. 

Congress is a vital partner for the Pentagon to enable greater speed and agility in acquisitions and technology development. The Commission recommends that the Pentagon and Congressional appropriators negotiate to consolidate hundreds of the smallest budget accounts, over 700 of which are under $20 million. Congress should change how it deals with Pentagon requests to move money to or from these smaller accounts. Currently, such moves must win approval from four committees. Instead, a request should start a countdown: if no lawmaker objects within 30 days, DOD should be able to move ahead. Lawmakers could also make it easier for the Pentagon to launch new efforts by raising the funding threshold that requires Congressional approval. Currently, such approval is required for new starts that will cost up to $20 million “for the entire effort.” Changing this language to “for the fiscal year” would allow PEOs to move ahead, while allowing lawmakers to veto efforts they disagree with. 

As the defense industry, government labs, and new commercial industry partners identify emerging technology solutions, we cannot afford to wait for funding to build out critical defense capabilities. We urge Congress and the Pentagon to act on these recommendations to accelerate innovation adoption by the Defense Department and put the U.S. in a far better position to face off against China and Russia in the years to come.

Dr. Mark Esper was the 27th Secretary of Defense. Deborah Lee James was the 23rd Secretary of the Air Force. They are board directors of the Atlantic Council and co-chairs of its Commission on Defense Innovation Adoption.

Article link: https://www.defenseone.com/ideas/2023/05/want-more-pentagon-innovation-try-experiment/386060/

DoD’s electronic health records system changing way military does business

Posted by timmreardon on 05/08/2023
Posted in: Uncategorized.

Holly Joers, the program executive officer for the Program Executive Office, Defense Healthcare Management Systems (PEO-DHMS), said DoD’s Cerner-based EHR is now live at 75% of DoD’s clinics and hospitals, with 160,000 users and 6.1 million beneficiaries in the system so far.

Joers said DoD’s experience has been that the deployment process works much, much better once it’s moved beyond the first few sites. After that, a lot of lessons have been learned, and the institution can start to converge around change management and IT deployment practices that make sense for the whole enterprise.

“I can’t comment specifically on VA, but when I look at where they are now, I’m taken back to where DoD was in the 2017-2018 timeframe,” she said. “There were challenges with the network, and so we made rules about what infrastructure had to be in place before a go-live, and how long it needed to be stable before we went live. We looked at our governance and management process to hear different inputs. When you’re only dealing with four sites, everyone wants to make it work for what their workflow was before. So you really have to have the fortitude to look at making an enterprise standard, knowing that it might not match what they’re currently doing today. And we had to go through those growing pains.”

In the interview Joers also discussed:

  • How PEO-DHMS is avoiding technical debt as it continues to develop MHS Genesis
  • The integration of anonymized data from Genesis with other “secondary” data sources – including, for instance, Census data – to make it easier to answer bigger public health questions
  • Workflow and process standardization and modernization
  • How they are using feedback from users and patients to improve MHS Genesis
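As a minimal illustration of the secondary-data point above: linking anonymized encounter counts to Census population denominators by a shared geographic key turns raw counts into comparable rates. The ZIP codes, counts, and key names below are invented for illustration, not drawn from MHS Genesis.

```python
# Anonymized visit counts per ZIP code (hypothetical figures).
encounters = {"23510": 140, "23511": 35}

# Census population per ZIP code (hypothetical figures).
census_pop = {"23510": 14000, "23511": 7000}

# Join on the geographic key and express encounters per 1,000 residents,
# skipping any ZIP without a Census denominator.
rates_per_1000 = {
    zip_code: 1000 * count / census_pop[zip_code]
    for zip_code, count in encounters.items()
    if zip_code in census_pop
}
print(rates_per_1000)   # {'23510': 10.0, '23511': 5.0}
```

Normalizing by population is what lets a public health analyst compare areas of very different sizes, which is the kind of “bigger public health question” Joers describes.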

Article link: https://federalnewsnetwork.com/podcast/ask-the-cio-podcast/dods-electronic-health-records-system-changing-way-military-does-business/
