healthcarereimagined

Envisioning healthcare for the 21st century

Reorganizing government acquisition for the digital age – Government Executive

Posted by timmreardon on 11/30/2023
Posted in: Uncategorized.
The General Services Administration recently reorganized its Federal Acquisition Service, eliminating its regional structure, a move its commissioner Sonny Hashmi says is already yielding positive results.

NATALIE ALMS | NOVEMBER 29, 2023 02:10 PM ET

The self-described “procurement arm of the federal government” is undergoing a reorganization years in the making.

The Federal Acquisition Service, housed in the General Services Administration, supports over $87 billion in contracts across the government as part of its mission to help provide products and services to federal agencies.

In September, the agency announced that it would be implementing a reorganization in fiscal 2024 to simplify the experience for agencies using FAS by replacing a legacy, regional structure with something more centralized.

The old structure dates to the late 1990s, when much of the government operated regionally, given the need for paper-based and in-person interactions, FAS Commissioner Sonny Hashmi told Nextgov/FCW in an interview on the sidelines of ACT-IAC’s leadership conference in October.

As FAS digitized, the regional structure started creating an “artificial barrier, where a region that is maybe based and headquartered in Chicago is serving a customer who’s based in St. Louis using personnel that may be located in Fort Worth,” he explained. “So this whole concept around what is a region becomes increasingly difficult to explain.”

Now, FAS is organized by the customer agencies being served and their missions, meaning that “our customers now know that this is the one person I need to go to anytime I have any need for all of FAS,” said Hashmi. “Whether it’s fleet-related or technology-related, professional services or Technology Transformation Service, they have one person to call.”

The reorganization started three years ago with a question about the ethos of FAS, said Hashmi, and later grew into a team charged with leading a more customer-centric redesign of the service.

“It became very clear quickly that the way we were organized… was actually getting in the way of us serving our customers,” he said. “Often times, our customers have had to navigate our internal organization structure before they could get service, and that’s unacceptable.”

GSA will retain local offices and expertise, it says. As for the feds working in FAS, the changes will help them access new opportunities, since they won’t have to wait for an opportunity within their region anymore to move up, Hashmi argued, noting that in October alone over 40 new positions were opened. There is also the hope that the reorganization can help GSA hire acquisition professionals by widening the talent pool. 

Although GSA still has work to do “downstream,” said Hashmi — such as aligning contracts — the reorganization has also already helped FAS work with agencies to find common needs that might not otherwise be obvious to siloed organizations.

“Now we have dedicated teams serving our key customer segments, and over time, those teams will continue to learn more about their mission [and] develop specific solutions,” he said.

Article link: https://www.nextgov.com/acquisition/2023/11/reorganizing-government-acquisition-digital-age/392314/

A new chapter for Qiskit algorithms and applications

Posted by timmreardon on 11/29/2023
Posted in: Uncategorized.

In recent months, the Qiskit Ecosystem has undergone a series of changes, additions, migrations, and upgrades all driven by a common goal: to provide the community with a greater role in the development of algorithms and applications. To align with this goal, we have moved the applications repositories, relocated the algorithms in Qiskit into their own repository, and welcomed external partners as additional maintainers to the repositories. This blog post delves into the hows and whys of these changes, and introduces the new maintainers.

Algorithms as an independent package

If you have been following our release updates, you may already be familiar with this change. Since Qiskit 0.44.0, qiskit.algorithms has been migrated to a new standalone package, qiskit-algorithms. The new library can be found in the qiskit-community GitHub organization and on PyPI, and is the place to go for updated, primitive-based algorithm implementations.

This move is significant because it separates the circuit-building tools from the libraries built on top of them. This separation allows algorithm development to move at a different pace than that of the core library. While Qiskit itself aims to provide a stable foundation layer for quantum development, algorithms are still an evolving field of research, and will benefit from more flexible development cycles.
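
For readers who want to try the new package, here is a minimal sketch (my own, not from the post) of using the standalone library after `pip install qiskit-algorithms` to run one of its primitive-based algorithms; the observable and ansatz below are purely illustrative.

```python
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp

# Imported from the standalone package, not from qiskit.algorithms.
from qiskit_algorithms import VQE
from qiskit_algorithms.optimizers import COBYLA

# A toy two-qubit observable and ansatz, chosen only for illustration.
observable = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5)])
ansatz = TwoLocal(2, "ry", "cz", reps=1)

# Primitive-based VQE: the Estimator primitive takes the place of the old QuantumInstance.
vqe = VQE(estimator=Estimator(), ansatz=ansatz, optimizer=COBYLA(maxiter=100))
result = vqe.compute_minimum_eigenvalue(operator=observable)
print(result.eigenvalue)
```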

Community-oriented algorithms and applications

In the case of the Qiskit applications modules, the repository changes have been a bit more subtle. qiskit-nature, qiskit-machine-learning, qiskit-optimization, and qiskit-finance are now also part of the qiskit-community GitHub organization. The package installation path has remained unchanged, so the impact of this migration on end users has been minimal. This move does, however, symbolize the strengthened community focus of the projects, which also involves the newly created qiskit-algorithms.

While these packages have always been open-source and welcomed external contributors, most feature development and maintenance efforts were sourced from within IBM Quantum. By joining forces with external partners, we enable the community to have a stronger impact on the direction of these libraries, bringing in new perspectives and areas of expertise. At the same time, we can focus more resources on improving the performance and stability of the core Qiskit package. The documentation of the applications can be found in their corresponding repositories as well as the Ecosystem page.

Welcoming new maintainers

The algorithms and applications libraries have onboarded new code-owners and maintainers from IBM Quantum partner institutions:

Algorithmiq (qiskit-nature): “Our mission is to revolutionise life sciences by exploiting the potential of quantum computing to solve currently inaccessible problems. Algorithmiq’s top quantum chemistry team and their knowledge of state-of-the-art quantum chemistry methods will, together with qiskit’s expert community, tackle some of the greatest quantum chemistry simulation challenges that lie ahead.”

STFC Hartree Centre (qiskit-machine-learning): “We help UK businesses and organisations of any size to explore and adopt supercomputing, data analytics, AI and emerging technologies for enhanced productivity, smarter innovation and economic growth. Our Qiskit work will be supported by the Hartree National Centre for Digital Innovation (HNCDI) — a collaboration with IBM Research that bridges the gap between academic research and the adoption of new technologies to solve industry challenges and transfer the skills needed to adopt digital solutions.”

Quantagonia (qiskit-optimization): “Quantagonia’s mission is to democratize quantum computing, making it accessible and manageable for businesses across sectors, enabling them to leverage this powerful technology for transformative growth and a competitive edge.”

These are domain experts in chemistry, machine learning and optimization, as well as active community contributors.

Cleaning up legacy dependencies

To facilitate the community’s greater role in the development of algorithms and applications, legacy dependencies in the applications have been cleaned up to simplify the code base and make the libraries more accessible to external contributors and new maintainers.

This means the newly released versions of the application modules (qiskit-finance 0.4, qiskit-machine-learning 0.7, qiskit-optimization 0.6, and qiskit-nature 0.7) no longer depend on, or support, the now-deprecated Qiskit modules such as opflow and quantum instance. These modules will no longer be available in the upcoming Qiskit 1.0 release.

In addition to this, and in anticipation of the upcoming removal of qiskit.algorithms, which will also not be part of Qiskit 1.0, the application modules have been updated to use only qiskit-algorithms instead.
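
As a hedged illustration of what this cleanup means for downstream code, the sketch below contrasts the deprecated import path with the standalone package and primitives; the toy optimization problem and its coefficients are my own and are not from the post.

```python
from qiskit.primitives import Sampler

# New home of the algorithms (the qiskit.algorithms path is removed in Qiskit 1.0):
from qiskit_algorithms import QAOA
from qiskit_algorithms.optimizers import COBYLA
from qiskit_optimization import QuadraticProgram
from qiskit_optimization.algorithms import MinimumEigenOptimizer

# A toy quadratic program, purely for illustration.
problem = QuadraticProgram()
problem.binary_var("x")
problem.binary_var("y")
problem.minimize(linear={"x": 1, "y": -2}, quadratic={("x", "y"): 3})

# QAOA built on the Sampler primitive rather than the deprecated QuantumInstance.
qaoa = QAOA(sampler=Sampler(), optimizer=COBYLA())
print(MinimumEigenOptimizer(qaoa).solve(problem))
```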

Conclusion

In summary, the new algorithm and application libraries in the Qiskit Ecosystem have undergone a significant shift towards community-led development. Establishing an independent algorithms package, relocating applications, and simplifying legacy code were all steps designed to facilitate and enhance community engagement. As always, community contributions are most welcome, and — together with the new maintainers — we invite all of you to get involved via the GitHub repositories or the corresponding Qiskit Slack channels, such as #applications.

Article link: https://medium.com/qiskit/a-new-chapter-for-qiskit-algorithms-and-applications-5baff541e826

Bill Gates isn’t too scared about AI – MIT Technology Review

Posted by timmreardon on 11/29/2023
Posted in: Uncategorized.


“The best reason to believe that we can manage the risks is that we have done it before.”

By Will Douglas Heaven

July 11, 2023

Bill Gates has joined the chorus of big names in tech who have weighed in on the question of risk around artificial intelligence. The TL;DR? He’s not too worried; we’ve been here before.

The optimism is refreshing after weeks of doomsaying—but it comes with few fresh ideas.

The billionaire business magnate and philanthropist made his case today in a post on his personal blog, GatesNotes. “I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them,” he writes.

According to Gates, AI is “the most transformative technology any of us will see in our lifetimes.” That puts it above the internet, smartphones, and personal computers, the technology he did more than most to bring into the world. (It also suggests that nothing else to rival it will be invented in the next few decades.)

Gates was one of dozens of high-profile figures to sign a statement put out by the San Francisco–based Center for AI Safety a few weeks ago, which reads, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But there’s no fearmongering in today’s blog post. In fact, existential risk doesn’t get a look-in. Instead, Gates frames the debate as one pitting “longer-term” against “immediate” risk, and chooses to focus on “the risks that are already present, or soon will be.”

“Gates has been plucking on the same string for quite a while,” says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of several public figures who talked about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: “He used to be more concerned about superintelligence way back when. It seems like that might have been watered down a bit.”

Gates doesn’t dismiss existential risk entirely. He wonders what may happen “when”—not if—“we develop an AI that can learn any subject or task,” often referred to as artificial general intelligence, or AGI.

He writes: “Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Gates has staked out a kind of middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI (who think talk of existential risk is “preposterously ridiculous” and “unhinged”) or Meredith Whittaker at Signal (who thinks the fears shared by Hinton and others are “ghost stories”).

It’s interesting to ask what contribution Gates makes by weighing in now, says Leslie: “With everybody talking about it, we’re kind of saturated.”

Like Gates, Leslie doesn’t dismiss doomer scenarios outright. “Bad actors can take advantage of these technologies and cause catastrophic harms,” he says. “You don’t need to buy into superintelligence, apocalyptic robots, or AGI speculation to understand that.”

“But I agree that our immediate concerns should be in addressing the existing risks that derive from the rapid commercialization of generative AI,” says Leslie. “It serves a positive purpose to sort of zoom our lens in and say, ‘Okay, well, what are the immediate concerns?’”

In his post, Gates notes that AI is already a threat in many fundamental areas of society, from elections to education to employment. Of course, such concerns aren’t news. What Gates wants to tell us is that although these threats are serious, we’ve got this: “The best reason to believe that we can manage the risks is that we have done it before.”

In the 1970s and ’80s, calculators changed how students learned math, allowing them to focus on what Gates calls the “thinking skills behind arithmetic” rather than the basic arithmetic itself. He now sees apps like ChatGPT doing the same with other subjects.

In the 1980s and ’90s, word processing and spreadsheet applications changed office work—changes that were driven by Gates’s own company, Microsoft. 

Again, Gates looks back at how people adapted and claims that we can do it again. “Word processing applications didn’t do away with office work, but they changed it forever,” he writes. “The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people’s lives and livelihoods.”

Similarly with misinformation: we learned how to deal with spam, so we can do the same for deepfakes. “Eventually, most people learned to look twice at those emails,” Gates writes. “As the scams got more sophisticated, so did many of their targets. We’ll need to build the same muscle for deepfakes.”

Gates urges fast but cautious action to address all the harms on his list. The problem is that he doesn’t offer anything new. Many of his suggestions are tired; some are facile.

Like others in the last few weeks, Gates calls for a global body to regulate AI, similar to the International Atomic Energy Agency. He thinks this would be a good way to control the development of AI cyberweapons. But he does not say what those regulations should curtail or how they should be enforced.

He says that governments and businesses need to offer support such as retraining programs to make sure people do not get left behind in the job market. Teachers, he says, should also be supported in the transition to a world in which apps like ChatGPT are the norm. But Gates does not specify what this support would look like.

And he says that we need to get better at spotting deepfakes, or at least use tools that detect them for us. But the latest crop of tools cannot detect AI-generated images or text well enough to be useful. As generative AI improves, will the detectors keep up?

Gates is right that “a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.” But he often falls back on a conviction that AI will solve AI’s problems—a conviction that not everyone will share.

Yes, immediate risks should be prioritized. Yes, we have steered through (or bulldozed over) technological upheavals before and we could do it again. But how?

“One thing that’s clear from everything that has been written so far about the risks of AI—and a lot has been written—is that no one has all the answers,” Gates writes.

That’s still the case.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/07/11/1076094/bill-gates-isnt-scared-about-ai-existential-risk/amp/

In digital and AI transformations, start with the problem, not the technology – McKinsey

Posted by timmreardon on 11/28/2023
Posted in: Uncategorized.

For digital and AI transformations to succeed, companies need to understand the problems they want to solve and rewire their organizations for continuous innovation.

DOWNLOADS

Article (6 pages)

Digital and AI transformations are everywhere. Almost every company has done, is doing, or plans to do one. But how can you make the changes stick? In this episode of the Inside the Strategy Room podcast, McKinsey senior partner Eric Lamarre talks about the critical elements of what it takes to rewire an organization through making fundamental changes to talent, operating model, and technology and data capabilities. He is coauthor, with Kate Smaje and Rodney Zemmel, of the Wall Street Journal bestseller Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI. This is an edited transcript of their conversation. For more discussions on the strategy issues that matter, follow the series on your preferred podcast platform.

Podcast link:

https://omny.fm/shows/inside-the-strategy-room/174-rewired-lessons-from-200-large-scale-digital-a/embed

Sean Brown: What areas do you find executives struggle with the most when trying to harness technology’s potential?

Eric Lamarre: How to do it at scale. Most business leaders have had a chance to taste what technology can do. They have done successful pilots and experiments, but these are not moving the needle on company performance. This is what Rewired gets at: How do you move from pilots to large-scale implementation? It’s not really a technology problem but a talent and a data problem—how do you organize to deliver this at scale? Of course, technology is involved, but the organizational component is where the big surgery has to happen.

Sean Brown: What should be the starting point for a digital and AI transformation?

Eric Lamarre: It should always start with the business problem you want to solve. When it starts that way, there is usually a good ending because the problem eventually ties back to serving customers better and delivering more value for the company. When business leaders say, “That’s the problem I want to solve with technology,” it becomes easier to develop the technology road map to solve that problem.

Sean Brown: There is probably no bigger subject in tech now than generative AI. Are you seeing companies almost inventing problems to solve with gen AI?

Eric Lamarre: Yes. The conversations right now make it feel like a technology in search of a problem. Maybe that’s natural because when we try gen AI, it seems like a magical experience. That takes your mind to “Where else could I apply this?” It’s good to come back to the fundamentals, though—what are the pain points in the company?—then search broadly for the set of technologies that will address them. Sometimes, that will be gen AI, but that doesn’t mean gen AI is the place to start.

For example, if you’re a consumer packaged goods company, you can use gen AI in many places, but good old revenue growth management is about advanced analytics on pricing, demand, and promotions. I don’t think gen AI will do a good job on that problem, and it’s one of the top business problems for consumer goods companies. Gen AI is a bit of a super technology, but it shouldn’t take us away from the problems businesses need to solve.

“The conversations [around gen AI] right now make it feel like a technology in search of a problem.”

Eric Lamarre

Sean Brown: One of the elements you highlight in the book as essential to long-term success in digital transformations is talent. What approach to talent do you recommend?

Eric Lamarre: In my experience, the conversation usually starts with, “All this AI stuff is wonderful, but we will never get the talent to do this.” In reality, traditional companies that are serious about digital transformations manage to get the right talent, but they have to be committed to a modern technology environment that will make it easy for employees to do their jobs. Talent wants to know that the company they join won’t cause their skills to atrophy because it’s using an old technology stack and software engineering methods. They will ask questions such as, “Tell me about your technology architecture. Tell me about how you manage data. Tell me about your software engineering method.” These questions will come very early in the conversation, and if it doesn’t sound like the company is serious, they will walk out because what they value most right now is their craft.

Sean Brown: Have you seen incumbent companies successfully develop their existing talent to manage new digital products or solutions?

Eric Lamarre: Yes. For example, a product manager might envision a solution. They will master the problem to be solved, develop the road map on how to solve it, and then guide their team to that solution. To understand the business problems, people need to have experience in the business. Usually, companies provide additional training and skills, but the most important skills are not those of a technologist but of a businessperson who understands enough about technology to imagine how it can solve the problem.

Sean Brown: What might cause a company to not capture the full value of a technology it implements?

Eric Lamarre: I’ll give you a concrete example. We developed technology for an airline to maximize the fill rate of its cargo space. It’s a highly profitable business for an airline, but maximizing that fill rate is difficult because you don’t know how many passengers will show up and how many will have suitcases, and those determine the extra space for cargo. It’s a beautiful problem for AI, and the technology we developed could say exactly how much extra cargo space there would be and what to charge for that space. But when we checked the planes, they weren’t flying with the cargo they were supposed to have. Why? The palletizing procedures at the airport were not quite right.

That is a lesson I have learned over and over: Whenever you develop a technology, there will be a secondary effect somewhere in the system that will prevent you from fully capturing the value. In this instance, the answer was to train operators at the airport on how to maximize pallet caseloads. That’s not a technology problem; that’s just a good old operational problem. Technology usually unveils a bottleneck in the process that needs to be solved to realize the technology’s value. Therein lies the importance of business leaders owning the end-to-end reimagination of the process because once they have deployed the technology, they need to play a key role in chasing down bottlenecks throughout the chain.

Sean Brown: In the book, you say, “You can’t outsource your way to success.” Can you elaborate?

Eric Lamarre: We find technology development is much more productive when done in-house. Why do I say that? If you’re a data engineer, a software engineer, or a machine-learning engineer working on a business problem, you can develop the right technology two to four times faster if you understand the context of that problem. And that doesn’t happen overnight. In-house technologists will also understand the context for the next incremental innovation on that problem and the one after that, and your whole innovation flywheel starts to spin a lot faster.

Sean Brown: What role should external technologists or consultants play, then, in building an organization’s technological muscle?

Eric Lamarre: I face that question often, and my answer to clients is that they should not rely heavily on consultants. They should build that capability themselves. Developing that flywheel is not easy to start. How do you build a technologist bench? How do you show them the right way to work? How do you bring business to the dance? A third party that knows what they’re doing can accelerate this process, but you don’t want to completely outsource that capability. If you want it to become a source of competitive differentiation, you need to own it. You can’t outsource your way to competitive differentiation.

“Whenever you develop a technology, there will be a secondary effect somewhere in the system that will prevent you from fully capturing the value.”

Eric Lamarre

Sean Brown: Does it help digital transformations succeed if business leaders try to shift employee mindsets to see the organization as a digital or tech company?

Eric Lamarre: Some companies truly embrace that. In the book, we talk about DBS Bank, an organization that viewed itself as becoming a technology company. But not every company likes that analogy. They feel their core business is not technology but mining or consumer goods, and technology is a complement, not the core. However, if technology is going to play a role in driving competitive differentiation—better-served customer, lower unit cost—the company has no choice but to become good at software development. No company would debate whether they need to be good at finance. Well, if you want to be good at running a company infused with technology, you have to be good at software development.

Sean Brown: Should businesses think of technology less as a separate department such as HR or finance and more as a fundamental aspect of the organization?

Eric Lamarre: Yes. A central theme of our book is getting to a state I call “distributed digital innovation.” You might start with a handful of teams developing technology solutions. Their apps or models will show some value, but the rest of the company won’t be transformed. When you reach a rewired state, those few teams multiply a hundredfold and work in various parts of the organization—sales, supply chain, manufacturing, R&D. They serve the leaders of those different areas, developing technology to solve their problems. At that point, no one calls IT to develop a solution because each area has that capability. IT evolves into a distributed function with distributed technology capabilities.

Sean Brown: What role does the IT team take on in such a rewired organization?

Eric Lamarre: IT would become the bedrock enabling cybersecurity and distributing the tools and data needed for innovation. IT remains an important platform capability, but it’s no longer the sole engine of innovation. That belongs in the hands of the enterprise more broadly.

Sean Brown: How should the top executive team, including the CIO and the CFO, think about their roles in this new environment?

Eric Lamarre: Everybody around the CEO has a role to play in that transition. The CIO now needs to move to a model where they are enabling the rest of the organization to innovate securely, with access to the tools and data they need, and providing them with the development capacity that used to reside in IT. That flip is a major transformation of the IT function. For HR, the talent equation is massive. HR staff need to recruit from outside and upskill people in product management. In a large institution, tens of thousands of people may need to go through that HR transition. The head of HR also needs to figure out how to assess a data engineer on the basis of skills because skills become the currency, not how many people someone manages.

Then there is finance. How do you fund all of this? Before, when you had a big IT project, you debated it, built a business case, funded millions for the project, and mobilized a big team. Often, two years in, something would cause the project to go off the rails. That’s IT in the old days. IT in the new days gets funded differently, with many small teams. You can’t fund 500 different teams project by project; you have to move to something called persistent funding, where you are funding portfolios of small teams rather than individual projects. You continue the funding until solving the problem is no longer productive. Moving to persistent funding from project funding is a massive shift for finance.

“You can’t outsource your way to competitive differentiation.”

Eric Lamarre

Now, think about the people who head control functions: risk management, compliance, and regulation. Now, they have 500 little teams innovating, so they are underwriting new risk. The role of the control functions needs to move upstream to guide that development. They need to ask before development has even begun, “Are there any risks these teams are undertaking that we should be monitoring?” If a team is working with customer data, there are data privacy risks, and you don’t want to tell the team six months into the project, “You didn’t handle data privacy, so we can’t use the work you did up to now.” These functions also need processes to monitor that the risks have been addressed.

Long story short, when you move to a distributed innovation model, everybody in the C-suite has a new job. Everybody has to drive a transformation of their area for the whole system to work. It becomes what we call in the book “the ultimate corporate sport,” and everybody’s got to play.

Sean Brown: Many executives fear the complexity of IT—you tug on one wire, and 14 things fall apart. How would you ease those concerns?

Eric Lamarre: It’s the main reason why we wrote this book. The technology field has become very complex. Take gen AI—executives are wondering, “Does it mean old AI is dead, and now I just focus on gen AI? By the way, I keep hearing about data engineers—what do they do? And data architecture, I don’t understand any of that. How does that work? And I’ve been hearing about agile for ten years. Is that still relevant?” I could go on and on. The lack of a common understanding around the executive table makes progress very difficult because it’s become a world of buzzwords.

To some extent, we wrote this book to “de-complexify” the space and focus on what matters to getting value from new technology. Typically, when an executive starts on this journey, I counsel them, “Go slow to go fast.” Take your team on a shared learning journey. Invest 10 or 15 hours to establish a common language and base of technology understanding, clarifying all the questions I just mentioned and others. Second, visit companies that are further ahead in such transformations. Get inspired by what their leaders achieved and build your own confidence that you can do it, too. After that investment—and it’s not such a big investment—the alignment is there, and the top team can start to play their respective roles in leading the technology transformation.

ABOUT THE AUTHOR(S)

Eric Lamarre is a senior partner in McKinsey’s Boston office. Sean Brown is the global director of communications for McKinsey’s Strategy and Corporate Finance Practice and is based in Boston.

Article link: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/in-digital-and-ai-transformations-start-with-the-problem-not-the-technology

Eric Schmidt: This is how AI will transform the way science gets done – MIT Technology Review

Posted by timmreardon on 11/28/2023
Posted in: Uncategorized.

Science is about to become much more exciting—and that will affect us all, argues Google’s former CEO.

By Eric Schmidt

July 5, 2023

It’s yet another summer of extreme weather, with unprecedented heat waves, wildfires, and floods battering countries around the world. In response to the challenge of accurately predicting such extremes, semiconductor giant Nvidia is building an AI-powered “digital twin” for the entire planet. 

This digital twin, called Earth-2, will use predictions from FourCastNet, an AI model that uses tens of terabytes of Earth system data and can predict the next two weeks of weather tens of thousands of times faster and more accurately than current forecasting methods.

Typical weather prediction systems can generate around 50 predictions for the week ahead. FourCastNet can instead predict thousands of possibilities, accurately capturing the risk of rare but deadly disasters and thereby giving vulnerable populations valuable time to prepare and evacuate.

The hoped-for revolution in climate modeling is just the beginning. With the advent of AI, science is about to become much more exciting—and in some ways unrecognizable. The reverberations of this shift will be felt far outside the lab; they will affect us all. 

If we play our cards right, with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, AI can rewrite the scientific process. We can build a future where AI-powered tools will both save us from mindless and time-consuming labor and also lead us to creative inventions and discoveries, encouraging breakthroughs that would otherwise take decades.

AI in recent months has become almost synonymous with large language models, or LLMs, but in science there are a multitude of different model architectures that may have even bigger impacts. In the past decade, most progress in science has come through smaller, “classical” models focused on specific questions. These models have already brought about profound advances. More recently, larger deep-learning models that are beginning to incorporate cross-domain knowledge and generative AI have expanded what is possible.

Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the US Food and Drug Administration has already cleared 523 devices that use AI—75% of them for use in radiology.

Reimagining science

At its core, the scientific process we all learned in elementary school will remain the same: conduct background research, identify a hypothesis, test it through experimentation, analyze the collected data, and reach a conclusion. But AI has the potential to revolutionize how each of these components looks in the future. 

Artificial intelligence is already transforming how some scientists conduct literature reviews. Tools like PaperQA and Elicit harness LLMs to scan databases of articles and produce succinct and accurate summaries of the existing literature—citations included.

Once the literature review is complete, scientists form a hypothesis to be tested. LLMs at their core work by predicting the next word in a sentence, building up to entire sentences and paragraphs. This technique makes LLMs uniquely suited to scaled problems intrinsic to science’s hierarchical structure and could enable them to predict the next big discovery in physics or biology.
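
As a small, self-contained illustration of that next-word mechanism (my own example, not from the article), the snippet below uses the open-source transformers library and the small GPT-2 model to extend a prompt one predicted token at a time.

```python
from transformers import pipeline

# Load a small, freely available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "If the sample is heated above its critical temperature, we expect"
# The model repeatedly predicts the most likely next token, growing the sentence.
output = generator(prompt, max_new_tokens=20, do_sample=False)
print(output[0]["generated_text"])
```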

AI can also spread the search net for hypotheses wider and narrow the net more quickly. As a result, AI tools can help formulate stronger hypotheses, such as models that spit out more promising candidates for new drugs. We’re already seeing simulations running multiple orders of magnitude faster than just a few years ago, allowing scientists to try more design options in simulation before carrying out real-world experiments. 

Scientists at Caltech, for example, used an AI fluid simulation model to automatically design a better catheter that prevents bacteria from swimming upstream and causing infections. This kind of ability will fundamentally shift the incremental process of scientific discovery, allowing researchers to design for the optimal solution from the outset rather than progress through a long line of progressively better designs, as we saw in years of innovation on filaments in lightbulb design.

Moving on to the experimentation step, AI will be able to conduct experiments faster, cheaper, and at greater scale. For example, we can build AI-powered machines with hundreds of micropipettes running day and night to create samples at a rate no human could match. Instead of limiting themselves to just six experiments, scientists can use AI tools to run a thousand.

Scientists who are worried about their next grant, publication, or tenure process will no longer be bound to safe experiments with the highest odds of success; they will be free to pursue bolder and more interdisciplinary hypotheses. When evaluating new molecules, for example, researchers tend to stick to candidates similar in structure to those we already know, but AI models do not have to have the same biases and constraints. 

Eventually, much of science will be conducted at “self-driving labs”—automated robotic platforms combined with artificial intelligence. Here, we can bring AI prowess from the digital realm into the physical world. Such self-driving labs are already emerging at companies like Emerald Cloud Lab and Artificial, and even at Argonne National Laboratory.

Finally, at the stage of analysis and conclusion, self-driving labs will move beyond automation and, informed by experimental results they produced, use LLMs to interpret the results and recommend the next experiment to run. Then, as partners in the research process, the AI lab assistant could order supplies to replace those used in earlier experiments and set up and run the next recommended experiments overnight, with results ready to deliver in the morning—all while the experimenter is home sleeping.

Possibilities and limitations

Young researchers might be shifting nervously in their seats at the prospect. Luckily, the new jobs that emerge from this revolution are likely to be more creative and less mindless than most current lab work. 

AI tools can lower the barrier to entry for new scientists and open up opportunities to those traditionally excluded from the field. With LLMs able to assist in building code, STEM students will no longer have to master obscure coding languages, opening the doors of the ivory tower to new, nontraditional talent and making it easier for scientists to engage with fields beyond their own. Soon, specifically trained LLMs might move beyond offering first drafts of written work like grant proposals and might be developed to offer “peer” reviews of new papers alongside human reviewers.

AI tools have incredible potential, but we must recognize where the human touch is still important and avoid running before we can walk. For example, successfully melding AI and robotics through self-driving labs will not be easy. There is a lot of tacit knowledge that scientists learn in labs that is difficult to pass to AI-powered robotics. Similarly, we should be cognizant of the limitations—and even hallucinations—of current LLMs before we offload much of our paperwork, research, and analysis to them. 

Companies like OpenAI and DeepMind are still leading the way in new breakthroughs, models, and research papers, but the current dominance of industry won’t last forever. DeepMind has so far excelled by focusing on well-defined problems with clear objectives and metrics. One of its most famous successes came at the Critical Assessment of Structure Prediction (CASP), a biennial competition where research teams predict a protein’s exact shape from the order of its amino acids.

From 2006 to 2016, the average score in the hardest category ranged from around 30 to 40 on CASP’s scale of 1 to 100. Suddenly, in 2018, DeepMind’s AlphaFold model scored a whopping 58. An updated version called AlphaFold2 scored 87 two years later, leaving its human competitors even further in the dust.

Thanks to open-source resources, we’re beginning to see a pattern where industry hits certain benchmarks and then academia steps in to refine the model. After DeepMind’s release of AlphaFold, Minkyung Baek and David Baker at the University of Washington released RoseTTAFold, which uses DeepMind’s framework to predict the structures of protein complexes instead of only the single protein structures that AlphaFold could originally handle. More important, academics are more shielded from the competitive pressures of the market, so they can venture beyond the well-defined problems and measurable successes that attract DeepMind.

In addition to reaching new heights, AI can help verify what we already know by addressing science’s replicability crisis. Around 70% of scientists report having been unable to reproduce another scientist’s experiment—a disheartening figure. As AI lowers the cost and effort of running experiments, it will in some cases be easier to replicate results or conclude that they can’t be replicated, contributing to a greater trust in science.

The key to replicability and trust is transparency. In an ideal world, everything in science would be open access, from articles without paywalls to open-source data, code, and models. Sadly, with the dangers that such models are able to unleash, it isn’t always realistic to make all models open source. In many cases, the risks of being completely transparent outweigh the benefits of trust and equity. Nevertheless, to the extent that we can be transparent with models—especially classical AI models with more limited uses—we should be. 

The importance of regulation

With all these areas, it’s essential to remember the inherent limitations and risks of artificial intelligence. AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands. Andrew White, a professor at the University of Rochester, was contracted by OpenAI to participate in a “red team” that could expose GPT-4’s risks before it was released. Using the language model and giving it access to tools, White found it could propose dangerous compounds and even order them from a chemical supplier. To test the process, he had a (safe) test compound shipped to his house the next week. OpenAI says it used his findings to tweak GPT-4 before it was released.

Even humans with entirely good intentions can still prompt AIs to produce bad outcomes. We should worry less about creating the Terminator and, as computer scientist Stuart Russell has put it, more about becoming King Midas, who wished for everything he touched to turn to gold and thereby accidentally killed his daughter with a hug.

We have no mechanism to prompt an AI to change its goal, even when it reacts to its goal in a way we don’t anticipate. One oft-cited hypothetical asks you to imagine telling an AI to produce as many paper clips as possible. Determined to accomplish its goal, the model hijacks the electrical grid and kills any human who tries to stop it as the paper clips keep piling up. The world is left in shambles. The AI pats itself on the back; it has done its job. (In a wink to this famous thought experiment, many OpenAI employees carry around branded paper clips.)

OpenAI has managed to implement an impressive array of safeguards, but these will only remain in place as long as GPT-4 is housed on OpenAI’s servers. The day will likely soon come when someone manages to copy the model and house it on their own servers. Such frontier models need to be protected to prevent thieves from removing the AI safety guardrails so carefully added by their original developers.

To address both intentional and unintentional bad uses of AI, we need smart, well-informed regulation—on both tech giants and open-source models—that doesn’t keep us from using AI in ways that can be beneficial to science. Although tech companies have made strides in AI safety, government regulators are currently woefully underprepared to enact proper laws and should take greater steps to educate themselves on the latest developments.

Beyond regulation, governments—along with philanthropy—can support scientific projects with a high social return but little financial return or academic incentive. Several areas are especially urgent, including climate change, biosecurity, and pandemic preparedness. It is in these areas where we most need the speed and scale that AI simulations and self-driving labs offer. 

Government can also help develop large, high-quality data sets such as those on which AlphaFold relied—insofar as safety concerns allow. Open data sets are public goods: they benefit many researchers, but researchers have little incentive to create them themselves. Government and philanthropic organizations can work with universities and companies to pinpoint seminal challenges in science that would benefit from access to powerful databases. 

Chemistry, for example, has one language that unites the field, which would seem to lend itself to easy analysis by AI models. But no one has properly aggregated data on molecular properties stored across dozens of databases, which keeps us from accessing insights into the field that would be within reach of AI models if we had a single source. Biology, meanwhile, lacks the known and calculable data that underlies physics or chemistry, with subfields like intrinsically disordered proteins that are still mysterious to us. It will therefore require a more concerted effort to understand—and even record—the data for an aggregated database.

The road ahead to broad AI adoption in the sciences is long, with a lot that we must get right, from building the right databases to implementing the right regulations, mitigating biases in AI algorithms to ensuring equal access to computing resources across borders. 

Nevertheless, this is a profoundly optimistic moment. Previous paradigm shifts in science, like the emergence of the scientific process or big data, have been inwardly focused—making science more precise, accurate, and methodical. AI, meanwhile, is expansive, allowing us to combine information in novel ways and bring creativity and progress in the sciences to new heights.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/amp/

Army releases first doctrinal publication focused on information

Posted by timmreardon on 11/28/2023
Posted in: Uncategorized.

The Army published its highly anticipated doctrine focusing on information. 

BY MARK POMERLEAU | NOVEMBER 27, 2023

The Army unveiled its first doctrinal publication focused solely on the information dimension of military action.

The highly anticipated document, released publicly on Monday, was many years in the works. It aims to provide a framework for creating and exploiting information advantages during operations as well as at home station, according to officials.

The U.S. military as a whole has begun a shift, recognizing the importance information plays not only in conflict but in everyday life. Adversaries have sought to exploit the information realm on a daily basis — within the so-called gray zone, below the threshold of armed conflict — as a means of undermining U.S. and allied interests without having to confront them in direct military conflict.

Army Doctrinal Publication 3-13, Information “represents an evolution in how Army forces think about the military uses of data and information, emphasizing that everything Army forces do, to include the information and images it creates, generates effects that contribute to or hinder achieving objectives,” Lt. Gen. Milford Beagle, commander of the Combined Arms Center, wrote in the foreword. “As such, creating and exploiting information advantages is the business of all commanders, leaders, and Soldiers.”

Nestled within the Army’s new operating concept of multi-domain operations — which envisions continuous synchronization of capabilities across all five domains of battle, including land, air, sea, space and cyberspace — ADP 3-13 describes a combined arms approach to creating and exploiting information advantages.

“Despite friendly advances in information technology and networks, threats (adversaries and enemies) can degrade joint force information advantages held in the past. Within degraded environments, Army leaders at all echelons must have the ability to develop understanding, make decisions, communicate, and act decisively. To achieve this, Army forces fight for, defend, and fight with information as part of a continuous struggle to gain and exploit advantages above and below the threshold of armed conflict,” the document states.

ADP 3-13 sets forth five information-related activities:

  • “Enable,” with the goal of enhancing military activity and command and control
  • “Protect,” with the aim of denying enemy efforts at exploiting friendly forces and working to secure and preserve U.S. and partner data, information and networks
  • “Inform,” with the intent of providing perceptions of military operations and activities among various audiences to include the Army, domestic audiences and international audiences for the purpose of maintaining trust and confidence
  • “Influence,” with the objective of affecting the thinking and activities of adversaries
  • “Attack,” as a way of hindering adversaries’ ability to use data, information, communications or other systems through offensive action within the electromagnetic spectrum, space and cyberspace

The Army has sought to avoid the term “information warfare,” preferring to use the phrase “information advantage.” The service had been charting down a path of information warfare since the 2018-2019 time period, but bureaucratic bottlenecks and competing desires led the Army to adopt “information advantage” as part of its doctrinal lexicon.

Since those discussions, the Army has continued to evolve and further separate itself from the information warfare nomenclature, despite using it to describe adversary activity in its field manual for operations as well as the new doctrine.

Additionally, the Army has distanced itself from other Defense Department components, being the only service not to adopt information as a joint warfighting function. Army officials view information as one dimension of a singular operating environment.

Last year, the Joint Staff published a revision to its information doctrine, Joint Publication 3-04, Information in Joint Operations. Following suit, the Marine Corps revised its approach and doctrine on information to be more in line with joint doctrine and lexicon. The Air Force also published an information warfare strategy and implementation plan last year, opting to place five functional areas under the IW umbrella.

Although the Army has a unique approach, its new document includes a chapter focused on integration with joint and multinational partners.

Article link: https://defensescoop.com/2023/11/27/army-releases-first-doctrinal-publication-focused-on-information/

Defense Board Eyes Improving Acquisition, Culture for Innovation

Posted by timmreardon on 11/28/2023
Posted in: Uncategorized.

Two Defense Innovation Board studies aim to shed light on overcoming some of DOD’s biggest challenges to modernization.

Anastasia Obis | Fri, 11/24/2023 – 10:30

Two Defense Innovation Board studies want to identify areas to improve acquisition and data readiness at the Defense Department.

Streamlining acquisitions, rapid fielding of technology and promotion of competition are some of the primary challenges and barriers for modernization, according to officials at the Nov. 14 board meeting.

“The areas we’re looking at are everything from security processes, whether that is individual security clearances or secure facilities and their broad use, to the authority to operate in a secure ecosystem, to personnel practices, to the acquisition mechanisms themselves, everything ranging from the assets that we have or the data we need to receive,” said CACI International Director Sue Gordon, who leads one of the studies, “Lowering Barriers to Innovation.”

Leaders point to the agency’s culture as a factor impeding innovation.

“We promote based on compliance. You show that you’re good at what you do. And we expect you to comply with the requirements that we put in place,” said Marine Corps Chief Warrant Officer 3 Matt Pine at the meeting. “It’s hard to find that talent and that desire to change things when we’re focused on complying.”

“We all know that the defense innovation ecosystem is fraught with bureaucratic overhead, inefficient processes and self-imposed barriers that slow us down. So, with this study, we’re tackling the barriers that are easiest to tackle and have the most impact. And by easiest, what I mean is that they are implementable, that there is no statute or reason other than our own choice,” Gordon said.

Pine called the agency’s acquisition process its “greatest” impediment.

“As we know, it takes three to five years to vet and approve anything to get from [research and development] into the hands of the warfighter,” Pine said, adding that taxpayers and the agency are suffering due to monopolies created over the past 30 years within the Defense Industrial Base. “If I’m the sole source of supply, I don’t have to be good at it. So, we haven’t created a lot of competition.”

One of DOD’s biggest priorities is data centricity, especially as it works toward its Combined Joint All-Domain Command and Control (CJADC2) concept of decision advantage across all military domains.

“Without data readiness, you can’t have a ready military force. So we need to train our soldiers on data like we train them on the platforms and the weapons they use every day. The soldiers need to be comfortable with using data at the speed of mission to achieve that operational advantage,” Thomas Sasala, deputy director of the Army’s Enterprise Management Office, said during the meeting.

The second study, “Building a DOD Data Economy,” aims to identify gaps in the department’s data economy and leverage industry best practices, including principles, frameworks and metrics.  

“There’s an interesting bias against using real data. Instead, we oddly rely on our trusty PowerPoint presentations, which may or may not have a solid basis in data or facts for decision-making,” Sasala added.

Results from the studies are expected during the board’s Jan. 26 meeting.

Article link: https://governmentciomedia.com/defense-board-eyes-improving-acquisition-culture-innovation

VA launches $1 million AI tech competition to reduce health care worker burnout

Posted by timmreardon on 11/24/2023
Posted in: Uncategorized.

FOR IMMEDIATE RELEASE

October 31, 2023 9:30 am

Today, the Department of Veterans Affairs launched an Artificial Intelligence Tech Sprint encouraging innovators across America to create AI-enabled tools to reduce burnout among health care workers. The winning solutions will help clinicians take notes during medical appointments and/or integrate patients’ medical records, with the winning teams receiving $1 million in total prizes.

Reducing burnout among health care workers is a top priority for VA, especially at a time when VA is delivering more care and more benefits to more Veterans than ever before. This effort is a part of President Biden’s new executive order on Safe, Secure, and Trustworthy Artificial Intelligence and VA’s efforts to use trustworthy AI solutions to improve health care and benefits for Veterans, their families, caregivers, and survivors.

VA’s health care professionals provide life-saving and life-changing care for Veterans every day. Veterans who are enrolled in VA health care are proven to have better health outcomes than non-enrolled Veterans, and VA hospitals have dramatically outperformed non-VA hospitals in overall quality ratings and patient satisfaction ratings. Additionally, many of VA’s health care workers risked their lives to deliver for Veterans during the pandemic.

“AI solutions can help us reduce the time that clinicians spend on non-clinical work, which will get our teams doing more of what they love most: caring for Veterans,” said Under Secretary for Health Shereef Elnahal, M.D. “This effort will reduce burnout among our clinicians and improve Veteran health care at the same time.”

This sprint is just one aspect of VA’s comprehensive efforts to reduce burnout among health care workers. Last year, VA launched the Reduce Employee Burnout and Optimize Organizational Thriving (REBOOT) initiative to address major factors contributing to burnout and to promote wellbeing among employees. Additionally, VA is hiring employees at record rates to ensure that health care workers have the support they need. These efforts have led to a 20% decrease in turnover rate among Veterans Health Administration employees from 2022 to 2023.

Proposals for the AI Tech Sprint should address one or both of two focus areas: speech-to-text solutions for use in medical appointments, and document processing to reduce the time needed to integrate non-VA medical records into patients’ VA records.

Innovators interested in applying for the AI Tech Sprint can do so on the AI Tech Sprint website.

Article link: https://news.va.gov/press-room/va-artificial-intelligence-tech-sprint/

Successful EHR Implementation Hinges on Change Management – GovCIO

Posted by timmreardon on 11/21/2023
Posted in: Uncategorized.

Federal leaders working on the VA, DOD electronic health records reflect on managing through change.

Anastasia Obis

Fri, 09/22/2023 – 14:04

Key change management principles in the federal effort to modernize electronic health records include effective communication and maintaining integration with the legacy system to ensure a seamless rollout and implementation.

For years, the Office of Marine and Aviation Operations at the National Oceanic and Atmospheric Administration (NOAA) had several medical programs to support its commissioned officers, but there was no NOAA-owned EHR. That changed this summer when NOAA joined MHS Genesis, the joint EHR shared with the Defense Department and Coast Guard.

“We are able to now freely exchange information between our branches. … Additionally, just having that connection to the military treatment facilities was huge,” Cmdr. Scott Miller, director of the Office of Health Services at NOAA, said at the Sept. 21 Health IT Summit in Bethesda, Maryland. Miller explained the service previously relied on information provided by those applying to be officers. “Through health information exchange, we’re able to see much more complete pictures.”

Critical in the EHR rollout effort has been effective communication — one of the leading practices among change management principles.

“Once we got to landing on MHS Genesis and in preparing for an implementation, going back to the people component of it, communicating out information freely, frequently, to all the users, all the patients that were going to be affected, all the executive leadership just to make sure everybody was on the same level as far as what to expect,” said Miller. 

John Windom, deputy director at the Federal Electronic Health Record Modernization (FEHRM) office, which is overseeing implementation of DOD’s record along with the Department of Veterans Affairs, emphasized the importance of embracing change.

“My boss Bill Tinston often says, ‘People hate the system until you tell them you’re going to change it. Then everybody likes it again.’ We wrestle with that, and we’ll continue to do so,” Windom said. 

“But homing in on the people, understanding the way people learn. Everyone doesn’t learn the same way. And so, you’ve got to be resilient in many cases. You’ve got to be steadfast in your resolve because sometimes the easy thing is just to give up and go back to what you’ve been doing,” he added.

Of course, the technology component is important, particularly for VA, which will be moving from its legacy VistA system to the new Oracle-Cerner platform. Red Hat Chief Architect Ben Cushing noted the importance of a bidirectional data plane and the need to consider modernizing the existing legacy EHR.

“The existing EHR, generally, is not going to go away for quite a while. … VistA is probably going to be around for a decade or more, if not forever. … When you’re actually doing the change management or actually doing the migration, you’re going to have two EHRs or more running at the same time. And that’s a significant patient safety problem,” Cushing said. “In this case, you really need to have a bidirectional data plane so that the records themselves are available between the different systems, and that reality has to exist for quite a while. And it’s a challenge. It’s an interoperability challenge.”
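Neither the VA nor Red Hat has published what such a data plane looks like in code, but a minimal sketch can make the idea concrete: while both EHRs run side by side, every write to either system is replayed into the other, so a clinician sees the same chart no matter which system they open. All class and method names below are hypothetical and are not VA, VistA, or Oracle APIs.

```python
# Minimal sketch of a "bidirectional data plane" between two EHRs running in
# parallel, as Cushing describes: a note charted in either system is replicated
# to the other so both show the same patient record. Names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    patient_id: str
    entries: dict = field(default_factory=dict)  # entry_id -> clinical note


class EHRStore:
    """Stand-in for one EHR, e.g. the legacy system or its replacement."""

    def __init__(self, name: str):
        self.name = name
        self.records = {}  # patient_id -> PatientRecord

    def write(self, patient_id: str, entry_id: str, note: str) -> None:
        record = self.records.setdefault(patient_id, PatientRecord(patient_id))
        record.entries[entry_id] = note

    def read(self, patient_id: str) -> dict:
        record = self.records.get(patient_id)
        return dict(record.entries) if record else {}


class BidirectionalDataPlane:
    """Replays each write into the peer store so both EHRs stay consistent."""

    def __init__(self, a: EHRStore, b: EHRStore):
        self.a, self.b = a, b

    def write(self, source: EHRStore, patient_id: str, entry_id: str, note: str) -> None:
        target = self.b if source is self.a else self.a
        source.write(patient_id, entry_id, note)
        target.write(patient_id, entry_id, note)  # replicate to the other EHR


# Usage: a note charted in either system is readable from both.
legacy, modern = EHRStore("LegacyEHR"), EHRStore("NewEHR")
plane = BidirectionalDataPlane(legacy, modern)
plane.write(legacy, "pt-001", "note-1", "Charted in the legacy system")
plane.write(modern, "pt-001", "note-2", "Charted in the new system")
assert legacy.read("pt-001") == modern.read("pt-001")
```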

Slow migration, rather than introducing major changes in one fell swoop, will also contribute to a successful EHR implementation. 

“Gradual change is something that we as humans are much more adaptive to. So while you have that legacy EHR, you can slowly start to do that migration over time, and people feel much more comfortable with that, as opposed to like one day lifting the thing they know how to use and dropping something else in place right in front of them and then watch them panic because they can’t do their day job,” said Cushing. 

As there are no two systems that are exactly alike, it is vital to allow the ecosystem to grow. 

“Adding on other tools that do things like care coordination or … advanced analytics and AI — those are things that are not going to generally come from your EHR. Those are part of the health care ecosystem, and you want to bolt them on to create a more complex and sophisticated health system,” Cushing added.

Article link: Successful EHR Implementation Hinges on Change Management

AI Ethics at Unilever: From Policy to Process – MIT Sloan Management Review

Posted by timmreardon on 11/21/2023
Posted in: Uncategorized.

Thomas H. Davenport and Randy Bean | November 15, 2023 | Reading time: 9 min

Many large companies today — most surveys suggest over 70% globally — have determined that artificial intelligence is important to their future and are building AI applications in various parts of their businesses. Most also realize that AI has an ethical dimension and that they need to ensure that the AI systems they build or implement are transparent, unbiased, and fair. 

Thus far, many companies pursuing ethical AI are still in the early stages of addressing it. They might have exhorted their employees to take an ethical approach to AI development and use or drafted a preliminary set of AI governance policies. Most have not done even that; in one recent survey, 73% of U.S. senior leaders said they believe that ethical AI guidelines are important, yet only 6% had developed them.

We see five stages in the AI ethics process:

1. Evangelism, when representatives of the company speak about the importance of AI ethics.
2. Development of policies, where the company deliberates on and then approves corporate policies around ethical approaches to AI.
3. Recording, where the company collects data on each AI use case or application (using approaches such as model cards).
4. Review, where the company performs a systematic analysis of each use case (or outsources it to a partner company) to determine whether the case meets the company’s criteria for AI ethics.
5. Action, where the company either accepts the use case as it is, sends it back to the proposing owner for revision, or rejects it.
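To make the progression concrete, here is a minimal sketch that models the five stages, and the accept/revise/reject outcome of the action stage, as a simple state machine. The stage names come from the description above; the class and method names are illustrative assumptions, not anything the authors prescribe.

```python
# Illustrative sketch of the five-stage AI ethics process as a state machine
# for a single AI use case. Stage names follow the article; everything else
# (class names, fields, entry point) is an assumption made for illustration.

from enum import Enum, auto


class EthicsStage(Enum):
    EVANGELISM = auto()   # leaders speak about why AI ethics matters
    POLICIES = auto()     # corporate AI ethics policies are approved
    RECORDING = auto()    # each use case is documented, e.g. with a model card
    REVIEW = auto()       # systematic analysis against the ethics criteria
    ACTION = auto()       # accept, send back for revision, or reject


class UseCase:
    def __init__(self, name: str):
        self.name = name
        self.stage = EthicsStage.RECORDING  # a concrete use case enters at recording
        self.decision = None

    def review(self, meets_criteria: bool, fixable: bool) -> str:
        """Move the use case through review to action and record the outcome."""
        self.stage = EthicsStage.ACTION
        if meets_criteria:
            self.decision = "accepted"
        elif fixable:
            self.decision = "returned for revision"
        else:
            self.decision = "rejected"
        return self.decision


# Usage: a use case that misses the criteria but can be fixed goes back to its owner.
case = UseCase("programmatic ad buying")
print(case.review(meets_criteria=False, fixable=True))  # -> returned for revision
```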

It is only in the higher-level stages — review and action — that a company can actually determine whether its AI applications meet the transparency, bias, and fairness standards that it has established. For it to put those stages in place, it has to have a substantial number of AI projects, processes, and systems for gathering information, along with governance structures for making decisions about specific applications. Many companies do not yet have those preconditions in place, but they will be necessary as companies exhibit greater AI maturity and emphasis. 

Early Policies at Unilever

Unilever, the British consumer packaged goods company whose brands include Dove, Seventh Generation, and Ben & Jerry’s, has long had a focus on corporate social responsibility and environmental sustainability. More recently, the company has embraced AI as a means of dramatically improving operations and decision-making across its global footprint. Unilever’s Enterprise Data Executive, a governance committee, recognized that the company could build on its robust privacy, security, and governance controls by embedding the responsible and ethical use of AI into the company’s data strategies. The goal was to take advantage of AI-driven digital innovation to both maximize the company’s capabilities and promote a fairer and more equitable society. A multifunctional team was created and tasked with exploring what this meant in practice and building an action program to operationalize the objective.

Unilever has now implemented all of the five stages described above, but, looking back, its first step was to create a set of policies. One policy, for example, specified that any decision that would have a significant life impact on an individual should not be fully automated and should instead ultimately be made by a human. Other AI-specific principles that were adopted include the edicts “We will never blame the system; there must be a Unilever owner accountable” and “We will use our best efforts to systematically monitor models and the performance of our AI to ensure that it maintains its efficacy.” 

Committee members realized quickly that creating broad policies alone would not be sufficient to ensure the responsible development of AI. To build confidence in the adoption of AI and truly unlock its full potential, they needed to develop a strong ecosystem of tools, services, and people resources to ensure that AI systems would work as they were supposed to.


Committee members also knew that many of the AI and analytics systems at Unilever were being developed in collaboration with outside software and services vendors. The company’s advertising agencies, for example, often employed programmatic buying software that used AI to decide what digital ads to place on web and mobile sites. The team concluded that its approach to AI ethics needed to include attention to externally sourced capabilities.

Developing a Robust AI Assurance Process

Early on in Unilever’s use of AI, the company’s data and AI leaders noticed that some of the issues with the technology didn’t involve ethics at all — they involved systems that were ineffective at the tasks they were intended to accomplish. Giles Pavey, Unilever’s global director of data science, who had primary responsibility for AI ethics, knew that effectiveness was an important component of evaluating an AI use case. “A system for forecasting cash flow, for example, might involve no fairness or bias risk but may have some risk of not being effective,” he said. “We decided that efficacy risk should be included along with the ethical risks we evaluate.” The company began to use the term AI assurance to broadly encompass its overview of a tool’s effectiveness and ethics.

The basic idea behind the Unilever AI assurance compliance process is to examine each new AI application to determine how intrinsically risky it is, in terms of both effectiveness and ethics. The company already had a well-defined approach to information security and data privacy, and the goal was to employ a similar approach that would ensure that no AI application was put into production without first being reviewed and approved. Integrating the AI assurance process into the compliance areas that Unilever already had in place, such as privacy risk assessment, information security, and procurement policies, would be the ultimate sign of success. 

Debbie Cartledge, who took on the role of data and AI ethics strategy lead for the company, explained the process the team adopted: 

When a new AI solution is being planned, the Unilever employee or supplier proposes the outlined use case and method before developing it. This is reviewed internally, with more complex cases being manually assessed by external experts. The proposer is then informed of potential ethical and efficacy risks and mitigations to be considered. After the AI application has been developed, Unilever, or the external party, runs statistical tests to ascertain whether there is a bias or fairness issue and could examine the system for efficacy in achieving its objectives. Over time, we expect that a majority of cases can be fully assessed automatically based on information about the project supplied by the project proposer.

Depending on where within the company the system will be employed, there also might be local regulations for the system to comply with. All resume checking, for example, is now done by human reviewers. If resume checking were fully automated, the review might conclude that the system needs a human in the loop to make final decisions about whether to move a candidate to interview. If there are serious risks that can’t be mitigated, the AI assurance process will reject the application on the grounds that Unilever’s values prohibit it. Final decisions on AI use cases are made by a senior executive board, including representatives from the legal, HR, and data and technology departments. 
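The description above does not say which statistical tests are used, so the following is only a sketch of one common fairness check: comparing selection rates across groups (demographic parity, in the spirit of the “four-fifths rule”) on made-up data. The data, the 0.8 threshold, and the function names are assumptions for illustration, not Unilever’s or Holistic AI’s actual methodology.

```python
# Sketch of a selection-rate (demographic parity) check, one common way to run
# the kind of statistical bias test described above. Data and threshold are
# invented for illustration.

from collections import defaultdict


def selection_rates(outcomes):
    """outcomes: list of (group_label, was_selected) tuples."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += int(picked)
    return {group: selected[group] / totals[group] for group in totals}


def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())


# Hypothetical screening outcomes: (group, selected)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)  # A ~ 0.67, B ~ 0.33, ratio = 0.5
if ratio < 0.8:  # rule-of-thumb threshold; an assumption, not a Unilever policy
    print("Potential fairness issue: flag for mitigation and human review")
```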

Here’s an example: The company has areas in department stores where it sells its cosmetics brands. A project was developed to use computer vision AI to automatically register sales agents’ attendance through daily selfies, with a stretch objective to look at the appropriateness of agents’ appearance. Because of the AI assurance process, the project team broadened their thinking beyond regulations, legality, and efficacy to also consider the potential implications of a fully automated system. They identified the need for human oversight in checking photos flagged as noncompliant and taking responsibility for any consequent actions. 

Working With an Outside Partner, Holistic AI

Unilever’s external partner in the AI assurance process is Holistic AI, a London-based company. Founders Emre Kazim and Adriano Koshiyama have both worked with Unilever AI teams since 2020, and Holistic AI became a formal partner for AI risk assessment in 2021. 

Holistic AI has created a platform to manage the process of reviewing AI assurance. In this context, “AI” is a broad category that encompasses any type of prediction or automation; even an Excel spreadsheet used to score HR candidates would be included in the process. Unilever’s data ethics team uses the platform to review the status of AI projects and can see which new use cases have been submitted; whether the information is complete; and what risk-level assessment they have received, coded red, yellow (termed “amber” in the U.K.), or green. 

The traffic-light status is assessed at three points: at triage, after further analysis, and after final mitigation and assurance. At this final point, the ratings have the following interpretations: a red rating means the AI system does not comply with Unilever standards and should not be deployed; yellow means the AI system has some acceptable risks and the business owner is responsible for being aware of and taking ownership of them; and green means the AI system adds no risks to the process. Only a handful of the several hundred Unilever use cases have received red ratings thus far, including the cosmetics one described above. All of the submitters were able to resolve the issues with their use cases and move them up to a yellow rating.

For leaders of AI projects, the platform is the place to start the review process. They submit a proposed use case with details, including its purpose, the business case, the project’s ownership within Unilever, team composition, the data used, the type of AI technology employed, whether it is being developed internally or by an external vendor, the degree of autonomy, and so forth. The platform uses the information to score the application in terms of its potential risk. The risk domains include explainability, robustness, efficacy, bias, and privacy. Machine learning algorithms are automatically analyzed to determine whether they are biased against any particular group. 
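Holistic AI has not published its scoring logic, so the sketch below only illustrates the kind of triage step described here: a submission carries a risk score for each of the five domains named above, and the worst domain drives the red/amber/green rating. The thresholds and the worst-domain rule are invented for illustration.

```python
# Illustrative triage step: score a submitted use case across the five risk
# domains named in the article and map the worst score to red/amber/green.
# Thresholds and the worst-domain rule are assumptions, not Holistic AI's logic.

RISK_DOMAINS = ("explainability", "robustness", "efficacy", "bias", "privacy")


def triage_rating(domain_scores):
    """Return 'red', 'amber', or 'green' from per-domain risk scores in [0, 1]."""
    missing = [d for d in RISK_DOMAINS if d not in domain_scores]
    if missing:
        raise ValueError(f"incomplete submission, missing domains: {missing}")
    worst = max(domain_scores[d] for d in RISK_DOMAINS)
    if worst >= 0.7:
        return "red"    # does not meet standards; do not deploy as proposed
    if worst >= 0.4:
        return "amber"  # acceptable risks the business owner must own
    return "green"      # no added risk identified


# Usage: a use case with a high bias score is flagged red at triage.
submission = {"explainability": 0.2, "robustness": 0.3,
              "efficacy": 0.35, "bias": 0.75, "privacy": 0.1}
print(triage_rating(submission))  # -> red
```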


An increasing percentage of the evaluations in the Holistic AI platform are based on the European Union’s proposed AI Act, which also ranks AI use cases into three categories of risk (unacceptable, high, and not high enough to be regulated). The act is being negotiated among EU countries with hopes for an agreement by the end of 2023. Kazim and Koshiyama said that even though the act will apply only to European businesses, Unilever and other companies are likely to adopt it globally, as they have with the EU’s General Data Protection Regulation.


Kazim and Koshiyama expect Holistic AI to be able to aggregate data across companies and benchmark across them in the future. The software could assess benefits versus costs, the efficacy of different external providers of the same use case, and the most effective approaches to AI procurement. Kazim and Koshiyama have also considered making risk ratings public in some cases and partnering with an insurance company to insure AI use cases against certain types of risks. 

We’re still in the early stages of ensuring that companies take ethical approaches to AI, but that is no excuse for issuing pronouncements and policies with no teeth. Whether AI is ethical or not will be determined use case by use case. Unilever’s AI assurance process, and its partnership with Holistic AI to evaluate each use case for its ethical risk level, is one concrete way to ensure that AI systems are aligned with human interests and well-being.

Article link: https://sloanreview.mit.edu/article/ai-ethics-at-unilever-from-policy-to-process/
