
By ALEXANDRA KELLEY, FEBRUARY 27, 2024
In conjunction with nine other countries, the U.S. released six new principles intended to guide global adoption of 6G wireless connectivity.
The White House issued a joint statement with the governments of Australia, Canada, the Czech Republic, Finland, France, Japan, the Republic of Korea, Sweden and the United Kingdom on a series of new shared principles for 6G spectrum research and development.
Shared on Monday, the six principles are focused on securing global telecommunications infrastructure and will help inform relevant policy adoption. They include installing technology systems that protect national security; securing individual communications and privacy; working with industry partners to set inclusive international standards; cooperating to enable interoperability and innovation; ensuring global connectivity is both affordable and sustainable; and managing spectrum allocations.
“We believe this to be an indispensable contribution towards building a more inclusive, sustainable, secure, and peaceful future for all, and call upon other governments, organizations, and stakeholders to join us in supporting and upholding these principles,” the press release reads.
Sixth generation — or 6G — wireless is the planned step up from 5G, currently under development in many nations to allow for greater data transmission at a faster rate across digital networks. As the foundation for modern communications, including the personalized and hyperconnected internet of things, telecommunications infrastructure and its security at the hardware and software level have become a geopolitical talking point.
The shared principles aim to foster international agreement on how to develop and deploy more secure 6G technologies and architectures. This includes researching how emerging technologies — namely artificial intelligence, software-defined networking, and virtualization — can be leveraged for greater security and interoperability.
Finland and Sweden are home to telecom companies Nokia and Ericsson, respectively, which were listed as industry partners in research on 5G wireless communications prototyping by the Pentagon in 2020.
By contrast, China — home to telecom provider Huawei, whose systems have been prohibited by U.S. oversight agencies due to spyware and surveillance concerns — is notably absent from the list of nations joining the 6G principles.
“Collaboration and unity are key to resolving pressing challenges in the development of 6G, and we hereby declare our intention to adopt relevant policies to this end in our countries, to encourage the adoption of such policies in third countries, and to advance research and development and standardization of 6G network,” the release said.
Article link: https://www.nextgov.com/emerging-tech/2024/02/us-signs-international-principles-6g/394506/?
An Initial Assessment of Growing Risks
Published Feb 13, 2024
by Tobias Sytsma, James V. Marrone, Anton Shenk, Gabriel Leonard, Lydia Grek, Joshua Steier
Research Questions
- What are the emerging threats that the U.S. financial markets face, and what are the risks they pose?
- How might these evolving threats be detected, and what countermeasures might be taken to protect U.S. financial markets?
The resilience and stability of the U.S. financial system are critical to economic prosperity. However, the rapid pace of technological and geopolitical change introduces new potential threats that must be monitored and assessed. The authors of this report explore emerging and understudied threats to the financial system, focusing on risks from social media, advances in artificial intelligence, and the changing role of economic statecraft in geopolitics.
Drawing on historical examples, the economic literature, and discussions with subject-matter experts, the authors assess the potential costs and likelihood of four threats: attacks on financial trading models, bond dumping by foreign holders of U.S. debt, deepfakes used to spread misinformation, and memetic engineering used to manipulate beliefs and behaviors. Their analysis suggests these threats pose a limited near-term risk of significant economic damage because of the interconnectivity of global finance and existing safeguards. However, the gradual erosion of financial resilience and institutional trust over time could make attacks more impactful.
Key Findings
- Emerging threats to U.S. financial markets—such as attacks on AI-enabled financial models, bond dumping, deepfakes, and memetic engineering—pose a limited risk of significant economic damage because of the high costs of such attacks to adversaries and existing market safeguards.
- Risks may increase because of geopolitical tensions and advancing technological capabilities that alter adversaries’ cost–benefit calculations.
- The most significant threat is not an abrupt event, akin to a “financial 9/11,” but rather a slow and steady process, akin to “financial climate change.” This phenomenon could occur when disinformation or misinformation diminishes public trust in markets, consequently complicating the distinction between reality and fabrication and thereby escalating market volatility.
- Data privacy regulations could limit the threat posed by AI-augmented disinformation campaigns. Regulations would make it harder for malicious actors to collect detailed data that could be used to create and disseminate highly customized messages to influence individual behavior.
Recommendations
- Proactive risk management solutions could prevent losses. Advancing AI is likely to play a key role in determining whether an adversary can carry out a successful attack. Thus, policies that encourage the responsible development and use of AI technologies can act as important safeguards to financial stability.
- Apart from regulatory policy, economic policies that encourage competition in the AI foundation model market could also increase resilience in the financial sector. Reliance on third-party foundation models could create new sources of systemic risk in the financial sector. Encouraging competition in the market could expand the options available to financial institutions looking to incorporate AI into their operations.
- To address evolving threats, regular economic wargames that simulate and assess financial vulnerabilities should be implemented. These exercises would identify weaknesses and shape proactive countermeasures. Previous research has proposed an “Economic Joint Chiefs” framework, which could enhance these wargames by providing a dedicated structure and centralized expertise for their coordination and analysis.
- Bringing together stakeholders in finance, technology, geopolitics, and national security could result in a new understanding of how these threats might evolve over the next decade.
This research was conducted within the International Security and Defense Policy Program of the RAND National Security Research Division.
This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
Article link: https://www.rand.org/pubs/research_reports/RRA2533-1.html?
The Dalai Lama on Why Leaders Should Be Mindful, Selfless, and Compassionate
by The Dalai Lama with Rasmus Hougaard
February 20, 2019

Summary.
The Dalai Lama shares his observations on leadership and describes how our “strong focus on material development and accumulating wealth has led us to neglect our basic human need for kindness and care.” He offers leaders three recommendations. First, to be mindful: “When we’re under the sway of anger or attachment, we’re limited in our ability to take a full and realistic view of the situation.” Also, to be selfless: “Once you have a genuine sense of concern for others, there’s no room for cheating, bullying, or exploitation; instead you can be honest, truthful, and transparent in your conduct.” And finally, to be compassionate: “When the mind is compassionate, it is calm and we’re able to use our sense of reason practically, realistically, and with determination.”
Over the past nearly 60 years, I have engaged with many leaders of governments, companies, and other organizations, and I have observed how our societies have developed and changed. I am happy to share some of my observations in case others may benefit from what I have learned.
Leaders, whatever field they work in, have a strong impact on people’s lives and on how the world develops. We should remember that we are visitors on this planet. We are here for 90 or 100 years at the most. During this time, we should work to leave the world a better place.
What might a better world look like? I believe the answer is straightforward: A better world is one where people are happier. Why? Because all human beings want to be happy, and no one wants to suffer. Our desire for happiness is something we all have in common.
But today, the world seems to be facing an emotional crisis. Rates of stress, anxiety, and depression are higher than ever. The gap between rich and poor and between CEOs and employees is at a historic high. And the focus on turning a profit often overrules a commitment to people, the environment, or society.
I consider our tendency to see each other in terms of “us” and “them” as stemming from ignorance of our interdependence. As participants in the same global economy, we depend on each other, while changes in the climate and the global environment affect us all. What’s more, as human beings, we are physically, mentally, and emotionally the same.
Look at bees. They have no constitution, police, or moral training, but they work together in order to survive. Though they may occasionally squabble, the colony survives on the basis of cooperation. Human beings, on the other hand, have constitutions, complex legal systems, and police forces; we have remarkable intelligence and a great capacity for love and affection. Yet, despite our many extraordinary qualities, we seem less able to cooperate.
In organizations, people work closely together every day. But despite working together, many feel lonely and stressed. Even though we are social animals, there is a lack of responsibility toward each other. We need to ask ourselves what’s going wrong.
I believe that our strong focus on material development and accumulating wealth has led us to neglect our basic human need for kindness and care. Reinstating a commitment to the oneness of humanity and altruism toward our brothers and sisters is fundamental for societies and organizations and their individuals to thrive in the long run. Every one of us has a responsibility to make this happen.
What can leaders do?
Be mindful
Cultivate peace of mind. As human beings, we have a remarkable intelligence that allows us to analyze and plan for the future. We have language that enables us to communicate what we have understood to others. Since destructive emotions like anger and attachment cloud our ability to use our intelligence clearly, we need to tackle them.
Fear and anxiety easily give way to anger and violence. The opposite of fear is trust, which, related to warmheartedness, boosts our self-confidence. Compassion also reduces fear, reflecting as it does a concern for others’ well-being. This, not money and power, is what really attracts friends. When we’re under the sway of anger or attachment, we’re limited in our ability to take a full and realistic view of the situation. When the mind is compassionate, it is calm and we’re able to use our sense of reason practically, realistically, and with determination.
Be selfless
We are naturally driven by self-interest; it’s necessary to survive. But we need wise self-interest that is generous and cooperative, taking others’ interests into account. Cooperation comes from friendship, friendship comes from trust, and trust comes from kindheartedness. Once you have a genuine sense of concern for others, there’s no room for cheating, bullying, or exploitation; instead, you can be honest, truthful, and transparent in your conduct.
Be compassionate
The ultimate source of a happy life is warmheartedness. Even animals display some sense of compassion. When it comes to human beings, compassion can be combined with intelligence. Through the application of reason, compassion can be extended to all 7 billion human beings. Destructive emotions are related to ignorance, while compassion is a constructive emotion related to intelligence. Consequently, it can be taught and learned.
The source of a happy life is within us. Troublemakers in many parts of the world are often quite well-educated, so it is not just education that we need. What we need is to pay attention to inner values.
The distinction between violence and nonviolence lies less in the nature of a particular action and more in the motivation behind the action. Actions motivated by anger and greed tend to be violent, whereas those motivated by compassion and concern for others are generally peaceful. We won’t bring about peace in the world merely by praying for it; we have to take steps to tackle the violence and corruption that disrupt peace. We can’t expect change if we don’t take action.
Peace also means being undisturbed, free from danger. It relates to our mental attitude and whether we have a calm mind. What is crucial to realize is that, ultimately, peace of mind is within us; it requires that we develop a warm heart and use our intelligence. People often don’t realize that warmheartedness, compassion, and love are actually factors for our survival.
Buddhist tradition describes three styles of compassionate leadership: the trailblazer, who leads from the front, takes risks, and sets an example; the ferryman, who accompanies those in his care and shapes the ups and downs of the crossing; and the shepherd, who sees every one of his flock into safety before himself. Three styles, three approaches, but what they have in common is an all-encompassing concern for the welfare of those they lead.
The Dalai Lama is the spiritual leader of the Tibetan People. He was awarded the Nobel Peace Prize in 1989 and the U.S. Congressional Gold Medal in 2007. Rasmus Hougaard is the founder and managing director of Potential Project, a global leadership and organizational development firm, and the coauthor of the new book, The Mind of the Leader: How to Lead Yourself, Your People, and Your Organization for Extraordinary Results. He has created an app that will help you develop mindfulness, selflessness, and compassion in your leadership.
Article link: https://hbr.org/2019/02/the-dalai-lama-on-why-leaders-should-be-mindful-selfless-and-compassionate?
Science is about to become much more exciting—and that will affect us all, argues Google’s former CEO.
By Eric Schmidt July 5, 2023

It’s yet another summer of extreme weather, with unprecedented heat waves, wildfires, and floods battering countries around the world. In response to the challenge of accurately predicting such extremes, semiconductor giant Nvidia is building an AI-powered “digital twin” for the entire planet.
This digital twin, called Earth-2, will use predictions from FourCastNet, an AI model that uses tens of terabytes of Earth system data and can predict the next two weeks of weather tens of thousands of times faster and more accurately than current forecasting methods.
Conventional weather prediction systems have the capacity to generate around 50 predictions for the week ahead. FourCastNet can instead predict thousands of possibilities, accurately capturing the risk of rare but deadly disasters and thereby giving vulnerable populations valuable time to prepare and evacuate.
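To make the ensemble idea concrete, here is a minimal sketch of ensemble forecasting on a toy chaotic system. It is illustrative only: the logistic map stands in for the atmosphere, and the ensemble sizes, perturbation scale, and "extreme event" threshold are all invented for the example; FourCastNet itself is a large neural surrogate model, not shown here.

```rust
// Illustrative only: ensemble forecasting over a toy chaotic system.
// The logistic map stands in for the atmosphere; every number here is
// invented for the sketch.

fn step(x: f64) -> f64 {
    // Chaotic logistic map: tiny differences in input grow exponentially.
    3.99 * x * (1.0 - x)
}

fn forecast(mut x: f64, horizon: usize) -> f64 {
    for _ in 0..horizon {
        x = step(x);
    }
    x
}

// Fraction of ensemble members that end above `threshold`: an estimate
// of the probability of an "extreme event" at the forecast horizon.
fn tail_risk(members: usize, horizon: usize, threshold: f64) -> f64 {
    let x0 = 0.4321; // the "observed" initial condition
    let mut hits = 0;
    for i in 0..members {
        // Perturb the initial condition slightly for each member,
        // mimicking uncertainty in observations of the real system.
        let eps = 1e-6 * (i as f64 / members as f64 - 0.5);
        if forecast(x0 + eps, horizon) > threshold {
            hits += 1;
        }
    }
    hits as f64 / members as f64
}

fn main() {
    // A small ensemble can miss rare outcomes that a large one resolves.
    for &n in &[50usize, 10_000] {
        println!("members = {:>6}: P(extreme) ~ {:.4}", n, tail_risk(n, 100, 0.98));
    }
}
```

Even in this toy, the value of scale is visible: the larger ensemble resolves the probability of the rare outcome far more finely than the 50-member one can.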
The hoped-for revolution in climate modeling is just the beginning. With the advent of AI, science is about to become much more exciting—and in some ways unrecognizable. The reverberations of this shift will be felt far outside the lab; they will affect us all.
If we play our cards right, with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, AI can rewrite the scientific process. We can build a future where AI-powered tools will both save us from mindless and time-consuming labor and also lead us to creative inventions and discoveries, encouraging breakthroughs that would otherwise take decades.
AI in recent months has become almost synonymous with large language models, or LLMs, but in science there are a multitude of different model architectures that may have even bigger impacts. In the past decade, most progress in science has come through smaller, “classical” models focused on specific questions. These models have already brought about profound advances. More recently, larger deep-learning models that are beginning to incorporate cross-domain knowledge and generative AI have expanded what is possible.
Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the US Food and Drug Administration has already cleared 523 devices that use AI—75% of them for use in radiology.
Reimagining science
At its core, the scientific process we all learned in elementary school will remain the same: conduct background research, identify a hypothesis, test it through experimentation, analyze the collected data, and reach a conclusion. But AI has the potential to revolutionize how each of these components looks in the future.
Artificial intelligence is already transforming how some scientists conduct literature reviews. Tools like PaperQA and Elicit harness LLMs to scan databases of articles and produce succinct and accurate summaries of the existing literature—citations included.
Once the literature review is complete, scientists form a hypothesis to be tested. LLMs at their core work by predicting the next word in a sentence, building up to entire sentences and paragraphs. This technique makes LLMs uniquely suited to scaled problems intrinsic to science’s hierarchical structure and could enable them to predict the next big discovery in physics or biology.
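As a concrete illustration of that next-word mechanism, the sketch below turns a vector of logits (one score per vocabulary word) into probabilities with softmax and picks the most likely continuation. The four-word vocabulary and hand-written scores are invented for the example; a real LLM produces logits over tens of thousands of tokens with a neural network.

```rust
// A minimal sketch of the core step the article describes:
// logits -> softmax -> next word. The vocabulary and scores are
// invented for illustration.

fn softmax(logits: &[f64]) -> Vec<f64> {
    // Subtract the max before exponentiating, for numerical stability.
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.into_iter().map(|e| e / sum).collect()
}

fn main() {
    let vocab = ["cat", "dog", "hypothesis", "experiment"];
    // Scores a model might assign after a prompt like "We tested the ..."
    let logits = [0.1, 0.2, 2.5, 3.1];
    let probs = softmax(&logits);

    // Greedy decoding: take the single most probable next word.
    let mut best = 0;
    for i in 1..probs.len() {
        if probs[i] > probs[best] {
            best = i;
        }
    }
    println!("next word: {} (p = {:.2})", vocab[best], probs[best]);
}
```

Greedy decoding is shown for determinism; sampling from the probabilities instead is what makes generated text vary from run to run.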
AI can also spread the search net for hypotheses wider and narrow the net more quickly. As a result, AI tools can help formulate stronger hypotheses, such as models that spit out more promising candidates for new drugs. We’re already seeing simulations running multiple orders of magnitude faster than just a few years ago, allowing scientists to try more design options in simulation before carrying out real-world experiments.
Scientists at Caltech, for example, used an AI fluid simulation model to automatically design a better catheter that prevents bacteria from swimming upstream and causing infections. This kind of ability will fundamentally shift the incremental process of scientific discovery, allowing researchers to design for the optimal solution from the outset rather than progress through a long line of progressively better designs, as we saw in years of innovation on filaments in lightbulb design.
Moving on to the experimentation step, AI will be able to conduct experiments faster, cheaper, and at greater scale. For example, we can build AI-powered machines with hundreds of micropipettes running day and night to create samples at a rate no human could match. Instead of limiting themselves to just six experiments, scientists can use AI tools to run a thousand.
Scientists who are worried about their next grant, publication, or tenure process will no longer be bound to safe experiments with the highest odds of success; they will be free to pursue bolder and more interdisciplinary hypotheses. When evaluating new molecules, for example, researchers tend to stick to candidates similar in structure to those we already know, but AI models do not have to have the same biases and constraints.
Eventually, much of science will be conducted at “self-driving labs”—automated robotic platforms combined with artificial intelligence. Here, we can bring AI prowess from the digital realm into the physical world. Such self-driving labs are already emerging at companies like Emerald Cloud Lab and Artificial, and even at Argonne National Laboratory.
Finally, at the stage of analysis and conclusion, self-driving labs will move beyond automation and, informed by experimental results they produced, use LLMs to interpret the results and recommend the next experiment to run. Then, as partners in the research process, the AI lab assistant could order supplies to replace those used in earlier experiments and set up and run the next recommended experiments overnight, with results ready to deliver in the morning—all while the experimenter is home sleeping.
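The control loop behind such a lab can be sketched in a few lines, under heavy assumptions: `run_assay` below stands in for the robotic platform and `propose_next` for the planner (an LLM or Bayesian optimizer in a real system), and the yield curve being optimized is entirely made up.

```rust
// Hypothetical scaffolding for a "self-driving lab" loop: every name
// and number here is invented so the loop has something to optimize.

fn run_assay(temperature: f64) -> f64 {
    // Pretend lab result: yield peaks near 65 degrees C.
    (-(temperature - 65.0).powi(2) / 200.0).exp()
}

fn propose_next(history: &[(f64, f64)]) -> f64 {
    // Planner stand-in: step upward from the best condition seen so far.
    // A real planner (LLM, Bayesian optimizer) would reason far more richly.
    let mut best = &history[0];
    for trial in history {
        if trial.1 > best.1 {
            best = trial;
        }
    }
    best.0 + 5.0
}

fn main() {
    // Seed experiment, then let the loop run "overnight" eight times.
    let mut history = vec![(25.0, run_assay(25.0))];
    for night in 1..=8 {
        let t = propose_next(&history);
        let y = run_assay(t); // queued to the robot; results by morning
        println!("night {night}: tried {t:.1} C, yield {y:.3}");
        history.push((t, y));
    }
}
```

The structure is the point: results feed the planner, the planner queues the next run, and the loop continues without a human in the middle of each iteration.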
Possibilities and limitations
Young researchers might be shifting nervously in their seats at the prospect. Luckily, the new jobs that emerge from this revolution are likely to be more creative and less mindless than most current lab work.
AI tools can lower the barrier to entry for new scientists and open up opportunities to those traditionally excluded from the field. With LLMs able to assist in building code, STEM students will no longer have to master obscure coding languages, opening the doors of the ivory tower to new, nontraditional talent and making it easier for scientists to engage with fields beyond their own. Soon, specifically trained LLMs might move beyond offering first drafts of written work like grant proposals and might be developed to offer “peer” reviews of new papers alongside human reviewers.
AI tools have incredible potential, but we must recognize where the human touch is still important and avoid running before we can walk. For example, successfully melding AI and robotics through self-driving labs will not be easy. There is a lot of tacit knowledge that scientists learn in labs that is difficult to pass to AI-powered robotics. Similarly, we should be cognizant of the limitations—and even hallucinations—of current LLMs before we offload much of our paperwork, research, and analysis to them.
Companies like OpenAI and DeepMind are still leading the way in new breakthroughs, models, and research papers, but the current dominance of industry won’t last forever. DeepMind has so far excelled by focusing on well-defined problems with clear objectives and metrics. One of its most famous successes came at the Critical Assessment of Structure Prediction, a biennial competition where research teams predict a protein’s exact shape from the order of its amino acids.
From 2006 to 2016, the average score in the hardest category ranged from around 30 to 40 on CASP’s scale of 1 to 100. Suddenly, in 2018, DeepMind’s AlphaFold model scored a whopping 58. An updated version called AlphaFold2 scored 87 two years later, leaving its human competitors even further in the dust.
Thanks to open-source resources, we’re beginning to see a pattern where industry hits certain benchmarks and then academia steps in to refine the model. After DeepMind’s release of AlphaFold, Minkyung Baek and David Baker at the University of Washington released RoseTTAFold, which uses DeepMind’s framework to predict the structures of protein complexes instead of only the single protein structures that AlphaFold could originally handle. More important, academics are more shielded from the competitive pressures of the market, so they can venture beyond the well-defined problems and measurable successes that attract DeepMind.
In addition to reaching new heights, AI can help verify what we already know by addressing science’s replicability crisis. Around 70% of scientists report having been unable to reproduce another scientist’s experiment—a disheartening figure. As AI lowers the cost and effort of running experiments, it will in some cases be easier to replicate results or conclude that they can’t be replicated, contributing to a greater trust in science.
The key to replicability and trust is transparency. In an ideal world, everything in science would be open access, from articles without paywalls to open-source data, code, and models. Sadly, with the dangers that such models are able to unleash, it isn’t always realistic to make all models open source. In many cases, the risks of being completely transparent outweigh the benefits of trust and equity. Nevertheless, to the extent that we can be transparent with models—especially classical AI models with more limited uses—we should be.
The importance of regulation
With all these areas, it’s essential to remember the inherent limitations and risks of artificial intelligence. AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands. Andrew White, a professor at the University of Rochester, was contracted by OpenAI to participate in a “red team” that could expose GPT-4’s risks before it was released. Using the language model and giving it access to tools, White found it could propose dangerous compounds and even order them from a chemical supplier. To test the process, he had a (safe) test compound shipped to his house the next week. OpenAI says it used his findings to tweak GPT-4 before it was released.
Even humans with entirely good intentions can still prompt AIs to produce bad outcomes. We should worry less about creating the Terminator and, as computer scientist Stuart Russell has put it, more about becoming King Midas, who wished for everything he touched to turn to gold and thereby accidentally killed his daughter with a hug.
We have no mechanism to prompt an AI to change its goal, even when it reacts to its goal in a way we don’t anticipate. One oft-cited hypothetical asks you to imagine telling an AI to produce as many paper clips as possible. Determined to accomplish its goal, the model hijacks the electrical grid and kills any human who tries to stop it as the paper clips keep piling up. The world is left in shambles. The AI pats itself on the back; it has done its job. (In a wink to this famous thought experiment, many OpenAI employees carry around branded paper clips.)
OpenAI has managed to implement an impressive array of safeguards, but these will only remain in place as long as GPT-4 is housed on OpenAI’s servers. The day will likely soon come when someone manages to copy the model and house it on their own servers. Such frontier models need to be protected to prevent thieves from removing the AI safety guardrails so carefully added by their original developers.
To address both intentional and unintentional bad uses of AI, we need smart, well-informed regulation—on both tech giants and open-source models—that doesn’t keep us from using AI in ways that can be beneficial to science. Although tech companies have made strides in AI safety, government regulators are currently woefully underprepared to enact proper laws and should take greater steps to educate themselves on the latest developments.
Beyond regulation, governments—along with philanthropy—can support scientific projects with a high social return but little financial return or academic incentive. Several areas are especially urgent, including climate change, biosecurity, and pandemic preparedness. It is in these areas where we most need the speed and scale that AI simulations and self-driving labs offer.
Government can also help develop large, high-quality data sets such as those on which AlphaFold relied—insofar as safety concerns allow. Open data sets are public goods: they benefit many researchers, but researchers have little incentive to create them themselves. Government and philanthropic organizations can work with universities and companies to pinpoint seminal challenges in science that would benefit from access to powerful databases.
Chemistry, for example, has one language that unites the field, which would seem to lend itself to easy analysis by AI models. But no one has properly aggregated data on molecular properties stored across dozens of databases, which keeps us from accessing insights into the field that would be within reach of AI models if we had a single source. Biology, meanwhile, lacks the known and calculable data that underlies physics or chemistry, with subfields like intrinsically disordered proteins that are still mysterious to us. It will therefore require a more concerted effort to understand—and even record—the data for an aggregated database.
The road ahead to broad AI adoption in the sciences is long, with a lot that we must get right, from building the right databases to implementing the right regulations, mitigating biases in AI algorithms to ensuring equal access to computing resources across borders.
Nevertheless, this is a profoundly optimistic moment. Previous paradigm shifts in science, like the emergence of the scientific process or big data, have been inwardly focused—making science more precise, accurate, and methodical. AI, meanwhile, is expansive, allowing us to combine information in novel ways and bring creativity and progress in the sciences to new heights.
Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
New from McKinsey & Company
What telcos need to know
Mobile World Congress (MWC)—the biggest event in the connectivity industry—wrapped up in Barcelona on Thursday. The show featured a barrage of innovative technology, including AI wearables, hyper-electric vehicles, and even robot dogs. But it also fostered meaningful conversations on crucial topics, including how telcos can seize growth opportunities and build resilience in an unpredictable market. “It’s one of the most intellectually stimulating events of the year, where you get to meet with all the leaders and shapers of the industry,” says McKinsey’s Jorge Amar. “I always come back energized.” With MWC in the rearview, check out these seven key insights shaping the future of the telecom industry, and listen to a new episode of The McKinsey Podcast with McKinsey’s Ferry Grijpink to get his key takeaways from the event.
1. Telcos could achieve significant impact with generative AI.
Read: How generative AI could revitalize profitability for telcos
2. Network APIs offer telcos a chance to generate sizable returns on their 5G investments.
Read: What it will take for telcos to unlock value from network APIs
3. 6G will offer telcos the opportunity to revitalize the industry.
Read: Shaping the future of 6G
4. Cybersecurity is the most important tech need in 2024.
Read: Technology and telecommunications B2B customer buying trends: Bright horizons with some warning signs
5. The fiber-to-the-home (FTTH) market offers significant growth potential.
Read: The keys to deploying fiber networks faster and cheaper
6. Pressure to decarbonize the sector will mount.
Read: The growing imperative of energy optimization for telco networks
7. Network customer experience (CX) is a growing priority.
Read: The network is the product: How AI can put telco customer experience in focus
To see more essential reading on topics that matter, visit McKinsey Themes.
— Edited by Eleni Kostopoulos
FEBRUARY 26, 2024, 12:00 PM EST
With an improved Partner Plus program and a mandate that all products be ‘channel-friendly,’ IBM CEO Arvind Krishna aims to bring partners into the enterprise AI market that sits below the surface of today’s trendy use cases.
IBM CEO Arvind Krishna On GenAI’s ‘Big New Wave’
“We’ve completely revamped our strategy of how we work with partners,” IBM CEO Arvind Krishna said in an interview with CRN.
To hear IBM Chairman and CEO Arvind Krishna tell it, the artificial intelligence market is like an iceberg. For now, most vendors and users are attracted by the use cases above the surface—using text generators to write emails and image generators to make art, for example.
But it’s the enterprise AI market below the surface that IBM wants to serve with its partners, Krishna told CRN in a recent interview. And Krishna’s mandate that the Armonk, N.Y.-based vendor reach 50 percent of its revenue from the channel over the next two to three years is key to reaching that hidden treasure.
“This is a massive market,” said Krishna. “When I look at all the estimates … the numbers are so big that it is hard for most people to comprehend them. … That tells you that there is a lot of opportunity for a large number of us.”
In 2023, IBM moved channel-generated sales from the low-20-percent range to about 30 percent of total revenue. And IBM channel chief Kate Woolley, general manager of the IBM ecosystem—perhaps best viewed as the captain of the channel initiative—told CRN that she is up to the challenge.
“Arvind’s set a pretty big goal for us,” Woolley said. “Arvind’s been clear on the percent of revenue of IBM technology with partners. And my goal is to make a very big dent in that this year.”
GenAI as a whole has the potential to generate value equivalent to up to $4.4 trillion in global corporate profits annually, according to McKinsey research Krishna follows. That number includes up to an additional $340 billion a year in value for the banking sector and up to an additional $660 billion in annual operating profits in the retail and consumer packaged goods sector.
Tackling that demand—working with partners to make AI a reality at scale in 2024 and 2025—is part of why Krishna mandated more investment in IBM’s partner program, revamped in January 2023 as Partner Plus.
“What we have to offer [partners] is growth,” Krishna said. “And what we also have to offer them is an attractive market where the clients like these technologies. … It’s important [for vendors] to bring the innovation and to bring the demand from the market to the table. And [partners] should put that onus on us.”
Multiple IBM partners told CRN they are seeing the benefits of changes IBM has made to Partner Plus, from better aligning the goals of IBM sellers with the channel to better aligning certifications and badges with product offerings, to increasing access to IBM experts and innovation labs.
And even though the generative AI market is still in its infancy, IBM partners are bullish about the opportunities ahead.
Channel Attachment On The Product Side
Krishna’s mandate for IBM to work more closely with partners has implications for IBM’s product plans.
“Any new product has to be channel-friendly,” Krishna said. “I can’t think of one product I would want to build or bring to market unless we could also give it to the channel. I wouldn’t say that was always historically true. But today, I can state that with absolute conviction.”
Krishna estimated that about 30 percent of the IBM product business is sold with a partner in the mix today. “Half of that I’m not sure we would even get without the partner,” he said.
And GenAI is not just a fad to the IBM CEO. It is a new way of doing business.
“It is going to generate business value for our clients,” Krishna said. “Our Watsonx platform to really help developers, whether it’s code, whether it’s modernization, all those things. … these are areas where, for our partners … they’ll be looking at this and say, ‘This is how we can bring a lot of innovation to our clients and help their business along the way.’”
Some of the most practical and urgent business use cases for IBM include improved customer contact center experiences, code generation to help customers rewrite COBOL and legacy languages for modern ones, and the ability for customers to choose better wealth management products based on population segments.
Watsonx Code Assistant for Z became generally available toward the end of 2023 and allows modernization of COBOL to Java. Meanwhile, Red Hat Ansible Lightspeed with IBM Watsonx Code Assistant, which provides GenAI-powered content recommendations from plain-English inputs, also became generally available late last year.
Multiple IBM partners told CRN that IBM AI and Red Hat Ansible automation technologies are key to meeting customer code and content generation demand.
One of those interested partners is Tallahassee, Fla.-based Mainline Information Systems, an honoree on CRN’s 2024 MSP 500. Mainline President and CEO Jeff Dobbelaere said code generation cuts across a variety of verticals, making it easy to scale that offering and meet the demands of mainframe customers modernizing their systems.
“We have a number of customers that have legacy code that they’re running and have been for 20, 30, 40 years and need to find a path to more modern systems,” Dobbelaere said. “And we see IBM’s focus on generative AI for code as a path to get there… We’re still in [GenAI’s] infancy, and the sky’s the limit. We’ll see where it can go and where it can take us. But we’re starting to see some positive results already out of the Watsonx portfolio.”
As part of IBM’s investment in its partner program, the vendor will offer more technical help to partners, Krishna said. This includes client engineering, customer success managers and more resources “to make their end client even more happy.”
An example of IBM’s client success team working with a partner comes from one of the vendor’s more recent additions to the ecosystem—Phoenix-based NucleusTeq, founded in 2018 and focused on enterprise data modernization, big data engineering and AI and machine learning services.
Will Sellenraad, the solution provider’s executive vice president and CRO, told CRN that a law firm customer was seeking a way to automate labor needed for health disability claims for veterans.
“What we were able to do is take the information from this law firm to our client success team within IBM, do a proof of concept and show that we can go from 100 percent manual to 60 percent automation, which we think we can get even [better],” Sellenraad said.
Woolley said that part of realizing Krishna’s demand for channel-friendly new products is getting her organization to work more closely with product teams to make sure partners have access to training, trials, demos, digital marketing kits and “pricing and packaging that makes sense for partners, no matter whether they’re selling to very large enterprises or to smaller enterprises.”
Partner Plus Changes
Woolley said her goals for 2024 include adding new services-led and other partners to the ecosystem and getting more resources to them.
In January, IBM launched a service-specific track for Partner Plus members. Meanwhile, reaching 50 percent revenue with the channel means attaching more partners to the AI portfolio, Woolley said.
“There is unprecedented demand from partners to be able to leverage IBM’s strength in our AI portfolio and bring this to their clients or use it to enhance their products. That is a huge opportunity.”
Her goal for Partner Plus is to create a flexible program that meets the needs of partners of various sizes with a range of technological expertise. “For resell partners, today we have a range from the largest global resell partners and distributors right down to niche, three-person resell partners that are deeply technical on a part of the IBM portfolio,” she said. “We love that. We want that expertise in the market.”
NucleusTeq’s Sellenraad offered CRN the perspective of a past IBM partner that came back to the ecosystem. He joined NucleusTeq about two years ago—before the solution provider was an IBM partner—from an ISV that partnered with IBM.
Sellenraad steered the six-year-old startup into growing beyond being a Google, Microsoft and Amazon Web Services partner. He thought IBM’s product range, including its AI portfolio, was a good fit, and the changes in IBM’s partner program encouraged him to not only look more closely, but to make IBM a primary partner.
“They’re committed to the channel,” he said. “We have a great opportunity to really increase our sales this year.”
NucleusTeq became a new IBM partner in January 2023 and reached Gold partner status by the end of the year. It delivered more than $5 million in sales, and more than seven employees received certifications for the IBM portfolio.
Krishna said that the new Partner Plus portal and program also aim to make rebates, commissions and other incentives easier to attain for partners.
The creation of Partner Plus—a “fundamental and hard shift” in how IBM does business, Krishna said—resulted in IBM’s promise to sell to millions of clients only through partners, leaving about 500 accounts worldwide that “want and demand a direct relationship with IBM.”
“So 99.9 percent of the market, we only want to go with a channel partner,” Krishna said. “We do not want to go alone.”
When asked by CRN whether he views more resources for the channel as a cost of doing business, he said that channel-friendliness is his philosophy and good business.
“Not only is it my psychology or my whimsy, it’s economically rational to work well with the channel,” he continued. “That’s why you always hear me talk about it. … There are very large parts of the market which we cannot address except with the channel. So … by definition, the channel is not a tradeoff. … It is a fundamental part of the business equation of how we go get there.”
Customers’ Appetite For AI
Multiple IBM partners who spoke with CRN said AI can serve an important function in much of the work that they handle, including modernizing customer use of IBM mainframes.
Paola Doebel, senior vice president of North America at Downers Grove, Ill.-based IBM partner Ensono—an honoree on CRN’s 2024 MSP 500—told CRN that the MSP will focus this year on its modern cloud-connected mainframe service for customers, and AI-backed capabilities will allow it to achieve that work at scale.
While many of Ensono’s conversations with customers have been focused on AI level-setting—what’s hype, what’s realistic—the conversations have been helpful for the MSP.
“There is a lot of hype, there is a lot of conversation, but some of that excitement is grounded in actual real solutions that enable us to accelerate outcomes,” Doebel said. “Some of that hype is just hype, like it always is with everything. But it’s not all smoke. There is actual real fire here.”
For example, early use cases for Ensono customers using the MSP’s cloud-connected mainframe solution, which can leverage AI, include real-time fraud detection, real-time data availability for traders, and connecting mainframe data to cloud applications, she said.
Mainline’s Dobbelaere said that as a solution provider, his company has to be cautious about where it makes investments in new technologies. “There are a lot of technologies that come and go, and there may or may not be opportunity for the channel,” he said.
But the interest in GenAI from vendor partners and customers proved to him that the opportunity in the emerging technology is strong.
Delivering GenAI solutions wasn’t a huge lift for Mainline, which already had employees trained on data and business analytics, x86 technologies and accelerators from Nvidia and AMD. “The channel is uniquely positioned to bring together solutions that cross vendors,” he said.
The capital costs of implementing GenAI, however, are still a concern in an environment where the U.S. faces high inflation rates and global geopolitics threaten the macroeconomy. Multiple IBM partners told CRN they are seeing customers more deeply scrutinize technology spending, lengthening the sales cycle.
Ensono’s Doebel said that customers are asking more questions about value and ROI.
“The business case to execute something at scale has to be verified, justified and quantified,” Doebel said. “So it’s a couple of extra steps in the process to adopt anything new. Or they’re planning for something in the future that they’re trying to get budget for in a year or two.”
She said she sees the behavior continuing in 2024, but solution providers such as Ensono are ready to help customers’ employees make the AI case with board-ready content, analytical business cases, quantitative outputs, ROI theses and other materials, she said.
For partners navigating capital cost as an obstacle to selling customers on AI, Woolley encouraged them to work with IBM sellers in their territories.
Dayn Kelley, director of strategic alliances for Irvine, Calif.-based IBM partner Technologent—No. 61 on CRN’s 2023 Solution Provider 500—said customers have expressed so much interest in and concern around AI that the solution provider has built a dedicated team focused on the technology as part of its investments toward taking a leadership position in the space.
“We have customers we need to support,” Kelley said. “We need to be at the forefront.”
He said that he has worked with customers on navigating financials and challenging project schedules to meet budget concerns—and IBM has been a particularly helpful partner in this area.
While some Technologent customers are weathering economic challenges, the outlook for 2024 is still strong, he said. Customer AI and emerging technology projects are still forecast for this year.
Mainline’s Dobbelaere said that despite reports around economic concerns and conservative spending that usually occurs in an election year, he’s still optimistic about tech spending overall in 2024.
“2023 was a very good year for us. It looks like we outpaced 2022,” he said. “And there’s no reason for us to believe that 2024 would be any different. So we are optimistic.”
AI Education Still Imperative
Juan Orlandini, CTO of the North America branch of Chandler, Ariz.-based IBM partner Insight Enterprises—No. 16 on CRN’s 2023 Solution Provider 500—said educating customers on AI hype versus AI reality is still a big part of the job.
In 2023, Orlandini made 60 trips in North America to conduct seminars and meet with customers and partners to set expectations around the technology and answer questions from organizations large and small.
He recalled walking one customer through the prompts he used to create a particular piece of artwork with GenAI. In another example, one of the largest media companies in the world consulted with him on how to leverage AI without leaking intellectual property or consuming someone else’s. “It doesn’t matter what size the organization, you very much have to go through this process of making sure that you have the right outcome with the right technology decision,” Orlandini said.
“There’s a lot of hype and marketing. Everybody and their brother is doing AI now … and that is confusing [customers].”
An important role of AI-minded solution providers, Orlandini said, is assessing whether it is even the right technology for the job.
“People sometimes give GenAI the magical superpowers of predicting the future. It cannot. … You have to worry about making sure that some of the hype gets taken care of,” Orlandini said.
Most users won’t create foundational AI models, and most larger organizations will adopt AI and modify it, publishing AI apps for internal or external use. And everyone will consume AI within apps, he said.
The AI hype is not solely vendor-driven. Orlandini has also interacted with executives at customers who have added mandates and opened budgets for at least testing AI as a way to grow revenue or save costs.
“There has been a huge amount of pressure to go and adopt anything that does that so they can get a report back and say, ‘We tried it, and it’s awesome.’ Or, ‘We tried it and it didn’t meet our needs,’” he said. “So we have seen very much that there is an opening of pocketbooks. But we’ve also seen that some people start and then they’re like, ‘Oh, wait, this is a lot more involved than we thought.’ And then they’re taking a step back and a more measured approach.”
Jason Eichenholz, senior vice president and global head of ecosystems and partnerships at Wipro — an India-based IBM partner of more than 20 years and No. 15 on CRN’s 2023 Solution Provider 500—told CRN that at the end of last year, customers were developing GenAI use cases and establishing 2024 budgets “to start deploying either proofs of concept into production or to start working on new production initiatives.”
For Wipro’s IBM practice, one of the biggest opportunities is IBM’s position as a more neutral technology stack—akin to its reputation in the cloud market—that works with other foundation models, which should resonate with the Wipro customer base that wants purpose-built AI models, he said.
Just as customers look to Wipro and other solution providers as neutral orchestrators of technology, IBM is becoming more of an orchestrator of platforms, he said.
For his part, Krishna believes that customers will consume new AI offerings as a service on the cloud. IBM can run AI on its cloud, on the customer’s premises and in competing clouds from Microsoft and Amazon Web Services.
He also believes that no single vendor will dominate AI. He likened it to the automobile market. “It’s like saying, ‘Should there be only one car company?’ There are many because [the market] is fit for purpose. Somebody is great at sports cars. Somebody is great at family sedans, somebody’s great at SUVs, somebody’s great at pickups,” he said.
“There are going to be spaces [within AI where] we would definitely like to be considered leaders—whether that is No. 1, 2 or 3 … in the enterprise AI space,” he continued. “Whether we want to work with people on … modernizing their developer environment, on helping them with their contact centers, absolutely. In those spaces, we’d like to get to a good market position.”
He said that he views other AI vendors not as competitors, but partners. “When you play together and you service the client, I actually believe we all tend to win,” he said. “If you think of it as a zero-sum game, that means it is either us or them. If I tend to think of it as a win-win-win, then you can actually expand the pie. So even a small slice of a big pie is more pie than all of a small pie.”
IBM Partners Look Ahead
All of the IBM partners who spoke with CRN praised the changes to the partner program.
Wipro’s Eichenholz said that “we feel like we’re being heard in terms of our feedback and our recommendations.” He called Krishna “super supportive of the … partner ecosystem.”
Looking ahead, Eichenholz said he would like to see consistent pricing from IBM and its distributors so that he spends less time shopping for customers. He also encouraged IBM to keep investing in integration and orchestration.
“For us, in terms of what we look for from a partner, in terms of technical enablement, financial incentives and co-creation and resource availability, they are best of breed right now,” he said. “IBM is really putting their money and their resources where their mouth is. … We expect 2024 to be the year of the builder for generative AI, but also the year of the partner for IBM partners.”
Mainline’s Dobbelaere said that IBM is on the right track in sharing more education, sandboxing resources and use cases with partners. He looks forward to use cases with more repeatability.
“Ultimately, use cases are the most important,” he said. “And they will continue to evolve. It’s difficult for the channel to create bespoke solutions for each and every customer to solve their unique challenges. And the more use cases we have that provide some repeatability, the more that will allow the channel to thrive.”

Leaders in Industry Support White House Call to Address Root Cause of Many of the Worst Cyber Attacks
Read the full report here
WASHINGTON – Today, the White House Office of the National Cyber Director (ONCD) released a report calling on the technical community to proactively reduce the attack surface in cyberspace. ONCD makes the case that technology manufacturers can prevent entire classes of vulnerabilities from entering the digital ecosystem by adopting memory safe programming languages. ONCD is also encouraging the research community to address the problem of software measurability to enable the development of better diagnostics that measure cybersecurity quality.
The report is titled “Back to the Building Blocks: A Path Toward Secure and Measurable Software.”
“We, as a nation, have the ability – and the responsibility – to reduce the attack surface in cyberspace and prevent entire classes of security bugs from entering the digital ecosystem but that means we need to tackle the hard problem of moving to memory safe programming languages,” said National Cyber Director Harry Coker. “Thanks to the work of our ONCD team and some tremendous collaboration from the technical community and our public and private sector partners, the report released today outlines the threat and opportunity available to us as we move toward a future where software is memory safe and secure by design. I’m also pleased that we are working with and calling on the academic community to help us solve another hard problem: how do we develop better diagnostics to measure cybersecurity quality? Addressing these challenges is imperative to ensuring we can secure our digital ecosystem long-term and protect the security of our Nation.”
By adopting an engineering-forward approach to policymaking, ONCD is ensuring that the technical community’s expertise is reflected in how the Federal Government approaches these problems. Creators of software and hardware can have an outsized impact on the Nation’s shared security by factoring cybersecurity outcomes into the manufacturing process.
“Some of the most infamous cyber events in history – the Morris worm of 1988, the Slammer worm of 2003, the Heartbleed vulnerability in 2014, the Trident exploit of 2016, the Blastpass exploit of 2023 – were headline-grabbing cyberattacks that caused real-world damage to the systems that society relies on every day. Underlying all of them is a common root cause: memory safety vulnerabilities. For thirty-five years, memory safety vulnerabilities have plagued the digital ecosystem, but it doesn’t have to be this way,” says Anjana Rajan, Assistant National Cyber Director for Technology Security. “This report was created for engineers by engineers because we know they can make the architecture and design decisions about the building blocks they consume – and this will have a tremendous effect on our ability to reduce the threat surface, protect the digital ecosystem and ultimately, the Nation.”
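For readers who want the bug class made concrete, here is a small illustration in Rust, one memory safe language of the kind the report encourages; the scenario and values are invented. It mirrors the shape of Heartbleed: code is asked to read more bytes than the buffer holds. In C, that read can silently return whatever sits in adjacent memory; with checked access in a memory safe language, the out-of-bounds read is simply refused.

```rust
// Invented scenario, mirroring the shape of a Heartbleed-style bug:
// the requested read length is larger than the buffer actually is.

fn main() {
    let secrets: Vec<u8> = vec![0x42, 0x13, 0x37];
    let requested_len: usize = 64; // attacker-influenced, too large

    for i in 0..requested_len {
        match secrets.get(i) {
            Some(byte) => println!("byte {i}: {byte:#04x}"),
            None => {
                // In C, this read could silently return adjacent memory.
                // Checked access makes the error explicit instead.
                println!("read at index {i} refused: out of bounds");
                break;
            }
        }
    }
}
```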
ONCD has engaged with a diverse group of stakeholders, rallying them to join the Administration’s effort. Statements of support from leaders across academia, civil society, and industry can be found here.
In line with two major themes of the President’s National Cybersecurity Strategy released nearly one year ago, the report released today takes an important step toward shifting the responsibility of cybersecurity away from individuals and small businesses and onto large organizations like technology companies and the Federal Government that are more capable of managing the ever-evolving threat. This work also aligns with and builds upon secure by design programs and research and development efforts from across the Federal Government, including those led by CISA, NSA, FBI, and NIST.
The work on memory safety in the report complements interest from Congress on this topic. This includes the efforts of the U.S. Senate and House Appropriations Committees, who included directive report language requiring a briefing from ONCD on this issue in Fiscal Year 2023 appropriations legislation. Additionally, U.S. Senate Homeland Security and Governmental Affairs Committee Chairman Gary Peters (D-MI) and U.S. Senator Ron Wyden (D-OR) have highlighted their legislative efforts on memory safety to ONCD.
Read the full report, “Back to the Building Blocks: A Path Toward Secure and Measurable Software.”
Read our fact sheet here.
Read statements of support from industry, academia, and civil society here.
Watch a video address from Director Coker and Assistant National Cyber Director for Technology Security Rajan outlining the challenges and solutions presented in the technical report here.
Article link: https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/press-release-technical-report/

By NATALIE ALMS, FEBRUARY 20, 2024
The agency announced a job analysis survey on Monday for feds doing AI-related work, but the federal HR shop is lagging on implementation of the AI in Government Act of 2020.
The Office of Personnel Management is inching ahead on past-due requirements for artificial intelligence-related work mandated by Congress in the AI in Government Act of 2020, but still lags behind on others.
The agency is issuing an AI job analysis survey, it announced on Monday, to inform its development of a competency model, meant to help with the hiring, career development and performance management of federal employees doing AI work.
OPM said in a memo sent to agencies Monday that the survey is meant to validate the AI competencies the personnel agency has already identified.
“Developing a competency model for AI work is a key step towards ensuring federal agencies can attract, recruit, and hire skilled employees to accomplish artificial intelligence work,” the agency states in the memo.
The new OPM memo and survey help check off requirements to identify key skills and competencies for AI feds, OPM says, as required by the AI in Government Act of 2020.
However, OPM remains behind on other requirements mandated by that legislation, the 18-month deadline for which passed in July 2022. The Government Accountability Office has already dinged OPM for delays in a December report.
One of those requirements is for OPM to either create an occupational series for AI in government or update an existing one.
OPM told Nextgov/FCW that it intends to release more classification policy guidance for AI soon, although it will cover more than one position, as opposed to OPM setting up a single AI occupational series or updating an existing one.
OPM said in a response included in the GAO report that creating a single occupational series for AI in government is “not conducive to individual agency needs and missions” and that it will instead issue an AI position classification interpretive guide to help agencies identify AI work, figure out how to classify positions that require such work and qualify job-seekers for those roles.
The new memo to agencies also references that forthcoming guide, which OPM says will meet the requirements of the AI in Government Act of 2020.
That law also required that OPM estimate the number of AI employees by agency and put together two- and five-year forecasts for the number of feds working on AI in each agency. OPM told Nextgov/FCW that those estimates and forecasts have been completed, although GAO wrote in its report that OPM hadn’t provided documentation to show that it had done the estimation or the forecasts.
GAO also noted that the personnel agency hadn’t fulfilled requirements from a 2020 executive order to create an inventory of rotational programs to be used to expand the pool of AI experts in government and issue a report with recommendations for how agencies could leverage them to do so. However, OPM says it has since done the inventory and report.
All this is happening as the federal government attempts to rapidly staff up on AI-experienced talent as part of a “talent surge” sparked by an executive order issued last fall. Agencies’ success in reskilling and hiring such feds will be fundamental to the success of the AI executive order overall, experts have told Nextgov/FCW.
Agencies are already up against chronic problems in government hiring, like the struggle to match private sector wages in a competitive market or ensure that the experience for job seekers is smooth, despite the complicated government bureaucracy that dictates hiring.
Daniel Ho, a professor at Stanford Law School, senior fellow at the Stanford Institute for Human-Centered AI and director of the university’s RegLab, praised OPM’s actions since the executive order so far, including its authorization of direct hire for AI positions.
“But the puzzling delays and lack of transparency in responding to the AI in Government Act, requirements for which were due over a year and a half ago, reflect big structural challenges in federal hiring,” he told Nextgov/FCW via email. “Without a forecast and occupational series, the White House’s AI ‘talent surge’ could turn into ‘talent blip.'”
Editor’s note: This article has been updated to include OPM’s completion of the inventory and report.

Friday, Feb 23
The U.S. Department of Defense’s artificial intelligence office is enfeebled by a lack of appropriations from Congress and is having to scuttle some efforts to sustain others, its leader said.
“We have to cannibalize some things in order to be able to keep other things alive,” Craig Martell, the Defense Department’s chief digital and AI officer, or CDAO, told reporters Feb. 22.
Congress has yet to pass a full defense budget for fiscal year 2024, which began Oct. 1, even as the Biden administration readies its fiscal 2025 spending blueprint. At least 40 continuing resolutions, or stopgap funding bills, have been enacted since 2010.
That record of funding uncertainty hurts talent and training initiatives as well as what’s known as the AI scaffolding, or the virtual infrastructure that makes models usable, accurate and relevant to the military, Martell said.
“That kind of behavior, of being agile, sitting next to the operator, building and growing and building and changing and building and iterating, is very difficult, if not impossible, to do under the conditions of a continuing resolution,” he said on the sidelines of the Defense Data and AI Symposium in Washington.
The Defense Department sought $1.8 billion for AI in fiscal 2024. The department is juggling hundreds of AI-related projects, including some associated with major weapons systems such as the Joint Light Tactical Vehicle and the MQ-9 Unmanned Aerial Vehicle, the Government Accountability Office reported.
While the CDAO is relatively new — having been announced in 2021 and hitting its first full strides in 2022 — making the case for its existence and expenditures isn’t difficult, according to Martell, who previously worked on machine learning at Lyft and served as an associate chairman of computer science at the Naval Postgraduate School.
The public and private sectors are increasingly interested in AI and other pattern-recognition capabilities, and digital competition with China is steaming ahead. The Defense Department’s connectivity campaign known as Combined Joint All-Domain Command and Control hinges on data-and-analytics advancements made by the CDAO, as well.
Martell said he considers the budget struggles a normal part of the give and take in Washington.
“I don’t take the continuing resolution to be a slight against us,” he said.







