by Barry Pavel, Ivana Ke, Michael Spirtas, James Ryseff, Lea Sabbag, Gregory Smith, Keller Scholl, Domenique Lumpkin
- Related Topics:
- Artificial Intelligence
- Emerging Technologies
- Geopolitical Strategic Competition

Nations across the globe could see their power rise or fall depending on how they harness and manage the development of artificial intelligence (AI). Regardless of whether AI poses an existential risk to humanity, governments will need to develop new regulatory frameworks to identify, evaluate, and respond to the variety of AI-enabled challenges to come.
With the release of advanced forms of AI to the public early in 2023, public policy debates have rightly focused on such developments as the exacerbation of inequality, the loss of jobs, and the potential threat of human extinction if AI continues to evolve without effective guardrails. There has been less discussion about how AI might affect geopolitics and which actors might take the lead in the future development of AI and other advanced algorithms.
As AI continues to advance, geopolitics may never be the same. Humans organized in nation-states will have to work with another set of actors—AI-enabled machines—of equivalent or greater intelligence and, potentially, highly disruptive capabilities. In the age of geotechnopolitics, human identity and human perceptions of our roles in the world will be distinctly different; monumental scientific discoveries will emerge in ways that humans may not be able to comprehend. Consequently, the AI development path that ultimately unfolds will matter enormously for the shape and contours of the future world.
We outline several scenarios that illustrate how AI development could unfold in the near term, depending on who is in control. We held discussions with leading technologists, policymakers, and scholars spanning many sectors to generate our findings and recommendations. We presented these experts with the scenarios as a baseline to probe, reflect on, and critique. We sought to characterize the current trajectory of AI development and identify the most important factors for governing the evolution of this unprecedented technology.
Who could control the development of AI?
U.S. Companies Lead the Way

U.S. President Joe Biden appears virtually in a meeting with business and labor leaders about the CHIPS and Science Act, which aims to spur U.S. domestic chip and semiconductor manufacturing.
Photo by Jonathan Ernst/Reuters
The U.S. government continues to allow private corporations to develop AI without meaningfully regulating the technology or intervening in a way that changes those corporations’ behavior. This approach fits with the long-standing belief in the United States that the free market (and its profit-driven incentives) is the most effective mechanism to rapidly advance technologies like AI.[1] In this world, U.S. government personnel continue to lag behind engineers in the U.S. technology sector, both in their understanding of AI and in their ability to harness its power. Private corporations direct the investment of almost all research and development funding to improve AI, and the vast majority of U.S. technical talent continues to flock to Silicon Valley. The U.S. government seeks to achieve its policy goals by relying on the country’s innovators to develop new inventions that it could eventually purchase. In this world, the future relationship between the U.S. government and the technology sector looks much like the present: Companies engage in aggressive data-gathering of consumers, and social media continues to be a hotbed for disinformation and dissension.
U.S. Government Takeover

Top U.S. technology leaders, including Tesla chief executive officer (CEO) Elon Musk, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, and former Microsoft CEO Bill Gates take their seats for the start of a bipartisan AI Insight Forum, in September 2023.
Photo by Leah Millis/Reuters
AI advances are proceeding at a rapid rate, and concerns about catastrophic consequences lead the U.S. government—potentially in coordination with like-minded allies—to seize control of AI development. The United States chooses to abandon its traditional light-handed approach to regulating information technology and software development and instead embarks on large-scale regulation and oversight. This results in a monopoly by the U.S. government and select partners (e.g., the United Kingdom) over AI computing resources, data centers, advanced algorithms, and talent through nationalization and comprehensive regulation. Similar to past U.S. government initiatives, such as the Apollo Program and the Manhattan Project, AI is developed under government authority rather than under the auspices of private companies. In the defense sector, this could lead to an arms race dynamic as other governments initiate AI development programs of their own for fear that they will be left behind in an AI-driven era. Across the instruments of power, such nationalization could also shift the balance of haves versus have-nots as other countries that fail to keep up with the transition see their economies suffer because of a lack of ability to develop AI and incorporate it into their workforces.
Chinese Surprise

A Baidu sign is seen at the 2023 World Artificial Intelligence Conference in Shanghai, China.
Photo by Aly Song/Reuters
Akin to a Sputnik moment, three Chinese organizations—Huawei, Baidu, and the Beijing Academy of Artificial Intelligence (BAAI)—announce a major AI breakthrough, taking the world by surprise. In this world, AI progress in China is initially overlooked and consistently downplayed by policymakers from advanced, democratic economies. Chinese companies, research institutes, and key government labs leapfrog ahead of foreign competitors, in part because of their improved ability to absorb vast amounts of government funding. State-of-the-art AI models also have been steadily advancing, leveraging a competitive data advantage from the country’s massive population. Finally, China’s military-civil fusion enables major actors across industry, academia, and government to share resources across a common AI infrastructure. BAAI benefits from strategic partnerships with industry leaders and access to significant computational resources, such as the Pengcheng Laboratory’s Pengcheng Cloudbrain-II. This combination of world-leading expertise and enormous computing power allows China to scale AI research at an unprecedented rate, leading to breakthroughs in transformative AI research that catch the world off guard. This leads to intense concerns from U.S. political and military leaders that China’s newfound AI capabilities will provide it with an asymmetric military advantage over the United States.
Great Power Public-Private Consortium

Canada’s Prime Minister Justin Trudeau, Japan’s Prime Minister Fumio Kishida, U.S. President Joe Biden, Germany’s Chancellor Olaf Scholz, Indonesia’s President Joko Widodo, Italy’s Prime Minister Giorgia Meloni, Citigroup CEO Jane Fraser, and European Commission President Ursula von der Leyen confer during the 2023 Group of Seven summit in Hiroshima, Japan.
Photo by Jonathan Ernst/Reuters
Across the world, robust partnerships among government, global industry, civil society organizations, academia, and research institutions support the rapid development and deployment of AI. These partnerships form a consortium that carries out multi-stakeholder collaborations with access to large-scale datasets and to shared training, computing, and storage resources. Through funding from many governments, the consortium develops joint solutions and benchmarking efforts to evaluate, verify, and validate the trustworthiness, reliability, and robustness of AI systems. New and existing international government bodies, including the Abu Dhabi AI Council, rely on diverse participation and contributions to set standards for responsible AI use. The result is a healthy AI sector that supports economic growth that occurs concurrently with the development and evaluation of equitable, safe, and secure AI systems.
What have we learned?
Countries and companies will clash in new ways, and AI could become an actor, not just a factor
Countries and corporations have long competed for power. Big technology companies challenge governments in many of the old ways while adding new approaches to the mix. Similar to traditional multinational corporations, such companies reach across national boundaries; however, big tech companies also influence local communities in much more comprehensive and invasive ways because they touch consumers’ lives and gather data on their locations, activities, and habits. Big tech companies also affect national economies, domestic policy, and local politics in new ways because they influence the spread of information (and disinformation) and create new communities and subcultures. This has contributed to polarized populations in several countries and regime change in others. AI companies specifically enjoy access to vast investment funds and massive computing power, giving them additional advantages.
Although technology has often influenced geopolitics, the prospect of AI means that the technology itself could become a geopolitical actor. AI could have motives and objectives that differ considerably from those of governments and private companies. Humans’ inability to comprehend how AI “thinks” and our limited understanding of the second- and third-order effects of our commands or requests of AI are also very troubling. Humans have enough trouble interacting with one another. It remains to be seen how we will manage our relationships with one or more AIs.
We are entering an era of both enlightenment and chaos
The borderless nature of AI makes it hard to control or regulate. As computing power expands, models are optimized, and open-source frameworks mature, the ability to create highly impactful AI applications will become increasingly diffuse. In such a world, well-intentioned researchers and engineers will use this power to do wonderful things, ill-intentioned individuals will use it to do terrible things, and AIs could do both wonderful and terrible things. The net result is neither an unblemished era of enlightenment nor an unmitigated disaster, but a mix of both. Humanity will learn to muddle through and live with this game-changing technology, just as we have with so many other transformative technologies in the past.
The United States and China will lead in different ways
Although U.S. policymakers often worry that China will take the lead in a race for AI, the consensus among the experts we consulted was that China is highly unlikely to produce a major new advance in AI, given U.S. superiority in empowering private-sector innovation. China is unlikely to catch up to or surpass the United States in producing AI breakthroughs, but it could lead in different arenas. China’s demonstrated record of refining technological capabilities produced elsewhere, and the Chinese Communist Party’s preference for deploying technology in society-wide use cases, mean that China could see greater success integrating today’s technologies into its workflows, weapons, and systems, whereas the United States could fall behind in end-to-end systems integration.
Technological innovation will continue to outpace traditional regulation
The U.S. government is already considered to be behind leading private developers in its understanding of AI. Several factors contribute: the relative slowness of government policymaking compared with the fast pace of technology development, the government’s inability to pay competitive salaries for scarce talent, and a lack of clarity about whether and how AI should be regulated at all, among others. Looking ahead, this dynamic is unlikely to change. Governments will continue playing catch-up with the private sector to understand and respond to the newest and most capable AI developments.
The rapid pace and diffusion of advanced AI technology will make multilateral regulation difficult. AI lacks the chokepoints of traditional models of nonproliferation, such as the global nuclear nonproliferation regime, meaning that it will be comparatively difficult for governments to control AI using traditional regulatory techniques.
What should government policymakers do to protect humanity?
The potential dangers posed by AI are many. At the extreme, they include the threat of human extinction, which could come about through an AI-enabled catastrophe, such as a well-designed virus that spreads easily, evades detection, and destroys our civilization. Less dire, but still deeply worrisome, is the threat to democratic governance if AIs gain power over people.[2] Even if AIs do not kill humans or overturn democracy, authoritarian regimes, terrorist groups, and organized crime groups could use AI to cause great harm by spreading disinformation and manipulating public opinion. Governments need to view the AI landscape as a regulatory training ground in preparation for the threats posed by even more-advanced AI capabilities, including the potential arrival of artificial general intelligence.
Governments should focus on strengthening resilience to AI threats
In addition to more-traditional regulatory practices, government policies on AI should focus on strategies of resilience to mitigate potential AI threats, because strategies aimed solely at denial will not work. AI cannot be contained through regulation, so the best policy will aim to minimize the harm that AI might do. This will probably be most critical in biosecurity,[3] but harm reduction also includes countering cybersecurity threats, strengthening democratic resilience, and developing emergency response options for a wide variety of threats from state, substate, and nonstate actors. Governments will either need to develop entirely new capabilities to put this policy into action or expand existing agencies, such as the Cybersecurity and Infrastructure Security Agency. Governments should take a more comprehensive approach to regulation beyond hardware controls, which will not be enough to mitigate harms in the long run.
Governments should look beyond traditional regulatory techniques to influence AI developments
Unlike other potentially dangerous technologies, AI lacks obvious inputs that could be regulated and controlled. Data and computing power are widely available to companies large and small, and no single entity can reliably predict from where the next revolutionary AI advance might originate. Consequently, governments should consider expanding their toolboxes beyond traditional regulatory techniques. Two creative mechanisms could be for governments to invest in establishing robust, publicly owned data sets for AI research or issue challenge grants that encourage socially beneficial uses for AI. New techniques could also include creating uniform liability rules to clarify when developers will be liable for harms involving AI, requirements for how AI should be assessed, and controls on whether certain highly capable models can be proliferated. Ultimately, governments could buy a seat at the table by providing economic incentives to companies in exchange for more influence in ensuring that AI is used for the good of all.
Governments should continue support for innovation
U.S. superiority in AI is largely the result of its superiority in innovation. To maintain this lead, the U.S. government should continue to support innovation by funding national AI resources. Although AI development at the frontier is led by private-sector companies with vast computing resources, many experts believe that the next AI breakthrough could stem from smaller models with novel architectures.[4] Academic institutions have led on many of the theoretical developments that made the existing generation of AI possible. Stimulating the academic community with even modest resources could build on this legacy and yield significant AI improvements.
Governments should partner with the private sector to improve risk assessments
Given the likely widespread proliferation of advanced AI capabilities to private- and public-sector actors and to well-resourced individuals, governments should work closely with leading private-sector entities to develop advanced forecasting tools, wargames, and strategic plans for dealing with the wide variety of unexpected AI-enabled catastrophic events that experts anticipate.
Article link: https://www.rand.org/pubs/perspectives/PEA3034-1.html
Notes
- [1] Tai Ming Cheung and Thomas G. Mahnken, The Decisive Decade: United States-China Competition in Defense Innovation and Defense Industrial Policy in and Beyond the 2020s, Center for Strategic and Budgetary Assessments, May 22, 2023.
- [2] Yoshua Bengio, “AI and Catastrophic Risk,” Journal of Democracy, Vol. 34, No. 4, October 2023.
- [3] Christopher Mouton, Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach, RAND Corporation, RR-A2977-1, 2023.
- [4] Will Knight, “OpenAI’s CEO Says the Age of Giant AI Models Is Already Over,” Wired, April 17, 2023.
Fears about potential future existential risk are blinding us to the fact that AI systems are already hurting people here and now.
October 30, 2023

This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited.
The term “x-risk” is used as a shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of the lethal dangers, this isn’t because I believe AI systems by themselves pose an existential risk as superintelligent agents.
AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.
One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.
I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.
Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.
Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI and whether we could address them in ways that would also help create a future where the burdens of AI did not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now.
When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being “excoded.” You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.
This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond the academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.
How do powerful generative AI systems like ChatGPT work, and what makes them different from other types of artificial intelligence?
Adam Zewe | MIT News
Publication Date: November 9, 2023

A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of those headlines may actually have been written by generative AI, like OpenAI’s ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that seems to have been written by a human.
But what do people really mean when they say “generative AI”?
Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.
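To make the distinction concrete, here is a minimal sketch of such a predictive model, using scikit-learn with synthetic stand-in data; the features and numbers are illustrative, not drawn from any real lending dataset.

```python
# A minimal sketch of a "predictive" machine-learning model: a
# classifier trained on labeled examples to predict loan default.
# The data are synthetic; a real system would train on millions of
# historical records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features: [income, debt_ratio]; label: 1 = defaulted.
X = rng.normal(size=(1000, 2))
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The model outputs a prediction about new data; it does not create
# new data itself, which is what distinguishes it from generative AI.
print(model.predict_proba([[0.2, 1.5]]))  # [P(no default), P(default)]
```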
Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.
“When it comes to the actual machinery underlying generative AI and other types of AI, the distinctions can be a little bit blurry. Oftentimes, the same algorithms can be used for both,” says Phillip Isola, an associate professor of electrical engineering and computer science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
And despite the hype that came with the release of ChatGPT and its counterparts, the technology itself isn’t brand new. These powerful machine-learning models draw on research and computational advances that go back more than 50 years.
An increase in complexity
An early example of generative AI is a much simpler model known as a Markov chain. The technique is named for Andrey Markov, a Russian mathematician who in 1906 introduced this statistical method to model the behavior of random processes. In machine learning, Markov models have long been used for next-word prediction tasks, like the autocomplete function in an email program.
In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words. But because these simple models can only look back that far, they aren’t good at generating plausible text, says Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, who is also a member of CSAIL and the Institute for Data, Systems, and Society (IDSS).
“We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models,” he explains.
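As a toy illustration, the sketch below builds a word-level Markov model with one word of context from an invented corpus and samples new text from it; real autocomplete systems use richer statistics, but the mechanism is the same.

```python
# A toy word-level Markov model: the next word depends only on the
# previous word, so the model's "memory" is one word deep.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which words follow each word in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, n_words=8):
    word, out = start, [start]
    for _ in range(n_words):
        if word not in transitions:
            break  # dead end: no observed successor
        word = random.choice(transitions[word])  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g., "the cat slept on the mat and the cat"
```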
Just a few years ago, researchers tended to focus on finding a machine-learning algorithm that makes the best use of a specific dataset. But that focus has shifted a bit, and many researchers are now using larger datasets, perhaps with hundreds of millions or even billions of data points, to train models that can achieve impressive results.
The base models underlying ChatGPT and similar systems work in much the same way as a Markov model. But one big difference is that ChatGPT is far larger and more complex, with billions of parameters. And it has been trained on an enormous amount of data — in this case, much of the publicly available text on the internet.
In this huge corpus of text, words and sentences appear in sequences with certain dependencies. This recurrence helps the model understand how to cut text into statistical chunks that have some predictability. It learns the patterns of these blocks of text and uses this knowledge to propose what might come next.
More powerful architectures
While bigger datasets are one catalyst that led to the generative AI boom, a variety of major research advances also led to more complex deep-learning architectures.
In 2014, a machine-learning architecture known as a generative adversarial network (GAN) was proposed by researchers at the University of Montreal. GANs use two models that work in tandem: One learns to generate a target output (like an image) and the other learns to discriminate true data from the generator’s output. The generator tries to fool the discriminator, and in the process learns to make more realistic outputs. The image generator StyleGAN is based on these types of models.
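The sketch below shows that two-model dynamic in PyTorch, with a toy one-dimensional Gaussian "dataset" standing in for real images; the layer sizes and learning rates are arbitrary choices for illustration, not a recipe for a production GAN.

```python
# A bare-bones GAN: the generator G maps random noise to samples, and
# the discriminator D tries to tell real data (here, draws from
# N(3, 0.5)) from G's output. Each model is trained against the other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "true" data
    fake = G(torch.randn(64, 8))           # generated data

    # Discriminator update: label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make D call the fakes "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples drift toward 3
```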
Diffusion models were introduced a year later by researchers at Stanford University and the University of California at Berkeley. By iteratively refining their output, these models learn to generate new data samples that resemble samples in a training dataset, and have been used to create realistic-looking images. A diffusion model is at the heart of the text-to-image generation system Stable Diffusion.
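Here is a similarly minimal sketch of the diffusion recipe on toy one-dimensional data: a fixed schedule gradually adds noise to training samples, a small network learns to predict that noise, and sampling runs the process in reverse. The schedule, network, and step counts are illustrative assumptions, not tuned values.

```python
# Toy denoising diffusion: learn to predict the noise added to samples
# from N(2, 0.3), then generate new samples by iterative denoising.
import torch
import torch.nn as nn

T = 50
betas = torch.linspace(1e-4, 0.2, T)
alpha_bar = torch.cumprod(1 - betas, dim=0)  # cumulative noise schedule

# The network sees a noisy sample plus a (scaled) timestep and
# predicts the noise that was mixed in.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x0 = torch.randn(128, 1) * 0.3 + 2.0         # "training data"
    t = torch.randint(0, T, (128,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # forward noising
    pred = net(torch.cat([xt, t.unsqueeze(1) / T], dim=1))
    loss = ((pred - eps) ** 2).mean()            # learn to predict the noise
    opt.zero_grad(); loss.backward(); opt.step()

# Reverse process: start from pure noise and iteratively denoise.
with torch.no_grad():
    x = torch.randn(5, 1)
    for t in reversed(range(T)):
        eps_hat = net(torch.cat([x, torch.full((5, 1), t / T)], dim=1))
        ab, b = alpha_bar[t], betas[t]
        x = (x - b / (1 - ab).sqrt() * eps_hat) / (1 - b).sqrt()
        if t > 0:
            x = x + b.sqrt() * torch.randn_like(x)
print(x.squeeze())  # samples should cluster near 2
```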
In 2017, researchers at Google introduced the transformer architecture, which has been used to develop large language models, like those that power ChatGPT. In natural language processing, a transformer encodes each word in a corpus of text as a token and then generates an attention map, which captures each token’s relationships with all other tokens. This attention map helps the transformer understand context when it generates new text.
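The attention map itself is a small computation: each token is compared with every other token, and a softmax turns the comparison scores into weights. The sketch below implements that scaled dot-product attention with random placeholder weights and dimensions.

```python
# Scaled dot-product attention over a 4-token sequence: attn[i, j]
# measures how strongly token i attends to token j.
import torch
import torch.nn.functional as F

seq_len, d = 4, 16                    # 4 tokens, 16-dim embeddings
x = torch.randn(seq_len, d)           # token embeddings (placeholders)

Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))  # untrained weights
Q, K, V = x @ Wq, x @ Wk, x @ Wv      # queries, keys, values

attn = F.softmax(Q @ K.T / d ** 0.5, dim=-1)  # the 4x4 attention map
out = attn @ V                        # context-aware token representations
print(attn)
```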
These are only a few of many approaches that can be used for generative AI.
A range of applications
What all of these approaches have in common is that they convert inputs into a set of tokens, which are numerical representations of chunks of data. As long as your data can be converted into this standard token format, then in theory, you could apply these methods to generate new data that look similar.
“Your mileage might vary, depending on how noisy your data are and how difficult the signal is to extract, but it is really getting closer to the way a general-purpose CPU can take in any kind of data and start processing it in a unified way,” Isola says.
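As a toy illustration of that token format, the snippet below maps the words of a sentence to integer IDs; production systems instead use learned subword vocabularies, such as byte-pair encoding, but the principle of turning chunks of data into numbers is the same.

```python
# Map each distinct word to an integer ID: a whole-word stand-in for
# the subword tokenizers used by real models.
text = "generative models turn old tokens into new tokens"
vocab = {word: i for i, word in enumerate(sorted(set(text.split())))}
tokens = [vocab[word] for word in text.split()]
print(tokens)  # a list of integers a model can ingest
```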
This opens up a huge array of applications for generative AI.
For instance, Isola’s group is using generative AI to create synthetic image data that could be used to train another intelligent system, such as by teaching a computer vision model how to recognize objects.
Jaakkola’s group is using generative AI to design novel protein structures or valid crystal structures that specify new materials. The same way a generative model learns the dependencies of language, if it’s shown crystal structures instead, it can learn the relationships that make structures stable and realizable, he explains.
But while generative models can achieve incredible results, they aren’t the best choice for all types of data. For tasks that involve making predictions on structured data, like the tabular data in a spreadsheet, generative AI models tend to be outperformed by traditional machine-learning methods, says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT and a member of IDSS and of the Laboratory for Information and Decision Systems.
“The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines,” says Shah.
Raising red flags
Generative AI chatbots are now being used in call centers to field questions from human customers, but this application underscores one potential red flag of implementing these models — worker displacement.
In addition, generative AI can inherit and proliferate biases that exist in training data, or amplify hate speech and false statements. The models have the capacity to plagiarize, and can generate content that looks like it was produced by a specific human creator, raising potential copyright issues.
On the other side, Shah proposes that generative AI could empower artists, who could use generative tools to help them make creative content they might not otherwise have the means to produce.
In the future, he sees generative AI changing the economics in many disciplines.
One promising future direction Isola sees for generative AI is its use for fabrication. Instead of having a model make an image of a chair, perhaps it could generate a plan for a chair that could be produced.
He also sees future uses for generative AI systems in developing more generally intelligent AI agents.
“There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We have the ability to think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one of the tools that will empower agents to do that, as well,” Isola says.
Article link: https://news.mit.edu/2023/explained-generative-ai-1109
RELATED LINKS
- Tommi Jaakkola
- Phillip Isola
- Devavrat Shah
- Computer Science and Artificial Intelligence Laboratory
- Laboratory for Information and Decision Systems
- Institute for Data, Systems, and Society
- Department of Electrical Engineering and Computer Science
- School of Engineering
- MIT Schwarzman College of Computing

Join the #FEHRM on November 14 for The State of the Federal EHR (formerly the FEHRM Industry Roundtable). The two-hour virtual event covers the current and future state of the federal electronic health record (EHR), health information technology, and health information exchange, and highlights the progress of the FEHRM and its federal partners in implementing the #federalEHR and related capabilities. The meeting will feature discussions focused on achieving actionable insights to enhance the delivery of health care for Service members, Veterans, and other beneficiaries. Learn more and register here before November 10: https://lnkd.in/gpCBKSwR.
Article link: https://www.linkedin.com/posts/fehrm_fehrm-federalehr-activity-7127789754833666048-ZZwb
IBM is dedicating $500 million to invest in generative AI startups focused on business customers.
Why it matters: IBM is the latest tech giant to jump into the AI investing race with a bright sign advertising that its VC arm is open for business.
What they’re saying: “If you look at IBM over the last three years, we’re a dramatically different company than we were three years ago,” Rob Thomas, IBM senior vice president of software and chief commercial officer, told Axios, when asked why the company is announcing an allocation to investing in AI startups.
- That is: like other big companies, it’s telegraphing a message to startups — that it’s not the IBM of yesteryear.
Details: The company plans to invest in startups across stages, with no set target number of annual investments or capital deployment timeline, Thomas said.
- The company has already invested in Hugging Face’s recent Series D funding round, and in HiddenLayer’s Series A round.
- IBM is particularly interested in startups focused on tools for specific verticals (like healthcare) or on a specific business process — as long as the startup doesn’t compete too much with IBM’s own businesses.
- IBM also says it will avoid investing in companies that are direct competitors to startups they’ve already backed.
Between the lines: “One thing that’s come up this year: everybody’s done a lot of experimentation but not a lot of ROI,” says Thomas of the feedback IBM has heard from its clients about AI tools so far.
- Naturally, he expects that more AI companies will shift to catering to business users as they seek sustainable business models.
The intrigue: Thomas predicts that unlike a number of prior technological shifts, this one may be led by companies and markets outside the U.S., though he maintains that he looks for investments everywhere.
Article link: https://www.axios.com/2023/11/07/ibm-enterprise-ai-venture-fund


This week, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the United States takes action to realize the tremendous promise of AI while managing its risks, the federal government will lead by example and provide a model for the responsible use of the technology. As part of this commitment, today, ahead of the UK AI Safety Summit, Vice President Harris will announce that the Office of Management and Budget (OMB) is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.
Every day, the federal government makes decisions and takes actions that have profound impacts on the lives of Americans. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society. OMB’s proposed guidance builds on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework by mandating a set of minimum evaluation, monitoring, and risk mitigation practices derived from these frameworks and tailoring them to the context of the federal government. In particular, the guidance provides direction to agencies across three pillars:
Strengthening AI Governance
To improve coordination, oversight, and leadership for AI, the draft guidance would direct federal departments and agencies to:
- Designate Chief AI Officers, who would have the responsibility to advise agency leadership on AI, coordinate and track the agency’s AI activities, advance the use of AI in the agency’s mission, and oversee the management of AI risks.
- Establish internal mechanisms for coordinating the efforts of the many existing officials responsible for issues related to AI. As part of this, large agencies would be required to establish AI Governance Boards, chaired by the Deputy Secretary or equivalent and vice-chaired by the Chief AI Officer.
- Expand reporting on the ways agencies use AI, including providing additional detail on AI systems’ risks and how the agency is managing those risks.
- Publish plans for the agency’s compliance with the guidance.
Advancing Responsible AI Innovation
To expand and improve the responsible application of AI to the agency’s mission, the draft guidance would direct federal agencies to:
- Develop an agency AI strategy, covering areas for future investment as well as plans to improve the agency’s enterprise AI infrastructure, its AI workforce, its capacity to successfully develop and use AI, and its ability to govern AI and manage its risks.
- Remove unnecessary barriers to the responsible use of AI, including those related to insufficient information technology infrastructure, inadequate data and sharing of data, gaps in the agency’s AI workforce and workforce practices, and cybersecurity approval processes that are poorly suited to AI systems.
- Explore the use of generative AI in the agency, with adequate safeguards and oversight mechanisms.
Managing Risks from the Use of AI
To ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the draft guidance would:
- Mandate the implementation of specific safeguards for uses of AI that impact the rights and safety of the public. These safeguards include conducting AI impact assessments and independent evaluations; testing the AI in a real-world context; identifying and mitigating factors contributing to algorithmic discrimination and disparate impacts; monitoring deployed AI; sufficiently training AI operators; ensuring that AI advances equity, dignity, and fairness; consulting with affected groups and incorporating their feedback; notifying and consulting with the public about the use of AI and their plans to achieve consistency with the proposed policy; notifying individuals potentially harmed by a use of AI and offering avenues for remedy; and more.
- Define uses of AI that are presumed to impact rights and safety, including many uses involved in health, education, employment, housing, federal benefits, law enforcement, immigration, child welfare, transportation, critical infrastructure, and safety and environmental controls.
- Provide recommendations for managing risk in federal procurement of AI. After finalization of the proposed guidance, OMB will also develop a means to ensure that federal contracts align with its recommendations, as required by the Advancing American AI Act and President Biden’s AI Executive Order of October 30, 2023.
AI is already helping the government better serve the American people, including by improving health outcomes, addressing climate change, and protecting federal agencies from cyber threats. In 2023, federal agencies identified over 700 ways they use AI to advance their missions, and this number is only likely to grow. When AI is used in agency functions, the public deserves assurance that the government will respect their rights and protect their safety.
Some examples of where AI has already been successfully deployed by the Federal government include:
- Department of Health and Human Services, where AI is used to predict infectious diseases and assist in preparing for potential pandemics, as well as anticipate and mitigate prescription drug shortages and supply chain issues.
- Department of Energy, where AI is used to predict natural disasters and preemptively prepare for recoveries.
- Department of Commerce, where AI is used to provide timely and actionable notifications to keep people safe from severe weather events.
- National Aeronautics and Space Administration, where AI is used to assist in the monitoring of Earth’s environment, which aids in the safe execution of mission planning.
- Department of Homeland Security, where AI is used to assist cyber forensic specialists to detect anomalies and potential threats in federal civilian networks.
The draft guidance takes a risk-based approach to managing AI harms to avoid unnecessary barriers to government innovation while ensuring that in higher-risk contexts, agencies follow a set of practices to strengthen protections for the public. AI is increasingly common in modern life, and not all uses of AI are equally risky. Many are benign, such as auto-correcting text messages and noise-cancelling headphones. By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public—safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation—the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation.
This announcement is the latest step by the Biden-Harris Administration to advance the safe, secure, and trustworthy development and use of AI, and it is a major milestone for implementing President Biden’s AI Executive Order. The proposed guidance would establish the specific leadership, milestones, and transparency mechanisms to drive and track implementation of these practices. With the current rapid pace of technological development, bold leadership in AI is needed. With this draft guidance, the government is demonstrating that it can lead in AI and ensure that the technology benefits all.
Make your voice heard
To help ensure public trust in the applications of AI, OMB is soliciting public comment on the draft guidance until December 5, 2023.
Learn more
Read the draft guidance: WH.gov
Submit a public comment: regulations.gov
See the full scope of AI actions from the Biden-Harris Administration: AI.gov
Quick guide on submitting public comments: Link to PDF

Task Force Lima continues to gain momentum across a variety of pursuits in its ambitious, 18-month plan to ensure the Pentagon can responsibly adopt, implement and secure powerful, still-maturing generative artificial intelligence technologies.
Department of Defense leadership formed that new hub in August within the Chief Digital and AI Office’s (CDAO) Algorithmic Warfare Directorate. Its ultimate mission is to set and steer the enterprise’s path forward with the emerging field of generative AI and associated large language models, which yield (convincing but not always correct) software code, images and other media following human prompts.
Such capabilities hold a lot of promise, but also complex challenges for the DOD — including many that remain unseen.
“Task Force Lima has three phases: the ‘learn phase,’ an ‘accelerate phase’ and a ‘guide phase.’ The ‘learn phase’ is where we are performing, for lack of a better word, inventories of what is the demand signal for generative AI across the department. That includes projects that are ongoing, to projects that we think should go forward, to projects that we would like to learn more about. And so, we submitted that as an inquiry to the department — and we’ve received a volume of use cases around 180 that go into many different categories and into many different mission areas,” Task Force Lima Mission Commander Navy Capt. M. Xavier Lugo told DefenseScoop.
In a recent interview, the 28-year Naval officer-turned AI acceleration lead, briefed DefenseScoop about what’s to come with those under-review use cases, a recent “Challenge Day,” and future opportunities and events the task force is planning.
180-plus instances
During his first interview with DefenseScoop back in late September, Lugo confirmed that the task force would be placing an explicit emphasis on enabling generative AI in “low-risk mission areas.”
“That is still the case. However, some of what has evolved from that is they’re not all theoretical. For some of these use cases, there are units that have already started working with those particular technologies and they’re integrating [them] into their workflows. That’s when we’re going to switch from the ‘learn phase’ into the ‘accelerate phase,’ which is where we will partner with the use cases that are ongoing,” Lugo told DefenseScoop in the most recent interview.
At a Pentagon press briefing about the state of AI last week, Deputy Defense Secretary Kathleen Hicks confirmed that the department launched Task Force Lima because it is “mindful of the potential risks and benefits offered by large language models” (LLMs) and other associated generative AI tools.
“Candidly, most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our DOD ethical AI principles — which is required for responsible operational use. But we have found over 180 instances where such generative AI tools could add value for us with oversight like helping to debug and develop software faster, speeding analysis of battle damage assessments, and verifiably summarizing texts from both open-source and classified datasets,” Hicks told reporters.
The deputy secretary noted that “not all of these use cases” that the task force is exploring are notional.
Some Defense Department components started looking at generative AI even before ChatGPT and similar products “captured the world’s attention,” she said. And a few department insiders have “even made their own models,” by isolating and fine-tuning foundational models for a specific task with clean, reliable and secure DOD data.
“While we have much more evaluating to do, it’s possible some might make fewer factual errors than publicly available tools — in part because, with effort, they can be designed to cite their sources clearly and proactively. Although it would be premature to call most of them operational, it’s true that some are actively being experimented with and even used as part of people’s regular workflows — of course, with appropriate human supervision and judgment — not just to validate, but also to continue improving them,” Hicks said.
Lugo offered an example of those more non-theoretical generative AI use cases that have already been maturing within DOD.
“As you can imagine, the military has a lot of policies and publications, [tactics, techniques, and procedures, or TTPs], and all sorts of documentation out there for particular areas — let’s say in the human resources area, for example. So, one of those projects would be how do I interact with all those publications and policies that are out there to answer questions that a particular person may have on how to do a procedure or a policy?” he told DefenseScoop.
Among the many responsibilities that CDAO leadership has charged Task Force Lima with is developing acceptability criteria and a maturity model for each use case, or group of use cases, involving generative AI.
“So, if we say we need an acceptability criteria of a particular value for a capability of summarization for LLMs, let’s say just as an example, then we need a model that matches that and that has that type of maturity in that particular capability. This is analogous to the self-driving vehicle maturity models and how you can have a different level of maturity in a self-driving vehicle for different road conditions. So, in our case the road conditions will be our acceptability criteria, and the model being able to meet that acceptability will be that maturity model,” Lugo explained.
‘Put me in, coach!’
Soon, the Lima team will start collecting information needed to inform its specific deliverables, including new test-and-evaluation frameworks, mitigation techniques, risk assessments and more.
“That output that we get during the ‘accelerate phase’ will be the input for the ‘guide phase,’ which is our last phase where we compile the deliverables to the CDAO Council so they can then make a determination into policy,” Lugo explained.
The task force does not have authority to officially publish guidance on generative AI deployments in DOD, but members previously made recommendations to the CDAO’s leadership that were approved to advise defense components in their efforts. The task force drafted that interim LLM guidance, but due to its classification level it has not been disseminated widely.
“That guidance [included that] any service can publish its own guidance that is more restrictive than the one that [the Office of the Secretary of Defense] publishes,” Lugo said.
The Navy offered its version of interim guardrails on generative AI and LLMs in September. Shortly after that, the Space Force transmitted a memo that put a temporary pause on guardians’ use of web-based generative AI tools like ChatGPT for its workforce — specifically citing data security concerns.
“Did I learn about the Space Force guidance before it went out? Yes. Would I have had any reason to try to modify that? No,” Lugo told DefenseScoop.
“Space Force — like any other service — has the right to pursue guidance that is even more restrictive than the guidance that is provided by the policy. So, I just want to be clear that they have autonomy to publish their own guidance. At Task Force Lima, we are coordinating with the services — and they understand our viewpoints, and we understand theirs, and there is no conflict on viewpoints here,” he added.
And although it might make sense for one military branch to ban certain uses on a non-permanent basis to address data and security concerns, Lugo noted that doesn’t mean the task force should not be cautiously experimenting with models that are publicly accessible, in order to learn more about them.
In his latest interview with DefenseScoop, the task force chief also stated that his team is “not trying to do this in a vacuum.”
“We are definitely not only working with DOD, but we are working with industry and academia — and actually any organization that is interested in generative AI, they can reach out to us. There’s plenty of work, and there’s plenty of areas of involvement,” Lugo said.
“Also, I want to make sure that just because we are interacting with industry, that doesn’t take us out of the industry-agnostic, technology-agnostic hat. I am always ensuring that we keep that, because that’s what keeps us as an honest broker of this technology,” he added.
Lugo is currently leading a core team of roughly 15 personnel. But he is also engaging with a still-growing “expanded team” of close to 500 points of contact associated with the task force’s activities and aims. To him, those officials are essentially on secondary duty, serving a support function for his unit.
“We’re getting more people interested. Now, those 500 people — I’ve got everything from people watching from the bleachers, to personnel saying, ‘Hey, put me in coach!’ So, I’ve got a broad spectrum,” Lugo said.
Nearly 250 people attended a recent “Challenge Day” that the CDAO hosted to connect with industry and academic partners about the challenges associated with implementing generative AI within the DOD.
“There’s a lot of interest in the area, but there’s not that many companies in it. So what we saw was that it’s not just the normal names that you would hear on a day to day basis — but there’s also a lot of companies interested in integrating models. There’s companies that are not necessarily known for LLMs or generative AI, but they are known for other types of integration in the data space and in the AI space. So that was good, because that means that there’s a good pool of talent that will be working on the challenges that we have submitted to industry,” Lugo said.
According to Lugo, the cadre has received more than 120 responses to the recent request for information released to the public to garner input on existing generative AI use cases and the critical technical obstacles that accompany its emergence.
The RFI is about learning “what are the insights out there, what are the approaches to solving these particular challenges that we have. And as we compile that information, we will then go ahead and do a more formal solicitation through the proper processes,” he said.
On Nov. 30, industry and academic partners will have an additional opportunity this year to meet with Task Force Lima at the CDAO Industry Day. And down the line during the CDAO’s first in-person symposium — which is set to take place Feb. 20-22 in Washington — an entire track will be dedicated to Task Force Lima and generative AI.
Attendee registration opened in October, and the office is now accepting submissions for potential speakers at that event.
“I’m very optimistic that the challenges that we have submitted will be addressed — and hopefully corrected — by some innovative techniques,” Lugo told DefenseScoop.
Article link: https://www.linkedin.com/posts/defensescoop_inside-task-force-limas-exploration-of-180-activity-7127423756221718528-byMQ

