healthcarereimagined

Envisioning healthcare for the 21st century

IBM Unveils watsonx.governance to Help Businesses & Governments Govern and Build Trust in Generative AI

Posted by timmreardon on 11/14/2023
Posted in: Uncategorized.

watsonx.governance helps organizations:

– Manage AI to meet upcoming safety and transparency regulations and policies worldwide – a “nutrition label” for AI

– Proactively detect and mitigate risk, monitoring for fairness, bias, drift, and new LLM metrics

– Manage, monitor, and govern AI models from IBM, open source communities, and other model providers

Nov 14, 2023

ARMONK, N.Y., Nov. 14, 2023 /PRNewswire/ — IBM (NYSE: IBM) today announced that watsonx.governance will be generally available in early December to help businesses shine a light on AI models and eliminate the mystery around the data going in and the answers coming out.

While generative AI, powered by large language models (LLMs) or foundation models, offers many use cases for businesses, it also poses new risks and complexities, ranging from training data scraped from corners of the internet that cannot be validated as fair and accurate to a lack of explainable outputs. Watsonx.governance provides organizations with the toolkit they need to manage risk, embrace transparency, and anticipate compliance with future AI-focused regulation.

As businesses today look to innovate with AI, deploying a mix of LLMs from tech providers and open source communities, watsonx enables them to manage, monitor, and govern models from wherever they choose.

“Company boards and CEOs are looking to reap the rewards from today’s more powerful AI models, but the risks due to a lack of transparency and inability to govern these models have been holding them back,” said Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software. “Watsonx.governance is a one-stop-shop for businesses that are struggling to deploy and manage both LLM and ML models, giving businesses the tools they need to automate AI governance processes, monitor their models, and take corrective action, all with increased visibility. Its ability to translate regulations into enforceable policies will only become more essential for enterprises as new AI regulation takes hold worldwide.”

IBM Consulting has also expanded its strategic expertise to help clients scale responsible AI with both automated model governance and organizational governance encompassing people, process, and technology from IBM and strategic partners. IBM consultants have deep skills in establishing AI ethics boards, organizational culture and accountability, training, regulatory and risk management, and mitigating cybersecurity threats, all using human-centric design.

Watsonx.governance is one of three software products in the IBM watsonx AI and data platform, along with a set of AI assistants, designed to help enterprises scale and accelerate the impact of AI. The platform includes the watsonx.ai next-generation enterprise studio for AI builders and the watsonx.data open, hybrid, and governed data store. The company also recently announced intellectual property protection for its IBM-developed watsonx models.

About IBM
IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. More than 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM’s long-standing commitment to trust, transparency, responsibility, inclusivity and service.

Visit www.ibm.com for more information.

Contacts:

Amy Angelini
alangeli@us.ibm.com

Zach Bishop
zachery.bishop@ibm.com

SOURCE IBM

Article link: https://newsroom.ibm.com/2023-11-14-IBM-Unveils-watsonx-governance-to-Help-Businesses-Governments-Govern-and-Build-Trust-in-Generative-AI?

DEPARTMENT OF DEFENSE Data, Analytics, and Artificial Intelligence Adoption Strategy.

Posted by timmreardon on 11/14/2023
Posted in: Uncategorized.

https://www.linkedin.com/posts/joweiler_dod-data-analytics-and-ai-adoption-strategy-activity-7130205134965370880-fwg0?

How Might AI Affect the Rise and Fall of Nations? – RAND

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.

by Barry Pavel, Ivana Ke, Michael Spirtas, James Ryseff, Lea Sabbag, Gregory Smith, Keller Scholl, Domenique Lumpkin

  • Related Topics: Artificial Intelligence, Emerging Technologies, Geopolitical Strategic Competition

Nations across the globe could see their power rise or fall depending on how they harness and manage the development of artificial intelligence (AI). Regardless of whether AI poses an existential risk to humanity, governments will need to develop new regulatory frameworks to identify, evaluate, and respond to the variety of AI-enabled challenges to come.

With the release of advanced forms of AI to the public early in 2023, public policy debates have rightly focused on such developments as the exacerbation of inequality, the loss of jobs, and the potential threat of human extinction if AI continues to evolve without effective guardrails. There has been less discussion about how AI might affect geopolitics and which actors might take the lead in the future development of AI or other advanced AI algorithms.

As AI continues to advance, geopolitics may never be the same. Humans organized in nation-states will have to work with another set of actors—AI-enabled machines—of equivalent or greater intelligence and, potentially, highly disruptive capabilities. In the age of geotechnopolitics, human identity and human perceptions of our roles in the world will be distinctly different; monumental scientific discoveries will emerge in ways that humans may not be able to comprehend. Consequently, the AI development path that ultimately unfolds will matter enormously for the shape and contours of the future world.

We outline several scenarios that illustrate how AI development could unfold in the near term, depending on who is in control. We held discussions with leading technologists, policymakers, and scholars spanning many sectors to generate our findings and recommendations. We presented these experts with the scenarios as a baseline to probe, reflect on, and critique. We sought to characterize the current trajectory of AI development and identify the most important factors for governing the evolution of this unprecedented technology.

Who could control the development of AI?

U.S. Companies Lead the Way

U.S. President Joe Biden appears virtually in a meeting with business and labor leaders about the CHIPS and Science Act, which aims to spur U.S. domestic chip and semiconductor manufacturing.

Photo by Jonathan Ernst/Reuters

The U.S. government continues to allow private corporations to develop AI without meaningfully regulating the technology or intervening in a way that changes those corporations’ behavior. This approach fits with the long-standing belief in the United States that the free market (and its profit-driven incentives) is the most effective mechanism to rapidly advance technologies like AI.[1] In this world, U.S. government personnel continue to lag behind engineers in the U.S. technology sector, both in their understanding of AI and in their ability to harness its power. Private corporations direct the investment of almost all research and development funding to improve AI, and the vast majority of U.S. technical talent continues to flock to Silicon Valley. The U.S. government seeks to achieve its policy goals by relying on the country’s innovators to develop new inventions that it could eventually purchase. In this world, the future relationship between the U.S. government and the technology sector looks much like the present: Companies engage in aggressive data-gathering of consumers, and social media continues to be a hotbed for disinformation and dissension.

U.S. Government Takeover

Top U.S. technology leaders, including Tesla chief executive officer (CEO) Elon Musk, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, and former Microsoft CEO Bill Gates take their seats for the start of a bipartisan AI Insight Forum, in September 2023. 

Photo by Leah Millis/Reuters

AI advances are proceeding at a rapid rate, and concerns about catastrophic consequences lead the U.S. government—potentially in coordination with like-minded allies—to seize control of AI development. The United States chooses to abandon its traditional light-handed approach to regulating information technology and software development and instead embarks on large-scale regulation and oversight. This results in a monopoly by the U.S. government and select partners (e.g., the United Kingdom) over AI computing resources, data centers, advanced algorithms, and talent through nationalization and comprehensive regulation. Similar to past U.S. government initiatives, such as the Apollo Program and the Manhattan Project, AI is developed under government authority rather than under the auspices of private companies. In the defense sector, this could lead to an arms race dynamic as other governments initiate AI development programs of their own for fear that they will be left behind in an AI-driven era. Across the instruments of power, such nationalization could also shift the balance of haves versus have-nots as other countries that fail to keep up with the transition see their economies suffer because of a lack of ability to develop AI and incorporate it into their workforces.

Chinese Surprise

A Baidu sign is seen at the 2023 World Artificial Intelligence Conference in Shanghai, China.

Photo by Aly Song/Reuters

Akin to a Sputnik moment, three Chinese organizations—Huawei, Baidu, and the Beijing Academy of Artificial Intelligence (BAAI)—announce a major AI breakthrough, taking the world by surprise. In this world, AI progress in China is initially overlooked and consistently downplayed by policymakers from advanced, democratic economies. Chinese companies, research institutes, and key government labs leapfrog ahead of foreign competitors, in part because of their improved ability to absorb vast amounts of government funding. State-of-the-art AI models also have been steadily advancing, leveraging a competitive data advantage from the country’s massive population. Finally, China’s military-civil fusion enables major actors across industry, academia, and government to share resources across a common AI infrastructure. BAAI benefits from strategic partnerships with industry leaders and access to significant computational resources, such as the Pengcheng Laboratory’s Pengcheng Cloudbrain-II. This combination of world-leading expertise and enormous computing power allows China to scale AI research at an unprecedented rate, leading to breakthroughs in transformative AI research that catch the world off guard. This leads to intense concerns from U.S. political and military leaders that China’s newfound AI capabilities will provide it with an asymmetric military advantage over the United States.

Great Power Public-Private Consortium

Canada’s Prime Minister Justin Trudeau, Japan’s Prime Minister Fumio Kishida, U.S. President Joe Biden, Germany’s Chancellor Olaf Scholz, Indonesia’s President Joko Widodo, Italy’s Prime Minister Giorgia Meloni, Citigroup’s CEO Jane Fraser, and European Commission President Ursula von der Leyen confer during the 2023 Group of Seven summit in Hiroshima, Japan.

Photo by Jonathan Ernst/Reuters

Across the world, robust partnerships among government, global industry, civil society organizations, academia, and research institutions support the rapid development and deployment of AI. These partnerships form a consortium that carries out multi-stakeholder project collaborations that access large-scale computational data and training, computing, and storage resources. Through funding from many governments, the consortium develops joint solutions and benchmarking efforts to evaluate, verify, and validate the trustworthiness, reliability, and robustness of AI systems. New and existing international government bodies, including the Abu Dhabi AI Council, rely on diverse participation and contributions to set standards for responsible AI use. The result is a healthy AI sector that supports economic growth that occurs concurrently with the development and evaluation of equitable, safe, and secure AI systems.

What have we learned?

Countries and companies will clash in new ways, and AI could become an actor, not just a factor

Countries and corporations have long competed for power. Big technology companies challenge governments in many of the old ways while adding new approaches to the mix. Similar to traditional multinational corporations, such companies reach across national boundaries; however, big tech companies also influence local communities in much more comprehensive and invasive ways because they touch consumers’ lives and gather data on their locations, activities, and habits. Big tech companies also affect national economies, domestic policy, and local politics in new ways because they influence the spread of information (and disinformation) and create new communities and subcultures. This has contributed to polarized populations in several countries and regime change in others. AI companies specifically enjoy access to vast investment funds and massive computing power, giving them additional advantages.

Although technology has often influenced geopolitics, the prospect of AI means that the technology itself could become a geopolitical actor. AI could have motives and objectives that differ considerably from those of governments and private companies. Humans’ inability to comprehend how AI “thinks” and our limited understanding of the second- and third-order effects of our commands or requests of AI are also very troubling. Humans have enough trouble interacting with one another. It remains to be seen how we will manage our relationships with one or more AIs.

We are entering an era of both enlightenment and chaos

The borderless nature of AI makes it hard to control or regulate. As computing power expands, models are optimized, and open-source frameworks mature, the ability to create highly impactful AI applications will become increasingly diffuse. In such a world, well-intentioned researchers and engineers will use this power to do wonderful things, ill-intentioned individuals will use it to do terrible things, and AIs could do both wonderful and terrible things. The net result is neither an unblemished era of enlightenment nor an unmitigated disaster, but a mix of both. Humanity will learn to muddle through and live with this game-changing technology, just as we have with so many other transformative technologies in the past.

The United States and China will lead in different ways

Although U.S. policymakers often worry that China will take the lead in a race for AI, the consensus from the experts with whom we consulted was that China is highly unlikely to produce a major new advance in AI because of U.S. superiority in empowering private sector innovation. China is unlikely to catch up to or surpass the United States in producing AI breakthroughs, but it could lead in different arenas. China’s demonstrated record of refining technological capabilities produced elsewhere and the Chinese Communist Party’s preferred approach of societal use cases mean that China could see greater success integrating today’s technologies into its workflows, weapons, and systems, whereas the United States could fall behind in end-to-end systems integration.

Technological innovation will continue to outpace traditional regulation

The U.S. government is already considered behind on understanding AI compared with leading private developers. This is due to several factors: the relative slowness of government policymaking compared with the fast pace of technology development, government’s inability to pay competitive salaries for scarce talent, and the lack of clarity on whether and how AI should be regulated at all, among others. Looking ahead, this dynamic is unlikely to change. Governments will continue playing catch-up with the private sector to understand and respond to the newest and most-capable AI developments.

The rapid pace and diffusion of advanced AI technology will make multilateral regulation difficult. AI lacks the chokepoints of traditional models of nonproliferation, such as the global nuclear nonproliferation regime, meaning that it will be comparatively difficult for governments to control AI using traditional regulatory techniques.

What should government policymakers do to protect humanity?

The potential dangers posed by AI are many. At the extreme, they include the threat of human extinction, which could come about by an AI-enabled catastrophe, such as a well-designed virus that spreads easily, evades detection, and destroys our civilization. Less dire, but considerably worrisome, is the threat to democratic governance if AIs gain power over people.[2] Even if AIs do not kill humans or overturn democracy, authoritarian regimes, terrorist groups, and organized crime groups could use AI to cause great harm by spreading disinformation and manipulating public opinion. Governments need to view the AI landscape as a regulatory training ground in preparation for the threats posed by even more-advanced AI capabilities, including the potential arrival of artificial general intelligence.

Governments should focus on strengthening resilience to AI threats

In addition to more-traditional regulatory practices, government policies on AI should focus on strategies of resilience to mitigate potential AI threats because strategies aimed solely at denial will not work. AI cannot be contained through regulation, so the best policy will aim to minimize the harm that AI might do. This will probably be most critical in biosecurity,[3] but harm reduction also includes countering cybersecurity threats, strengthening democratic resilience, and developing emergency response options for a wide variety of threats from state and sub- and non-state actors. Governments will either need to adopt entirely new capabilities to put this policy into action or expand existing agencies, such as the Cybersecurity and Infrastructure Security Agency. Governments should take a more comprehensive approach to regulation beyond hardware controls, which will not be enough to mitigate harms in the long run.

Governments should look beyond traditional regulatory techniques to influence AI developments

Unlike other potentially dangerous technologies, AI lacks obvious inputs that could be regulated and controlled. Data and computing power are widely available to companies large and small, and no single entity can reliably predict from where the next revolutionary AI advance might originate. Consequently, governments should consider expanding their toolboxes beyond traditional regulatory techniques. Two creative mechanisms could be for governments to invest in establishing robust, publicly owned data sets for AI research or issue challenge grants that encourage socially beneficial uses for AI. New techniques could also include creating uniform liability rules to clarify when developers will be liable for harms involving AI, requirements for how AI should be assessed, and controls on whether certain highly capable models can be proliferated. Ultimately, governments could buy a seat at the table by providing economic incentives to companies in exchange for more influence in ensuring that AI is used for the good of all.

Governments should continue support for innovation

U.S. superiority in AI is largely the result of its superiority in innovation. To maintain this lead, the U.S. government should continue to support innovation by funding national AI resources. Although AI development at the frontier is led by private-sector companies with vast computing resources, there is a belief among experts that the next AI breakthrough could stem from smaller models with novel architectures.[4] Academic institutions have led on many of the theoretical developments that made the existing generation of AI possible. Stimulating the academic community with modest resources could build off this legacy and result in significant AI improvements.

Governments should partner with the private sector to improve risk assessments

In light of the likely very widespread proliferation of advanced AI capabilities to private- and public-sector actors and well-resourced individuals, governments should work closely with leading private-sector entities to develop advanced forecasting tools, wargames, and strategic plans for dealing with what experts anticipate will be a wide variety of unexpected AI-enabled catastrophic events.

Article link: https://www.rand.org/pubs/perspectives/PEA3034-1.html?

Notes

  • [1] Tai Ming Cheung and Thomas G. Mahnken, The Decisive Decade: United States-China Competition in Defense Innovation and Defense Industrial Policy in and Beyond the 2020s, Center for Strategic and Budgetary Assessments, May 22, 2023.
  • [2] Yoshua Bengio, “AI and Catastrophic Risk,” Journal of Democracy, Vol. 34, No. 4, October 2023.
  • [3] Christopher Mouton, Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large Scale Biological Attacks: A Red-Team Approach, RAND Corporation, RR-A2977-1, 2023.
  • [4] Will Knight, “OpenAI’s CEO Says the Age of Giant AI Models Is Already Over,” Wired, April 17, 2023.

AI Academy – IBM

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.

https://www.ibm.com/think/ai-academy

We need to focus on the AI harms that already exist – MIT Technology Review

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.


Fears about potential future existential risk are blinding us to the fact that AI systems are already hurting people here and now.

October 30, 2023

This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited. 

The term “x-risk” is used as a shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of the lethal dangers, this isn’t because I believe AI systems by themselves pose an existential risk as superintelligent agents.

AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real. 

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity. 

I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.

Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.

Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI and whether we could address them in ways that would also help create a future where the burdens of AI did not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now. 

When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being “excoded.” You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.

This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond the academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/10/30/1082656/focus-on-existing-ai-harms/amp/

Explained: Generative AI

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.

How do powerful generative AI systems like ChatGPT work, and what makes them different from other types of artificial intelligence?

Adam Zewe | MIT News

Publication Date: November 9, 2023

A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of those headlines may actually have been written by generative AI, like OpenAI’s ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that seems to have been written by a human.

But what do people really mean when they say “generative AI?”

Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.

Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.

“When it comes to the actual machinery underlying generative AI and other types of AI, the distinctions can be a little bit blurry. Oftentimes, the same algorithms can be used for both,” says Phillip Isola, an associate professor of electrical engineering and computer science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

And despite the hype that came with the release of ChatGPT and its counterparts, the technology itself isn’t brand new. These powerful machine-learning models draw on research and computational advances that go back more than 50 years.

An increase in complexity

An early example of generative AI is a much simpler model known as a Markov chain. The technique is named for Andrey Markov, a Russian mathematician who in 1906 introduced this statistical method to model the behavior of random processes. In machine learning, Markov models have long been used for next-word prediction tasks, like the autocomplete function in an email program.

In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words. But because these simple models can only look back that far, they aren’t good at generating plausible text, says Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, who is also a member of CSAIL and the Institute for Data, Systems, and Society (IDSS).

“We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models,” he explains.
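
To make Jaakkola's point concrete, here is a minimal sketch of a first-order Markov text model of the kind described above. It is an illustration only, not the implementation behind any real autocomplete feature: the tiny training string and the generate helper are invented for this example.

```python
# Toy first-order Markov chain: the next word is sampled from the words
# observed to follow the previous word in a (very small) training text.
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

transitions = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    transitions[prev].append(nxt)          # record every word seen after `prev`

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:        # dead end: no observed follower
            break
        word = random.choice(transitions[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))   # e.g. "the cat slept on the mat and the cat"
```

Because each step looks back only one word, the output is locally plausible but quickly loses any larger coherence, which is exactly the limitation Jaakkola describes.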

Just a few years ago, researchers tended to focus on finding a machine-learning algorithm that makes the best use of a specific dataset. But that focus has shifted a bit, and many researchers are now using larger datasets, perhaps with hundreds of millions or even billions of data points, to train models that can achieve impressive results.

The base models underlying ChatGPT and similar systems work in much the same way as a Markov model. But one big difference is that ChatGPT is far larger and more complex, with billions of parameters. And it has been trained on an enormous amount of data — in this case, much of the publicly available text on the internet.

In this huge corpus of text, words and sentences appear in sequences with certain dependencies. This recurrence helps the model understand how to cut text into statistical chunks that have some predictability. It learns the patterns of these blocks of text and uses this knowledge to propose what might come next.

More powerful architectures

While bigger datasets are one catalyst that led to the generative AI boom, a variety of major research advances also led to more complex deep-learning architectures.

In 2014, a machine-learning architecture known as a generative adversarial network (GAN) was proposed by researchers at the University of Montreal. GANs use two models that work in tandem: One learns to generate a target output (like an image) and the other learns to discriminate true data from the generator’s output. The generator tries to fool the discriminator, and in the process learns to make more realistic outputs. The image generator StyleGAN is based on these types of models.  
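
The generator-discriminator interplay can be sketched in a few dozen lines. The toy example below uses PyTorch (a framework chosen for convenience, not one the article specifies) and learns to imitate a one-dimensional Gaussian rather than images; the layer sizes, learning rates, and target distribution are arbitrary placeholders.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data drawn from N(3, 0.5),
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" training data
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for its fakes.
    g_opt.zero_grad()
    g_loss = bce(discriminator(generator(noise)), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach().squeeze())  # samples should cluster near 3.0
```

The same adversarial dynamic, scaled up to much larger networks and image data, is what lets systems like StyleGAN produce realistic pictures.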

Diffusion models were introduced a year later by researchers at Stanford University and the University of California at Berkeley. By iteratively refining their output, these models learn to generate new data samples that resemble samples in a training dataset, and have been used to create realistic-looking images. A diffusion model is at the heart of the text-to-image generation system Stable Diffusion.

In 2017, researchers at Google introduced the transformer architecture, which has been used to develop large language models, like those that power ChatGPT. In natural language processing, a transformer encodes each word in a corpus of text as a token and then generates an attention map, which captures each token’s relationships with all other tokens. This attention map helps the transformer understand context when it generates new text.
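
The attention map itself comes down to a small amount of linear algebra. The sketch below assumes the standard scaled dot-product formulation, softmax(QKᵀ / √d)·V; the random token vectors and projection matrices are placeholders standing in for learned embeddings and weights.

```python
# Scaled dot-product self-attention over a handful of placeholder "token" vectors.
import numpy as np

rng = np.random.default_rng(0)
num_tokens, d_model = 5, 16
x = rng.normal(size=(num_tokens, d_model))       # one vector per token

W_q = rng.normal(size=(d_model, d_model))        # stand-ins for learned projections
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)              # raw token-to-token affinities

scores -= scores.max(axis=-1, keepdims=True)     # numerical stability before softmax
attention_map = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

contextualized = attention_map @ V               # each token becomes a weighted mix of values
print(attention_map.round(2))                    # each row sums to 1: one distribution per token
```

Each row of the printed matrix is the attention one token pays to every other token, which is the "relationships with all other tokens" the article refers to.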

These are only a few of many approaches that can be used for generative AI.

A range of applications

What all of these approaches have in common is that they convert inputs into a set of tokens, which are numerical representations of chunks of data. As long as your data can be converted into this standard token format, then in theory you could apply these methods to generate new data that look similar.

“Your mileage might vary, depending on how noisy your data are and how difficult the signal is to extract, but it is really getting closer to the way a general-purpose CPU can take in any kind of data and start processing it in a unified way,” Isola says.
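
As a toy illustration of that conversion step, the snippet below maps each new word to the next unused integer ID. Real systems use subword tokenizers trained on large corpora; this whitespace-splitting version and the example sentences are simplifications made purely for demonstration.

```python
# Toy word-level tokenizer: every distinct word gets an integer ID, and repeated
# words reuse the ID they were assigned the first time.
vocab = {}

def tokenize(text):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)   # assign the next unused integer ID
        ids.append(vocab[word])
    return ids

print(tokenize("generative models turn data into tokens"))  # [0, 1, 2, 3, 4, 5]
print(tokenize("models turn tokens into data"))             # [1, 2, 5, 4, 3]
```

Once text, images, or other data are expressed as sequences of IDs like these, the same generative machinery can, in principle, be pointed at any of them.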

This opens up a huge array of applications for generative AI.

For instance, Isola’s group is using generative AI to create synthetic image data that could be used to train another intelligent system, such as by teaching a computer vision model how to recognize objects.

Jaakkola’s group is using generative AI to design novel protein structures or valid crystal structures that specify new materials. The same way a generative model learns the dependencies of language, if it’s shown crystal structures instead, it can learn the relationships that make structures stable and realizable, he explains.

But while generative models can achieve incredible results, they aren’t the best choice for all types of data. For tasks that involve making predictions on structured data, like the tabular data in a spreadsheet, generative AI models tend to be outperformed by traditional machine-learning methods, says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT and a member of IDSS and of the Laboratory for Information and Decision Systems.

“The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines,” says Shah.

Raising red flags

Generative AI chatbots are now being used in call centers to field questions from human customers, but this application underscores one potential red flag of implementing these models — worker displacement.

In addition, generative AI can inherit and proliferate biases that exist in training data, or amplify hate speech and false statements. The models have the capacity to plagiarize, and can generate content that looks like it was produced by a specific human creator, raising potential copyright issues.

On the other side, Shah proposes that generative AI could empower artists, who could use generative tools to help them make creative content they might not otherwise have the means to produce.

In the future, he sees generative AI changing the economics in many disciplines.

One promising future direction Isola sees for generative AI is its use for fabrication. Instead of having a model make an image of a chair, perhaps it could generate a plan for a chair that could be produced.

He also sees future uses for generative AI systems in developing more generally intelligent AI agents.

“There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We have the ability to think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one of the tools that will empower agents to do that, as well,” Isola says.

Article link: https://news.mit.edu/2023/explained-generative-ai-1109

RELATED LINKS

  • Tommi Jaakkola
  • Phillip Isola
  • Devavrat Shah
  • Computer Science and Artificial Intelligence Laboratory
  • Laboratory for Information and Decision Systems
  • Institute for Data, Systems, and Society
  • Department of Electrical Engineering and Computer Science
  • School of Engineering
  • MIT Schwarzman College of Computing

The State of the Federal EHR (formerly FEHRM Industry Roundtable)

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.

Join the #FEHRM on November 14 for The State of the Federal EHR (formerly the FEHRM Industry Roundtable). The two-hour virtual event covers the current and future state of the federal electronic health record (EHR), health information technology, and health information exchange, and highlights the progress of the FEHRM and its federal partners in implementing the #federalEHR and related capabilities. The meeting will feature discussions focused on achieving actionable insights to enhance the delivery of health care for Service members, Veterans and other beneficiaries. Learn more and register here before November 10: https://lnkd.in/gpCBKSwR.

Article link: https://www.linkedin.com/posts/fehrm_fehrm-federalehr-activity-7127789754833666048-ZZwb?

Exclusive: IBM debuts $500 million enterprise AI venture fund – Axios

Posted by timmreardon on 11/12/2023
Posted in: Uncategorized.

IBM is dedicating $500 million to invest in generative AI startups focused on business customers.

Why it matters: IBM is the latest tech giant to jump into the AI investing race with a bright sign advertising that its VC arm is open for business.

What they’re saying: “If you look at IBM over the last three years, we’re a dramatically different company than we were three years ago,” Rob Thomas, IBM senior vice president of software and chief commercial officer, told Axios, when asked why the company is announcing an allocation to investing in AI startups.

  • That is: like other big companies, it’s telegraphing a message to startups — that it’s not the IBM of yesteryear. 

Details: The company plans to invest in startups across stages, with no set target number of annual investments or capital deployment timeline, Thomas said.

  • The company has already invested in Hugging Face’s recent Series D funding round, and in HiddenLayer’s Series A round.
  • IBM is particularly interested in startups focused on tools for specific verticals (like healthcare) or on a specific business process — as long as the startup doesn’t compete too much with IBM’s own businesses. 
  • IBM also says it will avoid investing in companies that are direct competitors to startups they’ve already backed.

Between the lines: “One thing that’s come up this year: everybody’s done a lot of experimentation but not a lot of ROI,” says Thomas of the feedback IBM has heard from its clients about AI tools so far. 

  • Naturally, he expects that more AI companies will shift to catering to business users as they seek sustainable business models. 

The intrigue: Thomas predicts that unlike a number of prior technological shifts, this one may be led by companies and markets outside the U.S., though he maintains that he looks for investments everywhere.

Article link: https://www.axios.com/2023/11/07/ibm-enterprise-ai-venture-fund

Go deeper: 

  • Corporate VCs ride AI startup wave
  • Salesforce revs its VC engine in the AI race

Happy Veterans Day

Posted by timmreardon on 11/11/2023
Posted in: Uncategorized.

OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence

Posted by timmreardon on 11/09/2023
Posted in: Uncategorized.

This week, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the United States takes action to realize the tremendous promise of AI while managing its risks, the federal government will lead by example and provide a model for the responsible use of the technology. As part of this commitment, today, ahead of the UK Safety Summit, Vice President Harris will announce that the Office of Management and Budget (OMB) is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.

Every day, the federal government makes decisions and takes actions that have profound impacts on the lives of Americans. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society. OMB’s proposed guidance builds on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework by mandating a set of minimum evaluation, monitoring, and risk mitigation practices derived from these frameworks and tailoring them to the context of the federal government. In particular, the guidance provides direction to agencies across three pillars:

Strengthening AI Governance

To improve coordination, oversight, and leadership for AI, the draft guidance would direct federal departments and agencies to:

  • Designate Chief AI Officers, who would have the responsibility to advise agency leadership on AI, coordinate and track the agency’s AI activities, advance the use of AI in the agency’s mission, and oversee the management of AI risks.
  • Establish internal mechanisms for coordinating the efforts of the many existing officials responsible for issues related to AI. As part of this, large agencies would be required to establish AI Governance Boards, chaired by the Deputy Secretary or equivalent and vice-chaired by the Chief AI Officer.
  • Expand reporting on the ways agencies use AI, including providing additional detail on AI systems’ risks and how the agency is managing those risks.
  • Publish plans for the agency’s compliance with the guidance.

Advancing Responsible AI Innovation

To expand and improve the responsible application of AI to the agency’s mission, the draft guidance would direct federal agencies to:

  • Develop an agency AI strategy, covering areas for future investment as well as plans to improve the agency’s enterprise AI infrastructure, its AI workforce, its capacity to successfully develop and use AI, and its ability to govern AI and manage its risks.
  • Remove unnecessary barriers to the responsible use of AI, including those related to insufficient information technology infrastructure, inadequate data and sharing of data, gaps in the agency’s AI workforce and workforce practices, and cybersecurity approval processes that are poorly suited to AI systems.
  • Explore the use of generative AI in the agency, with adequate safeguards and oversight mechanisms.

Managing Risks from the Use of AI

To ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the draft guidance would:

  • Mandate the implementation of specific safeguards for uses of AI that impact the rights and safety of the public. These safeguards include conducting AI impact assessments and independent evaluations; testing the AI in a real-world context; identifying and mitigating factors contributing to algorithmic discrimination and disparate impacts; monitoring deployed AI; sufficiently training AI operators; ensuring that AI advances equity, dignity, and fairness; consulting with affected groups and incorporating their feedback; notifying and consulting with the public about the use of AI and their plans to achieve consistency with the proposed policy; notifying individuals potentially harmed by a use of AI and offering avenues for remedy; and more.
  • Define uses of AI that are presumed to impact rights and safety, including many uses involved in health, education, employment, housing, federal benefits, law enforcement, immigration, child welfare, transportation, critical infrastructure, and safety and environmental controls.
  • Provide recommendations for managing risk in federal procurement of AI. After finalization of the proposed guidance, OMB will also develop a means to ensure that federal contracts align with its recommendations, as required by the Advancing American AI Act and President Biden’s AI Executive Order of October 30, 2023.

AI is already helping the government better serve the American people, including by improving health outcomes, addressing climate change, and protecting federal agencies from cyber threats. In 2023, federal agencies identified over 700 ways they use AI to advance their missions, and this number is only likely to grow. When AI is used in agency functions, the public deserves assurance that the government will respect their rights and protect their safety.

Some examples of where AI has already been successfully deployed by the Federal government include:

  • Department of Health and Human Services, where AI is used to predict infectious diseases and assist in preparing for potential pandemics, as well as anticipate and mitigate prescription drug shortages and supply chain issues.
  • Department of Energy, where AI is used to predict natural disasters and preemptively prepare for recoveries.
  • Department of Commerce, where AI is used to provide timely and actionable notifications to keep people safe from severe weather events.
  • National Aeronautics and Space Administration, where AI is used to assist in the monitoring of Earth’s environment, which aids in safe execution of mission-planning.
  • Department of Homeland Security, where AI is used to assist cyber forensic specialists to detect anomalies and potential threats in federal civilian networks.

The draft guidance takes a risk-based approach to managing AI harms to avoid unnecessary barriers to government innovation while ensuring that in higher-risk contexts, agencies follow a set of practices to strengthen protections for the public. AI is increasingly common in modern life, and not all uses of AI are equally risky. Many are benign, such as auto-correcting text messages and noise-cancelling headphones. By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public—safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation—the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation.

This announcement is the latest step by the Biden-Harris Administration to advance the safe, secure, and trustworthy development and use of AI, and it is a major milestone for implementing President Biden’s AI Executive Order. The proposed guidance would establish the specific leadership, milestones, and transparency mechanisms to drive and track implementation of these practices. With the current rapid pace of technological development, bold leadership in AI is needed. With this draft guidance, the government is demonstrating that it can lead in AI and ensure that the technology benefits all.

Make your voice heard

To help ensure public trust in the applications of AI, OMB is soliciting public comment on the draft guidance until December 5th, 2023.

Learn more

Read the draft guidance: WH.gov

Submit a public comment: regulations.gov

See the full scope of AI actions from the Biden-Harris Administration: AI.gov

Quick guide on submitting public comments: Link to PDF

Article link: https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/
