healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

What’s next in chips – MIT Technology Review

Posted by timmreardon on 05/15/2024
Posted in: Uncategorized.

How Big Tech, startups, AI devices, and trade wars will transform the way chips are made and the technologies they power.

By James O’Donnellarchive page

    May 13, 2024

    MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

    Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. There is heightened demand for chips that can train AI models faster and ping them from devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

    Here are four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock.

    CHIPS Acts around the world

    On the outskirts of Phoenix, two of the world’s largest chip manufacturers, TSMC and Intel, are racing to construct campuses in the desert that they hope will become the seats of American chipmaking prowess. One thing the efforts have in common is their funding: in March, President Joe Biden announced $8.5 billion in direct federal funds and $11 billion in loans for Intel’s expansions around the country. Weeks later, another $6.6 billion was announced for TSMC. 

    The awards are just a portion of the US subsidies pouring into the chips industry via the $280 billion CHIPS and Science Act signed in 2022. The money means that any company with a foot in the semiconductor ecosystem is analyzing how to restructure its supply chains to benefit from the cash. While much of the money aims to boost American chip manufacturing, there’s room for other players to apply, from equipment makers to niche materials startups.

    But the US is not the only country trying to onshore some of the chipmaking supply chain. Japan is spending $13 billion on its own equivalent to the CHIPS Act, Europe will be spending more than $47 billion, and earlier this year India announced a $15 billion effort to build local chip plants. The roots of this trend go all the way back to 2014, says Chris Miller, a professor at Tufts University and author of Chip War: The Fight for the World’s Most Critical Technology. That’s when China started offering massive subsidies to its chipmakers. 

    “This created a dynamic in which other governments concluded they had no choice but to offer incentives or see firms shift manufacturing to China,” he says. That threat, coupled with the surge in AI, has led Western governments to fund alternatives. In the next year, this might have a snowball effect, with even more countries starting their own programs for fear of being left behind.

    The money is unlikely to lead to brand-new chip competitors or fundamentally restructure who the biggest chip players are, Miller says. Instead, it will mostly incentivize dominant players like TSMC to establish roots in multiple countries. But funding alone won’t be enough to do that quickly—TSMC’s effort to build plants in Arizona has been mired in missed deadlines and labor disputes, and Intel has similarly failed to meet its promised deadlines. And it’s unclear whether, whenever the plants do come online, their equipment and labor force will be capable of the same level of advanced chipmaking that the companies maintain abroad.

    “The supply chain will only shift slowly, over years and decades,” Miller says. “But it is shifting.”

    More AI on the edge

    Currently, most of our interactions with AI models like ChatGPT are done via the cloud. That means that when you ask GPT to pick out an outfit (or to be your boyfriend), your request pings OpenAI’s servers, prompting the model housed there to process it and draw conclusions (known as “inference”) before a response is sent back to you. Relying on the cloud has some drawbacks: it requires internet access, for one, and it also means some of your data is shared with the model maker.  

    That’s why there’s been a lot of interest and investment in edge computing for AI, where the process of pinging the AI model happens directly on your device, like a laptop or smartphone. With the industry increasingly working toward a future in which AI models know a lot about us (Sam Altman described his killer AI app to me as one that knows “absolutely everything about my whole life, every email, every conversation I’ve ever had”), there’s a demand for faster “edge” chips that can run models without sharing private data. These chips face different constraints from the ones in data centers: they typically have to be smaller, cheaper, and more energy efficient. 

    The US Department of Defense is funding a lot of research into fast, private edge computing. In March, its research wing, the Defense Advanced Research Projects Agency (DARPA), announced a partnership with chipmaker EnCharge AI to create an ultra-powerful edge computing chip used for AI inference. EnCharge AI is working to make a chip that enables enhanced privacy but can also operate on very little power. This will make it suitable for military applications like satellites and off-grid surveillance equipment. The company expects to ship the chips in 2025.

    AI models will always rely on the cloud for some applications, but new investment and interest in improving edge computing could bring faster chips, and therefore more AI, to our everyday devices. If edge chips get small and cheap enough, we’re likely to see even more AI-driven “smart devices” in our homes and workplaces. Today, AI models are mostly constrained to data centers.

    “A lot of the challenges that we see in the data center will be overcome,” says EnCharge AI cofounder Naveen Verma. “I expect to see a big focus on the edge. I think it’s going to be critical to getting AI at scale.”

    Big Tech enters the chipmaking fray

    In industries ranging from fast fashion to lawn care, companies are paying exorbitant amounts in computing costs to create and train AI models for their businesses. Examples include models that employees can use to scan and summarize documents, as well as externally facing technologies like virtual agents that can walk you through how to repair your broken fridge. That means demand for cloud computing to train those models is through the roof. 

    The companies providing the bulk of that computing power are Amazon, Microsoft, and Google. For years these tech giants have dreamed of increasing their profit margins by making chips for their data centers in-house rather than buying from companies like Nvidia, a giant with a near monopoly on the most advanced AI training chips and a value larger than the GDP of 183 countries. 

    Amazon started its effort in 2015, acquiring startup Annapurna Labs. Google moved next in 2018 with its own chips called TPUs. Microsoft launched its first AI chips in November, and Meta unveiled a new version of its own AI training chips in April.

    That trend could tilt the scales away from Nvidia. But Nvidia doesn’t only play the role of rival in the eyes of Big Tech: regardless of their own in-house efforts, cloud giants still need its chips for their data centers. That’s partly because their own chipmaking efforts can’t fulfill all their needs, but it’s also because their customers expect to be able to use top-of-the-line Nvidia chips.

    “This is really about giving the customers the choice,” says Rani Borkar, who leads hardware efforts at Microsoft Azure. She says she can’t envision a future in which Microsoft supplies all chips for its cloud services: “We will continue our strong partnerships and deploy chips from all the silicon partners that we work with.”

    As cloud computing giants attempt to poach a bit of market share away from chipmakers, Nvidia is also attempting the converse. Last year the company started its own cloud service so customers can bypass Amazon, Google, or Microsoft and get computing time on Nvidia chips directly. As this dramatic struggle over market share unfolds, the coming year will be about whether customers see Big Tech’s chips as akin to Nvidia’s most advanced chips, or more like their little cousins. 

    Nvidia battles the startups 

    Despite Nvidia’s dominance, there is a wave of investment flowing toward startups that aim to outcompete it in certain slices of the chip market of the future. Those startups all promise faster AI training, but they have different ideas about which flashy computing technology will get them there, from quantum to photonics to reversible computation. 

    But Murat Onen, the 28-year-old founder of one such chip startup, Eva, which he spun out of his PhD work at MIT, is blunt about what it’s like to start a chip company right now.

    “The king of the hill is Nvidia, and that’s the world that we live in,” he says.

    Many of these companies, like SambaNova, Cerebras, and Graphcore, are trying to change the underlying architecture of chips. Imagine an AI accelerator chip as constantly having to shuffle data back and forth between different areas: a piece of information is stored in the memory zone but must move to the processing zone, where a calculation is made, and then be stored back to the memory zone for safekeeping. All that takes time and energy. 

    Related Story

    What’s next for generative video

    OpenAI’s Sora has raised the bar for AI moviemaking. Here are four things to bear in mind as we wrap our heads around what’s coming.

    Making that process more efficient would deliver faster and cheaper AI training to customers, but only if the chipmaker has good enough software to allow the AI training company to seamlessly transition to the new chip. If the software transition is too clunky, model makers such as OpenAI, Anthropic, and Mistral are likely to stick with big-name chipmakers.That means companies taking this approach, like SambaNova, are spending a lot of their time not just on chip design but on software design too.

    Onen is proposing changes one level deeper. Instead of traditional transistors, which have delivered greater efficiency over decades by getting smaller and smaller, he’s using a new component called a proton-gated transistor that he says Eva designed specifically for the mathematical needs of AI training. It allows devices to store and process data in the same place, saving time and computing energy. The idea of using such a component for AI inference dates back to the 1960s, but researchers could never figure out how to use it for AI training, in part because of a materials roadblock—it requires a material that can, among other qualities, precisely control conductivity at room temperature. 

    One day in the lab, “through optimizing these numbers, and getting very lucky, we got the material that we wanted,” Onen says. “All of a sudden, the device is not a science fair project.” That raised the possibility of using such a component at scale. After months of working to confirm that the data was correct, he founded Eva, and the work was published in Science.

    But in a sector where so many founders have promised—and failed—to topple the dominance of the leading chipmakers, Onen frankly admits that it will be years before he’ll know if the design works as intended and if manufacturers will agree to produce it. Leading a company through that uncertainty, he says, requires flexibility and an appetite for skepticism from others.

    “I think sometimes people feel too attached to their ideas, and then kind of feel insecure that if this goes away there won’t be anything next,” he says. “I don’t think I feel that way. I’m still looking for people to challenge us and say this is wrong.”

    Article link: https://www.technologyreview.com/2024/05/13/1092319/whats-next-in-chips/

    DISA unveils strategic plan for next five years

    Posted by timmreardon on 05/03/2024
    Posted in: Uncategorized.

    The document provides a series of strategic and operational imperatives as well as eight goals the agency seeks to achieve by 2030.

    BYMARK POMERLEAU

    MAY 1, 2024

    The Defense Information Systems Agency, charged with operating and maintaining the Department of Defense’s network, unveiled on Wednesday its strategic plan that articulates the organization’s goals over the next five years.

    Building upon the previous plan, released in 2022, the DISA Next Strategy, as it’s called, seeks to align the combat support agency with the 2022 National Defense Strategy and five-year budgeting process to help department leadership and industry partners make more informed decisions for allocating resources.

    The National Defense Strategy articulates a highly dynamic security environment with a multitude of simultaneous threats across the globe — from sophisticated nation-states seeking to undermine U.S. interests and power to non-state actors that still aim to cause disruption. Chief among those threats is China, referred to as the pacing threat.

    “As a combat support agency and the premier IT service provider for the Department of Defense, we will continue to provide world-class services. At the same time, we are changing. We are re-organizing, optimizing and transforming to deliver resilient, survivable and secure capabilities to enable department success and warfighter lethality,” Lt. Gen. Robert Skinner, DISA’s director, said in the forward to the strategy.

    “I am confident the agency will succeed – we have no other choice. We are the combat support agency entrusted with connecting senior leaders and warfighters across the globe 24/7 – often during the most stressful and dangerous moments of their lives. We must continue to deliver while being challenged by great powers. From great power competition with the People’s Republic of China, to supporting operations in emerging geographic areas of national strategic importance, both home and abroad,” he added.

    Within this enhanced security environment, simplicity and speed will be paramount. Thus, priority number one in DISA’s new strategy is the need to simplify the network globally with large-scale adoption of command IT environments. The plan is to consolidate combatant commands and defense agencies and field activities into this environment, serving as a key first step in providing a DOD-wide warfighting information system, Skinner wrote.

    “Communicating our strategy enables a more focused effort across the DOD and ensures that we have addressed the unique challenges of the [combatant commands], their multitude of mission sets, their warfighting functions and account for any domain-specific equities. We seek to avoid unnecessary duplication of capabilities between the CCMDs, DAFAs and military departments,” the strategy states. “We must capitalize on opportunities to simplify the IT environment, experiment with emerging technologies and test our solutions in the environments in which the Joint and Coalition Forces operate. We seek to partner with industry and academia to shape IT innovation towards solving our information system challenges.”

    Furthermore, the plan aims to develop a fully functional enterprise cloud environment and integrate identity, credential and access management (ICAM) and zero-trust capabilities with this common IT and cloud environment.

    The document lists four strategic imperatives that are overarching and important functions the agency must perform, each consisting of more specific operational imperatives. They include:

    • Operate and secure the DISA portion of the DOD Information Network
    • Support strategic command, control and communications
    • Optimize the network
    • Operationalize data

    Additionally, the plan outlines eight goals that provide areas of transformation the agency is focused on over the next five years. They include:

    • The defense information system network: By 2030 DISA has a globally accessible, software defined, transport environment that is unconstrained by bandwidth and impervious to denial or disruption.
    • Hybrid cloud environment: By 2030 DISA is operating a resilient, globally accessible hybrid cloud environment.
    • National leadership command capabilities: By 2030 DISA has modernized its portion of the NLCC fabric to enable national leadership and strategic coordination between allies and partners.
    • Joint and coalition warfighting tools: By 2030 DISA has delivered the right suite of capabilities to enable joint and coalition warfighting and has produced data standards for interoperability of IT solutions.
    • Consolidated network: By 2030 DISA has consolidated DAFAs and CCMDs into a common IT environment that offers seamless access to information at all classification levels.
    • Zero-trust tools: By the fourth quarter of fiscal 2027 DISA’s portion of the DODIN complies with the ZT reference architecture.
    • Data management: By 2030 DISA has a modern data platform for its defensive cyber and network operations data and has implemented standards for data management.
    • Workforce: By 2030 DISA will continue to upskill its workforce to remain “lethal” in the IT environment.

    Article link: https://defensescoop.com/2024/05/01/disa-next-unveil-strategic-plan-2025-2029/

    An AI startup made a hyperrealistic deepfake that’s so good it’s scary – MIT Technology Review 

    Posted by timmreardon on 04/30/2024
    Posted in: Uncategorized.

    In the past, AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. But now, new technology and in-depth data-gathering processes are generating shockingly realistic deepfake videos. MIT Technology Review’s AI reporter, Melissa Heikkilä, saw firsthand just how believable some deepfakes have become. 

    She allowed an AI startup, Synthesia, to create deepfake videos of her. The final products were so good that even she thought it was really her at first. In this edition of What’s Next in Tech, learn how Synthesia gathered the data necessary to create these videos, and what they suggest about a future in which it’s more and more challenging to figure out what’s real and what’s fake.

    What is technology’s role in constructing the future? In the latest issue of MIT Technology Review, we explore how societal, commercial, and cultural factors determine what gets built, how it’s used, and who benefits. To read the full issue and gain access to expert insights and big picture perspectives on the technology topics that matter most to you, subscribe today.

    Synthesia’s new technology is impressive but raises big questions about a world where we increasingly can’t tell what’s real.

    I’m stressed and running late, because what do you wear for the rest of eternity? 

    This makes it sound like I’m dying, but it’s the opposite. I am, in a way, about to live forever, thanks to the AI video startup Synthesia. For the past several years, the company has produced AI-generated avatars, but it has now launched a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything I’ve ever seen. While the release means almost anyone will now be able to make a digital double, on this early April afternoon, before the technology goes public, they’ve agreed to make one of me. 

    When I finally arrive at the company’s stylish studio in East London, I am greeted by Tosin Oshinyemi, the company’s production lead. He is going to guide and direct me through the data collection process—and by “data collection,” I mean the capture of my facial features, mannerisms, and more—much like he normally does for actors and Synthesia’s customers. 

    In AI research, there is a saying: Garbage in, garbage out. If the data that went into training an AI model is trash, that will be reflected in the outputs of the model. The more data points the AI model has captured of my facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be. 

    In the studio, I’m trying really hard not to be garbage. 

    I am standing in front of a green screen and Oshinyemi guides me through the initial calibration process, where I have to move my head and then eyes in a circular motion. Apparently, this will allow the system to understand my natural colors and facial features. I am then asked to say the sentence “All the boys ate a fish,” which will capture all the mouth movements needed to form vowels and consonants. We also film footage of me “idling” in silence.

    He then asks me to read a script for a fictitious YouTuber in different tones, directing me on the spectrum of emotions I should convey. First I’m supposed to read it in a neutral, informative way, then in an encouraging way, an annoyed and complain-y way, and finally an excited, convincing way. 

    We film several takes featuring different variations of the script. In some versions I’m allowed to move my hands around. In others, Oshinyemi asks me to hold a metal pin between my fingers as I do. This is to test the “edges” of the technology’s capabilities when it comes to communicating with hands, Oshinyemi says. 

    Between takes, the makeup artist comes in and does some touch-ups to make sure I look the same in every shot. I can feel myself blushing because of the lights in the studio, but also because of the acting. After the team has collected all the shots it needs to capture my facial expressions, I go downstairs to read more text aloud for voice samples. 

    This process is very different from the way many AI avatars, deepfakes, or synthetic media—whatever you want to call them—are created. Read the story to learn more about the process and watch the final deepfake videos.

    Article link: https://www.linkedin.com/pulse/ai-startup-made-hyperrealistic-deepfake-thats-so-kocke

    1 big thing: AI’s power hunger threatens climate goals – Axios

    Posted by timmreardon on 04/30/2024
    Posted in: Uncategorized.

    By Andrew Freedman

    Artificial intelligence’s thirst for electricity is conspiring with other factors to trigger a sharp spike in electricity demand that the U.S. is only beginning to address.

    Why it matters: How lawmakers, utilities, regulators and tech companies manage this trend may determine whether America’s emissions reduction goals are met, with more fossil fuel-powered plants under consideration in the near term.

    At the same time, the rise of generative AI, which requires far more compute power than typical cloud computing functions, is also contributing to unheard of growth rates in electricity demand for utilities — particularly ones that serve large numbers of data centers.

    • However, while generative AI requires more electricity, this technology and others could be enlisted to help make data centers more flexible in how they use electricity, rather than constantly running near peak demand.

    The big picture: Call it the dark side of the push to electrify everything, from cars to water heaters, incentivized by President Biden’s landmark climate law.

    • While beneficial in the fight against climate change, such shifts are adding to demands on a grid that was already showing signs of strain.

    Zoom in: Take Dominion Energy, for example, which provides electricity to the hundreds of data centers comprising “data center alley” in northern Virginia.

    • The utility is still fully committed to generating 90% to 95% of its electricity from carbon-free sources, according to a company spokesperson.
    • But ravenous data center and residential needs call for installing more baseload power stations that can be reliably dispatched when renewable sources ebb, the company official said.
    • This is leading to planning for new natural gas plants, which burn fossil fuels and can add to global emissions.

    By the numbers: Other utilities are facing the same quandary. Georgia Power, for example, has increased its projected load growth from about 400 megawatts in January 2022 to 6,600 MW more recently.

    Threat level: Generative AI is unlikely to bring down the grid any time soon. But it is forcing data center companies, the Energy Department, utilities and utility regulators to get creative.

    • Energy Secretary Jennifer Granholm told Axios in March that electricity’s AI-driven boost is something that worries her, given the potential to undermine the administration’s climate goals.
    • She tasked the department’s science advisory board with presenting its latest thinking on this issue, which it did on April 9, showing the data center industry as being on the cusp of its third wave of growth.

    The intrigue: David Porter, vice president of electrification and sustainable energy strategy at EPRI, told Axios the biggest challenge facing utilities is building new infrastructure to meet rapidly growing energy demands.

    • This is partly because it can take a decade in the U.S. to approve new transmission lines, whereas data centers can be approved and built within 18 to 24 months.
    • “Those two things do not align very well,” Porter said in an interview.

    Context: Many of the major tech companies advancing generative AI have committed to lofty sustainability targets, and they are signing deals with clean tech startups for advanced carbon-free energy options.

    What they’re saying: With AI compute converging with other demands, “We’re almost in the perfect storm for the demand for green electrons,” Christopher Wellise, VP of sustainability at data center firm Equinix, told Axios

    China has a new plan for judging the safety of generative AI—and it’s packed with details – MIT Technology Review

    Posted by timmreardon on 04/29/2024
    Posted in: Uncategorized.


    A new proposal spells out the very specific ways companies should evaluate AI security and enforce censorship in AI models.

    By Zeyi Yang

    October 18, 2023

    This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

    Ever since the Chinese government passed a law on generative AI back in July, I’ve been wondering how exactly China’s censorship machine would adapt for the AI era. The content produced by generative AI models is more unpredictable than traditional social media. And the law left a lot unclear; for instance, it required companies “that are capable of social mobilization” to submit “security assessments” to government regulators, though it wasn’t clear how the assessment would work. 

    Last week we got some clarity about what all this may look like in practice. 

    On October 11, a Chinese government organization called the National Information Security Standardization Technical Committee released a draft document that proposed detailed rules for how to determine whether a generative AI model is problematic. Often abbreviated as TC260, the committee consults corporate representatives, academics, and regulators to set up tech industry rules on issues ranging from cybersecurity to privacy to IT infrastructure.

    Unlike many manifestos you may have seen about how to regulate AI, this standards document is very detailed: it sets clear criteria for when a data source should be banned from training generative AI, and it gives metrics on the exact number of keywords and sample questions that should be prepared to test out a model.

    Matt Sheehan, a global technology fellow at the Carnegie Endowment for International Peace who flagged the document for me, said that when he first read it, he “felt like it was the most grounded and specific document related to the generative AI regulation.” He added, “This essentially gives companies a rubric or a playbook for how to comply with the generative AI regulations that have a lot of vague requirements.” 

    It also clarifies what companies should consider a “safety risk” in AI models—since Beijing is trying to get rid of both universal concerns, like algorithmic biases, and content that’s only sensitive in the Chinese context. “It’s an adaptation to the already very sophisticated censorship infrastructure,” he says.

    So what do these specific rules look like?

    On training: All AI foundation models are currently trained on many corpora (text and image databases), some of which have biases and unmoderated content. The TC260 standards demand that companies not only diversify the corpora (mixing languages and formats) but also assess the quality of all their training materials.

    How? Companies should randomly sample 4,000 “pieces of data” from one source. If over 5% of the data is considered “illegal and negative information,” this corpus should be blacklisted for future training.

    The percentage may seem low at first, but we don’t know how it compares with real-world data. “For me, that’s pretty interesting. Is 96% of Wikipedia okay?” Sheehan wonders. But the test would likely be easy to pass if the training data set were something like China’s state-owned newspaper archives, which have already been heavily censored, he points out—so companies may rely on them to train their models.

    On the scale of moderation: AI companies should hire “moderators who promptly improve the quality of the generated content based on national policies and third-party complaints.” The document adds that “the size of the moderator team should match the size of the service.” 

    Given that content moderators have already become the largest part of the workforce in companies like ByteDance, it seems likely the human-driven moderation and censorship machine will only grow larger in the AI era.

    On prohibited content: First, companies need to select hundreds of keywords for flagging unsafe or banned content. The standards define eight categories of political content that violates “the core socialist values,” each of which needs to be filled with 200 keywords chosen by the companies; then there are nine categories of “discriminative” content, like discrimination based on religious beliefs, nationality, gender, and age. Each of these needs 100 keywords.

    Then companies need to come up with more than 2,000 prompts (with at least 20 for each category above) that can elicit test responses from the models. Finally, the models need to run tests to guarantee that fewer than 10% of the generated responses break the rules.

    On more sophisticated and subtle censorship: While a lot in the proposed standards is about determining how to carry out censorship, the draft interestingly asks that AI models not make their moderation or censorship too obvious. 

    For example, some current Chinese AI models may refuse to answer any prompt with the text “Xi Jinping” in it. This proposal asks companies to find prompts related to topics like the Chinese political system or revolutionary heroes that are okay to answer, and AI models can only refuse to answer fewer than 5% of them. “It’s saying both ‘Your model can’t say bad things’ [and] ‘We also can’t make it super obvious to the public that we are censoring everything,’” Sheehan explains.

    It’s all fascinating, right? 

    But it’s important to clarify what this document is and isn’t. Even though TC260 receives supervision from Chinese government agencies, these standards are not laws. There are no penalties if companies don’t comply with them. 

    But proposals like this often feed into future laws or work alongside them. And this proposal helps spell out the fine print that’s omitted in China’s AI regulations. “I think companies are going to follow this, and regulators are going to treat these as binding,” Sheehan says.

    It’s also important to think about who is shaping the TC260 standards. Unlike most laws in China, these rules explicitly receive input from experts hired by tech companies and will disclose the contribution after the standards are finalized. These people know the subject matter best, but they also have a financial interest. Companies like Huawei, Alibaba, and Tencent have been heavily influential in the past TC260 standards.

    This means that this document can also be seen as a reflection of how Chinese tech companies want their products to be regulated.Frankly, it’s not wise to hope that regulations never come, and these companies have an incentive to influence how the rules are made.

    As other countries work to regulate AI, I believe, the Chinese AI safety standards will have an immense impact on the global AI industry. At best, they propose technical details for general content moderation; at worst, they signal the beginning of new censorship regimes. 

    This newsletter can only say so much, but there are many more rules in the document that deserve further studying. They could still change—TC260 is seeking feedback on the standards until October 25—but when a final version is out, I’d love to know what people think of it, including AI safety experts in the West. 

    Do you think these detailed requirements are reasonable? Let me know your thoughts by writing to zeyi@technologyreview.com.

    Catch up with China

    1. The European Union reprimanded TikTok—as well as Meta and X—for not doing enough to fight misinformation on the conflict between Israel and Hamas. (Reuters $)

    2. The Epoch Times, a newspaper founded two decades ago by the Falun Gong group as an anti–Chinese Communist Party propaganda channel, now claims to be the fourth-biggest newspaper in the US by subscriber count, a success it achieved by embracing right-wing politics and conspiracy theories. (NBC News)

    3. Midjourney, the popular image-making AI software, isn’t creative or knowledgeable when it responds to the prompt “a plate of Chinese food.” Other prompts reveal even more cultural stereotypes embedded in AI. (Rest of World)

    4. China plans to increase the country’s computing power by 50% between now and 2025. How? By building more data centers, using them more efficiently, and improving on data storage technologies. (CNBC)

    5. India’s financial crimes agency arrested a Chinese employee of smartphone maker Vivo after the company—the second-largest smartphone brand in India—was accused of transferring funds illegally to a news website that has been linked to Chinese propaganda efforts. (BBC)

    6. Leaked internal Huawei communications show how the company tried to cultivate relationships with high-ranking Greek officials and push the limits of the country’s anticorruption laws. (New York Times $)

    7. US Senate Majority Leader Chuck Schumer and five other senators visited Beijing and met with Chinese president Xi Jinping last week. The war between Israel and Hamas was the focus of their conversation. (Associated Press)

    8. Cheng Lei, an Australian citizen who worked in China as a business reporter, was finally released from Chinese detention after three years. (BBC)

    Lost in translation

    As Chinese TVs and projectors get smarter, the user experience has also become more frustrating amid an the inundation of advertisements. According to the Chinese tech publication Leikeji, many smart TVs force users to watch an ad, sometimes 40 seconds long, whenever they turn on the TV. Even though there are regulations in place that require TV makers to offer a “skip” button, these options are often hidden in the deepest corners of system settings. Users also complained about TV providers that require multiple payments for different levels of content access, making it too complicated to watch their favorite shows.

    Earlier this year, the Chinese State Administration of Radio, Film, and Television began to address these concerns. A new government initiative aims to ensure that 80% of cable TV users and 85% of streaming users can immediately access live TV channels after turning on their TVs. Some TV makers, like Xiaomi, are also belatedly offering the option to permanently disable opening ads.

    One more thing

    What do you look for the most when you’re dating? If you answer, “They have to work for the government,” you should come to Zhejiang, China. The internal communications app for Zhejiang government workershas a feature where people can swipe left and right on the dating profiles of other single government employees. Apparently, the Chinese government is endorsing office romances.

    Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/10/18/1081846/generative-ai-safety-censorship-china/amp/

    Analyzing the Rise of AI: Insights from RAND

    Posted by timmreardon on 04/29/2024
    Posted in: Uncategorized.

    In this page:

    • Military, Defense, and National Security
    • Health and Well-Being
    • Misinformation, Disinformation, and Media Literacy
    • Jobs, Workers, and the Economy
    • Technology Governance and Regulation
    • Featured Experts
    Conceptual represenatation of the effects of artificial intelligence, image by Florence Lo/Reuters; design by Haley Okuley/RAND

    Image by Florence Lo/Reuters; design by Haley Okuley/RAND

    Artificial intelligence—from machine learning that’s already widely used today to the possible artificial general intelligence of the future—has the power to transform the way we live, work, and interact.

    AI tools are evolving quickly, and decisionmakers are grappling with how to maximize the potential benefits, minimize the short- and long-term risks, and plan for an uncertain future.

    RAND’s rigorous and independent research can help. Our experts have been studying a wide range of questions about the effects and uses of AI: Which jobs are likely to be most affected? How might AI tools be used to support military decisionmaking? What is required to ensure that algorithms don’t worsen inequity?

    Answers to these and other important questions can help leaders and policymakers better understand AI and make informed decisions about how to balance promoting innovation while safeguarding against any dangers.

    FEATURED INSIGHTS

    • Q&AIs AI an Existential Risk? Q&A with RAND Experts
    • REPORTUsing Artificial Intelligence Tools in K–12 Classrooms
    • REPORTAddressing the Challenges of Algorithmic Equity 

    Get updates from RAND delivered straight to your inbox.EmailSIGN UP

    Military, Defense, and National Security

    • REPORTDoes AI Increase the Operational Risk of Biological Attacks?When researchers role-playing as malign nonstate actors were assigned to realistic scenarios and tasked with planning a biological attack, there was no statistically significant difference in the viability of plans generated with or without the assistance of the current generation of large language models.Jan 25, 2024
    • COMMENTARYBuilding a Foundation for Strategic Stability with China on AIApr 2, 2024
    • REPORTPrivate-Sector Innovation Could Help Defend TaiwanMar 5, 2024
    • REPORTAUKUS Collaboration on Responsible Military AIFeb 6, 2024
    • REPORTCan Machine Learning Improve Military Decisionmaking?Sep 19, 2023
    • REPORTArtificial Intelligence Systems in Intelligence AnalysisAug 26, 2021
    • REPORTArmy Analytic CapabilitiesApr 12, 2021
    • COMMENTARYBridging Tech and Humanity: The Role of Foundation Models in Reducing Civilian HarmOct 17, 2023
    • REPORTExploring the Feasibility and Utility of Machine Learning-Assisted Command and ControlJul 15, 2021
    • REPORTTechnology Innovation and the Future of Air Force Intelligence Analysis: Findings and RecommendationsJan 27, 2021
    • RESEARCH BRIEFThe Department of Defense’s Posture for Artificial Intelligence: Assessment and Recommendations for ImprovementJan 26, 2021

    Health and Well-Being

    • REPORTUsing AI to Identify Youth Suicide Risk: What Does the Evidence Say?In response to the youth mental health crisis, some schools have begun using artificial intelligence to help identify students at risk for suicide and self-harm. How are these tools being used? Are they accurate? And what risks might they bring?Dec 5, 2023
    • COMMENTARYRobots, Drones, and AI, Oh My: Navigating the New Frontier of Military MedicineJan 8, 2024
    • COMMENTARYNations Must Collaborate on AI and Biotech—or Be Left BehindOct 31, 2023
    • REPORTMachine Learning and Gene Editing at the Helm of a Societal EvolutionOct 23, 2023
    • COMMENTARYProgress or Peril? The Brave New World of Self-Driving Science LabsSep 18, 2023
    • JOURNAL ARTICLEUsing Claims-Based Algorithms to Predict Activity, Mobility, and Memory LimitationsAug 16, 2023
    • JOURNAL ARTICLEArtificial Intelligence (AI) use in adult social careJun 30, 2023
    • JOURNAL ARTICLEArtificial Intelligence in the COVID-19 ResponseJun 22, 2023
    • ARTICLEThe Internet of Bodies Will Change Everything, for Better or WorseOct 29, 2020

    Misinformation, Disinformation, and Media Literacy

    • COMMENTARYU.S. Adversaries Can Use Generative AI for Social Media ManipulationUsing generative artificial intelligence technology, U.S. adversaries can manufacture fake social media accounts that seem real. These accounts can be used to advance narratives that serve the interests of those governments and pose a direct challenge to democracies. U.S. government, technology, and policy communities should act fast to counter this threat.Sep 7, 2023
    • COMMENTARYBiden Should Call China’s Bluff on Responsible AI to Safeguard the 2024 ElectionsNov 14, 2023
    • JOURNAL ARTICLECan People Identify Deepfakes?Aug 23, 2023
    • COMMENTARYThe AI Conspiracy Theories Are ComingJun 22, 2023
    • COMMENTARYThe Threat of DeepfakesJul 6, 2022
    • REPORTMachine Learning Can Detect Online Conspiracy TheoriesApr 29, 2021

    Jobs, Workers, and the Economy 

    • VISUALIZATIONRage Against the Machine? How AI Could Affect the Future of WorkUnderstanding how technology and artificial intelligence have—and have not—affected jobs in the past can provide insights on the future of the American workforce. What is the relationship between occupational exposure and technologies, wages, and employment related to artificial intelligence?Oct 11, 2023
    • REPORTWill We Hold Algorithms Accountable for Bad Decisions?Oct 12, 2023
    • REPORTTechnological and Economic Threats to the U.S. Financial SystemFeb 13, 2024
    • REPORTAdvancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine LearningJun 13, 2023
    • REPORTCan Artificial Intelligence Help Improve Air Force Talent Management?Jan 19, 2021
    • COMMENTARYMoney, Markets, and Machine Learning: Unpacking the Risks of Adversarial AIAug 31, 2023

    Technology Governance and Regulation

    • TESTIMONYAdvancing Trustworthy Artificial Intelligence The United States can make safety a differentiator for the artificial intelligence (AI) industry, just as it did for the early aviation, automotive, and pharmaceutical industries. Government involvement in safety standards could build consumer trust in AI that strengthens the U.S. position as a market leader. Jun 22, 2023
    • COMMENTARYGenerative Artificial Intelligence Threats to Information IntegrityApr 16, 2024
    • COMMENTARYPolicymaking Needs to Get Ahead of Artificial IntelligenceJan 12, 2024
    • COMMENTARYThe Case for and Against AI WatermarkingJan 17, 2024
    • COMMENTARYPhilosophical Debates About AI Risks Are a DistractionDec 22, 2023
    • TESTIMONYPreparing the Federal Response to Advanced TechnologiesSep 19, 2023
    • COMMENTARYA Model for Regulating AIAug 16, 2023
    • COMMENTARYTackling the Existential Threats from Artificial IntelligenceJul 11, 2023
    • TESTIMONYEnsuring That Government Use of Technology Serves the Public Jun 22, 2023
    • TESTIMONYArtificial Intelligence: Challenges and Opportunities for the Department of DefenseApr 19, 2023
    • TESTIMONYChallenges to U.S. National Security and Competitiveness Posed by Artificial IntelligenceMar 8, 2023

    RAND Research That Uses AI

    Beyond studying the nexus of AI and public policy, RAND experts regularly use AI in their research—often in novel ways. Here’s just a small sample of RAND studies that put AI to work:

    • Conflict Projections in U.S. Central Command: Incorporating Climate Change2023
    • Deception Detection 2022
    • Facts Versus Opinions: How the Style and Language of News Presentation Is Changing in the Digital Age 2019
    • Monitoring Social Media: Lessons for Future Department of Defense Social Media Analysis in Support of Information Operations 2017
    • Examining ISIS Support and Opposition Networks on Twitter 2016

    Featured Experts

    Scores of RAND researchers are studying AI from countless angles, providing key insights that can inform the use and regulation of AI tools now and in the future.

    • I don’t think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it’s ultimately something that we can recover from. Jonathan W. WelburnSenior ResearcherSource: rand.org
    • From our recent research, it appears that extremist groups have been testing AI tools, including chatbots, but there seems to be little evidence of large-scale coordinated efforts in this space. However, chatbots are likely to present a risk, as they are capable of recognising and exploiting emotional vulnerabilities and can encourage violent behaviours.Pauline PailléSenior AnalystSource: Euronews
    • I don’t think AI is going to breed a population of people who can’t think for themselves. I actually think there’s a lot of promise in what AI can do to facilitate teaching, to facilitate critical thinking, and to teach in ways that previously we had been unable to teach.Christopher Joseph DossPolicy ResearcherSource: Education Week

    See all RAND AI experts

    See all RAND AI content

    Article link: https://www.rand.org/latest/artificial-intelligence.html?

    Who should be the head of generative AI — and what they should do – MIT Sloan Management

    Posted by timmreardon on 04/27/2024
    Posted in: Uncategorized.


    by Kara Baskin

     Mar 28, 2024


    Adopting artificial intelligence takes time and proper leadership. Heads of generative AI manage how companies use the technology to manage people and help set the right tone. 

    Generative artificial intelligence has rapidly become an organizational priority. After it was launched in late 2022, ChatGPT had 100 million active users in less than two months, and many executives now consider managing AI’s impact to be a leadership priority

    But adoption takes time and requires proper stewardship. In a recent webinarhosted by MIT Sloan Management Review, London Business School professor Lynda Gratton discussed the new roles companies are creating to guide AI initiatives, the job responsibilities of these leaders, and how companies can use AI for people management.

    Introducing the “head of generative AI”

    Many companies are introducing positions to guide generative AI use. Last year, Coca-Cola moved Pratik Thakar from his role as head of global creative strategy to senior director of generative AI. Likewise, advertising agency M&C Saatchi named chief data strategy officer James Calvert its new head of generative AI, Gratton noted.

    She pinpointed five key job responsibilities for those in these new roles:

    1. Steering the strategic direction and alignment of AI work.
    2. Establishing and sustaining a collaborative generative AI ecosystem.
    3. Monitoring and evaluating generative AI experiments to inform best practices.
    4. Identifying high-impact use cases for scalability.
    5. Overseeing generative AI integration across business units.

    Some generative AI leaders might have a creative background; others could come from tech. Gratton said background matters less than a willingness to experiment.

    “You want somebody who’s got an experimental mindset, who sees this as a learning opportunity and sees it as an organizational structuring issue,” she said. “The innovation part is what’s really crucial.”

    Ways to use AI for people management 

    The head of AI could encourage use of the technology to help with managing employees, Gratton said. This encompasses three key areas:

    Talent development. Companies can use chatbots and other tools to recruit people and help them manage their careers.

    Productivity. AI can be used to create assessments, give feedback, manage collaboration, and provide skills training.

    Change management. This includes both internal and external knowledge management. “We have so much knowledge in our organizations … but we don’t know how to find it,” Gratton said. “And it seems to me that this is an area that we’re really focusing on in terms of generative AI.”

    Trust, collaboration, and open-mindedness are essential

    As companies embark on their generative AI journeys and add leadership in this area, transparency is key.

    Leadership insights for innovating amid change3 ways to center humans in artificial intelligence efforts

    “Employees need to feel that they trust their managers,” Gratton said. “This isn’t just a piece about technology; it’s also a piece about how you manage change.”

    Some workers fear that AI could replace them. A good leader will address those concerns and generate companywide enthusiasm across roles and age groups.

    “The CEO and the leadership team have got to set a narrative. When you have high levels of ambiguity, the job of a leader is to tell stories about the future, to help people think: ‘This is where we’re walking; this is why it’s happening; this is how we feel about it,’” she said. This includes getting everyone in the organization involved in thinking about how to use generative AI.

    Leaders should remember that buy-in across all career stages and skill levels is essential. Generative AI isn’t just the domain of youth.

    “We shouldn’t stereotype the idea that it’s only young people who are thinking about this. In fact, if you look at people at every stage in their working life, they are very interested — and there’s even some data that show the over-50s are really working to experiment and to understand generative AI,” Gratton said. “Don’t be blinkered about what you expect your employees to do.”

    Article link: https://mitsloan.mit.edu/ideas-made-to-matter/who-should-be-head-generative-ai-and-what-they-should-do?

    AI Report Shows ‘Startlingly Rapid’ Progress—And Ballooning Costs – Scientific American

    Posted by timmreardon on 04/26/2024
    Posted in: Uncategorized.

    A new report finds that AI matches or outperforms people at tasks such as competitive math and reading comprehension

    BY NICOLA JONES & NATURE MAGAZINE

    Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now very nearly match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report. Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.

    These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.

    In particular, the report says, new ways of assessing AI — for example, evaluating their performance on complex tasks, such as abstraction and reasoning — are more and more necessary. “A decade ago, benchmarks would serve the community for 5–10 years” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”

    Stanford’s annual AI Index, first published in 2017, is compiled by a group of academic and industry specialists to assess the field’s technical capabilities, costs, ethics and more — with an eye towards informing researchers, policymakers and the public. This year’s report, which is more than 400 pages long and was copy-edited and tightened with the aid of AI tools, notes that AI-related regulation in the United States is sharply rising. But the lack of standardized assessments for responsible use of AI makes it difficult to compare systems in terms of the risks that they pose.

    The rising use of AI in science is also highlighted in this year’s edition: for the first time, it dedicates an entire chapter to science applications, highlighting projects including Graph Networks for Materials Exploration (GNoME), a project from Google DeepMind that aims to help chemists discover materials, and GraphCast, another DeepMind tool, which does rapid weather forecasting.

    GROWING UP

    The current AI boom — built on neural networks and machine-learning algorithms — dates back to the early 2010s. The field has since rapidly expanded. For example, the number of AI coding projects on GitHub, a common platform for sharing code, increased from about 800 in 2011 to 1.8 million last year. And journal publications about AI roughly tripled over this period, the report says.

    Much of the cutting-edge work on AI is being done in industry: that sector produced 51 notable machine-learning systems last year, whereas academic researchers contributed 15. “Academic work is shifting to analysing the models coming out of companies — doing a deeper dive into their weaknesses,” says Raymond Mooney, director of the AI Lab at the University of Texas at Austin, who wasn’t involved in the report.

    That includes developing tougher tests to assess the visual, mathematical and even moral-reasoning capabilities of large language models (LLMs), which power chatbots. One of the latest tests is the Graduate-Level Google-Proof Q&A Benchmark (GPQA), developed last year by a team including machine-learning researcher David Rein at New York University.

    The GPQA, consisting of more than 400 multiple-choice questions, is tough: PhD-level scholars could correctly answer questions in their field 65% of the time. The same scholars, when attempting to answer questions outside their field, scored only 34%, despite having access to the Internet during the test (randomly selecting answers would yield a score of 25%). As of last year, AI systems scored about 30–40%. This year, Rein says, Claude 3 — the latest chatbot released by AI company Anthropic, based in San Francisco, California — scored about 60%. “The rate of progress is pretty shocking to a lot of people, me included,” Rein adds. “It’s quite difficult to make a benchmark that survives for more than a few years.”

    COST OF BUSINESS

    As performance is skyrocketing, so are costs. GPT-4 — the LLM that powers ChatGPT and that was released in March 2023 by San Francisco-based firm OpenAI — reportedly cost US$78 million to train. Google’s chatbot Gemini Ultra, launched in December, cost $191 million. Many people are concerned about the energy use of these systems, as well as the amount of water needed to cool the data centres that help to run them. “These systems are impressive, but they’re also very inefficient,” Maslej says.

    Costs and energy use for AI models are high in large part because one of the main ways to make current systems better is to make them bigger. This means training them on ever-larger stocks of text and images. The AI Index notes that some researchers now worry about running out of training data. Last year, according to the report, the non-profit research institute Epoch projected that we might exhaust supplies of high-quality language data as soon as this year. (However, the institute’s most recent analysis suggests that 2028 is a better estimate.)

    Ethical concerns about how AI is built and used are also mounting. “People are way more nervous about AI than ever before, both in the United States and across the globe,” says Maslej, who sees signs of a growing international divide. “There are now some countries very excited about AI, and others that are very pessimistic.”

    In the United States, the report notes a steep rise in regulatory interest. In 2016, there was just one US regulation that mentioned AI; last year, there were 25. “After 2022, there’s a massive spike in the number of AI-related bills that have been proposed” by policymakers, Maslej says.

    Regulatory action is increasingly focused on promoting responsible AI use. Although benchmarks are emerging that can score metrics such as an AI tool’s truthfulness, bias and even likability, not everyone is using the same models, Maslej says, which makes cross-comparisons hard. “This is a really important topic,” he says. “We need to bring the community together on this.”

    This article is reproduced with permission and was first published on April 15, 2024.

    Article link: https://www.scientificamerican.com/article/stanford-ai-index-rapid-progress/?

    Advanced SAM.gov | Understanding Notice Types

    Posted by timmreardon on 04/23/2024
    Posted in: Uncategorized.

    The main tool federal agencies use to communicate with government contractors is SAM.gov. Become an advance user of the buyers’ communication tool to win more.

    This article summarizes my slides from a training I did today (4/22/2024) about the 9 Notice Types in SAM.

    Understand SAM’s Fit in Federal Buyer Toolbox

    Federal agency buyers communicate with government contractors (industry) using multiple tools. Here are a few of the most important ones:

    • System for Award Management (SAM)
    • Federal Procurement Data System (FPDS)
    • USASpending | Primarily a nice dashboard of FPDS data
    • Contractor Performance Assessment Reporting System (CPARS) | This is where federal buyers communicate to each other and to incumbents about contract performance.
    • Bid Boards (e.g., DIBBS, ARC, etc.) | DIBBS is a DLA based tool primarily for product purchases. ARC is the tool used by the Intelligence Community primarily since there are classification concerns.
    • Contract Vehicle Portals | For example, Seaport NxG is the Navy’s primary contract vehicle. Notices appearing here, do not appear in SAM.

    The 9 Notice Types in SAM

    1. Special Notice | Heads up on Industry Days, APBI, or LRAFs
    2. Sources Sought | Seeking possible vendors
    3. Presolicitation | Makes known a solicitation may follow soon
    4. Consolidate / (substantially) Bundle | Intent to Bundle Requirements
    5. Solicitation | RFQ, RFP, etc.
    6. Combined Synopsis / Solicitation | Solicitations with specifications
    7. Award Notice | Vendor who received an award and the amount agreed
    8. Justification | Needed when a solicitation isn’t posted and one vendor used
    9. Sale of Surplus Property | Usually for real estate no longer needed

    Importance of Each Notice Type

    I racked and stacked the notice types based on their value to your company. The rationale I used is ‘shift left’ – any notice that helps you learn about and engage on an opportunity sooner is better.

    Low Value

    • Award Notice
    • Consolidate / (substantially) Bundle
    • Justification
    • Sale of Surplus Property

    Medium Value

    • Combined Synopsis / Solicitation
    • Solicitation

    High Value

    • Sources Sought
    • Special Notices
    • Presolicitation

    How to Rapidly Process Many SAM.Gov Notices

    With hundreds or even thousands of opportunities being processed in SAM, you need a way to process them fast to find those best for you.

    Before you follow my three-step approach below, make sure you document your ‘Slam Dunk’ opportunity criteria. Watch my other training to understand how to define your ‘slam dunk’ criteria in 30 minutes or less.

    Three Step Approach

    1. Triage By Title | Seconds Per Opportunity | Use Tabs
    2. Triage By Description | SAM Field Only
    3. Triage By Files | Before Adding to Pipeline

    Video Replay of Notice Types in SAM.Gov

    Here’s a replay from the training I did here on LinkedIn about the 9 types of notice within the System for Awards Management (SAM).

    View media

    Article link: https://www.linkedin.com/pulse/advanced-samgov-understanding-notice-types-neil-mcdonnell-gnsee

    The who, what, and where of AI adoption in America – MIT Sloan

    Posted by timmreardon on 04/20/2024
    Posted in: Uncategorized.

    by Brian Eastwood

    Feb 7, 2024

    Why It Matters

    A new study finds that artificial intelligence is being adopted unevenly in the U.S., with use clustered in large companies and industries such as manufacturing and health care.

    It’s not hard to find headlines that suggest artificial intelligence is taking over the business world, from content creation to decision support and process automation.

    But reality looks different. A new working paper from the National Bureau of Economic Research about early adoption of AI in the U.S. provides a more nuanced look at which companies are adopting AI, where they are located, and what technologies they are using.

    The research shows variation in AI adoption, according to Kristina McElheran, a visiting scholar with the MIT Initiative on the Digital Economy and the paper’s lead author. Just 6% of U.S. companies used AI in 2017, the researchers found, and AI use was concentrated in larger companies and in industries such as manufacturing and information technology. Adoption was also clustered in some “superstar” cities, such as San Francisco, San Antonio, and Nashville.

    “The narrative is that AI is everywhere all at once, but the data shows it’s harder to do than people seem interested in discussing,” said McElheran, an assistant professor at the University of Toronto.

    “The digital age has arrived, but it has arrived unevenly,” she said.  

    AI use in America: large companies, certain sectors 

    Research about AI adoption tends to focus on indirect measures of economic activity that refer to AI use — patents, academic publications, or job descriptions that mention AI, McElheran said.

    For a more direct measurement, the researchers joined forces with the U.S. Census Bureau and the National Center for Science and Engineering Statistics to conduct the newly developed Annual Business Survey beginning in 2018. The survey asked firms to describe their use of digital information, cloud computing, types of AI, and other advanced technologies in the prior year. The researchers took data from 447,000 responses from the 2018 survey, linked it to 2017 data in the Census Bureau’s Longitudinal Business Database, and weighted it to represent more than 4 million firms nationwide.

    The researchers defined AI adoption as using AI for production — “not in invention, not in aspiration, and not even in commercialization from firms that are selling things that rely on AI,” McElheran said.

    The finding that just 6% of companies reported using AI in 2017 is still relevant today, McElheran said,  pointing to a November 2023 Census Bureau survey that showed that fewer than 4% of companies use AI to produce goods and services.

    The initial, in-depth survey showed other early trends:

    • AI use was highest among large companies. More than 50% of companies with more than 5,000 employees were using AI, as were more than 60% of companies with more than 10,000 employees.
    • Use varied among sectors. About 12% of firms in manufacturing, information services, and health care were using AI, compared with 4% in construction and retail.
    • AI adoption is happening in some superstar cities, but it has also clustered in some unlikely places. These include manufacturing hubs in the Midwest as well as Southern cities with fewer companies overall than tech hubs in Silicon Valley, the Boston area, or New York City. “Use of AI in production is happening in different places than just the areas that are inventing and commercializing AI-based technologies,” McElheran said.

    Startups that embrace AI have younger leaders 

    To help determine the characteristics of companies that are more likely to use AI, the researchers identified 75,000 startups that participated in the 2018 Annual Business Survey and weighted their responses to represent 740,000 firms.

    The researchers found that startups using AI were more likely to have younger, more highly educated, and more highly experienced leaders than startups that were not using AI. Venture capital backing and a focus on process innovation were also associated with AI adoption.  

    “The firms that have other things going for them tend to be the ones that can leverage bleeding-edge technology like AI,” McElheran said. “The ability to reconfigure how work gets done and how things get made is an important predictor of whether AI is used in production.”

    This matters when comparing AI to other types of general-purpose technologies. Innovations such as enterprise software are complex implementations that depend on a completely different set of workflows.

    But AI is more similar to a point solution, McElheran said. “At an incremental level, you can transform a given task, or replicate an individual human task,” she said. “It’s not suddenly everywhere all at once.”

    This blessing can quickly become a curse, though. Innovate one part of a system, McElheran noted, and the rest of the system needs to innovate at the same pace. Otherwise, “things start to come unglued.” That’s why firms focusing on process innovation — and benefiting from the resources necessary to move process innovation along — are more likely than others to be using AI.

    Some of those AI users are in sectors not typically associated with cutting-edge technology, such as manufacturing and health care. The former is closely linked to manufacturing’s use of robotics. The latter stems from a range of use cases, from optimizing operating room schedules to automating back-office coding and billing processes.

    AI adoption requires overcoming inertia and adjustment costs

    Ultimately, the biggest barriers to AI adoption may be inertia and adjustment costs. This was true with the internet, word processors, and even double-entry bookkeeping. Both factors exist for good reasons, McElheran said, and shouldn’t be discounted.

    Routine is embedded in work practices at many companies. “What do you do at the office every Monday morning?” McElheran asked. “Very few people start with a blank slate to redesign the activities that occupy their time and attention. For reasons we’ve known since the steam engine, it takes a while for firms and for people to adjust.” While helpful for day-to-day operations, routines tend to work against change.

    Adopting new technology also typical entails costs somewhere. Firms are primed to use it, and consumers are primed to benefit from it, but these gains don’t come for free. Competition can lead to job losses and other economic adjustments. Firms prioritize workers who already possess the skills to use new tech. As noted in another paper co-authored by McElheran, this means workers over age 50 often miss out on the same salary increases their younger colleagues enjoy from digital transformation.

    “When we see trends with upside potential, we can’t ignore the dark side that can overturn the aspirations that people have for their jobs and their children” McElheran said. “We need an approach to AI that is realistic and evidence-based about both the benefits and costs for different pockets of the economy and society.”

    The paper is authored by McElheran; University of British Columbia professor J. Frank Li; Stanford University professor Erik Brynjolfsson, PhD ’91; and U.S. Census Bureau economists Zachary Kroff, Emin Dinlersoz, Lucia S. Foster, and Nikolas Zolas.

    Article link: https://mitsloan.mit.edu/ideas-made-to-matter/who-what-and-where-ai-adoption-america?

    Posts navigation

    ← Older Entries
    Newer Entries →
    • Search site

    • Follow healthcarereimagined on WordPress.com
    • Recent Posts

      • WHAT A QUBIT IS AND WHAT IT IS NOT. 01/25/2026
      • Governance Before Crisis We still have time to get this right. 01/21/2026
      • On the Eve of Davos: We’re Just Arguing About the Wrong Thing 01/18/2026
      • Are AI Companies Actually Ready to Play God? – RAND 01/17/2026
      • ChatGPT Health Is a Terrible Idea 01/09/2026
      • Choose the human path for AI – MIT Sloan 01/09/2026
      • Why AI predictions are so hard – MIT Technology Review 01/07/2026
      • Will AI make us crazy? – Bulletin of the Atomic Scientists 01/04/2026
      • Decisions about AI will last decades. Researchers need better frameworks – Bulletin of the Atomic Scientists 12/29/2025
      • Quantum computing reality check: What business needs to know now – MIT Sloan 12/29/2025
    • Categories

      • Accountable Care Organizations
      • ACOs
      • AHRQ
      • American Board of Internal Medicine
      • Big Data
      • Blue Button
      • Board Certification
      • Cancer Treatment
      • Data Science
      • Digital Services Playbook
      • DoD
      • EHR Interoperability
      • EHR Usability
      • Emergency Medicine
      • FDA
      • FDASIA
      • GAO Reports
      • Genetic Data
      • Genetic Research
      • Genomic Data
      • Global Standards
      • Health Care Costs
      • Health Care Economics
      • Health IT adoption
      • Health Outcomes
      • Healthcare Delivery
      • Healthcare Informatics
      • Healthcare Outcomes
      • Healthcare Security
      • Helathcare Delivery
      • HHS
      • HIPAA
      • ICD-10
      • Innovation
      • Integrated Electronic Health Records
      • IT Acquisition
      • JASONS
      • Lab Report Access
      • Military Health System Reform
      • Mobile Health
      • Mobile Healthcare
      • National Health IT System
      • NSF
      • ONC Reports to Congress
      • Oncology
      • Open Data
      • Patient Centered Medical Home
      • Patient Portals
      • PCMH
      • Precision Medicine
      • Primary Care
      • Public Health
      • Quadruple Aim
      • Quality Measures
      • Rehab Medicine
      • TechFAR Handbook
      • Triple Aim
      • U.S. Air Force Medicine
      • U.S. Army
      • U.S. Army Medicine
      • U.S. Navy Medicine
      • U.S. Surgeon General
      • Uncategorized
      • Value-based Care
      • Veterans Affairs
      • Warrior Transistion Units
      • XPRIZE
    • Archives

      • January 2026 (8)
      • December 2025 (11)
      • November 2025 (9)
      • October 2025 (10)
      • September 2025 (4)
      • August 2025 (7)
      • July 2025 (2)
      • June 2025 (9)
      • May 2025 (4)
      • April 2025 (11)
      • March 2025 (11)
      • February 2025 (10)
      • January 2025 (12)
      • December 2024 (12)
      • November 2024 (7)
      • October 2024 (5)
      • September 2024 (9)
      • August 2024 (10)
      • July 2024 (13)
      • June 2024 (18)
      • May 2024 (10)
      • April 2024 (19)
      • March 2024 (35)
      • February 2024 (23)
      • January 2024 (16)
      • December 2023 (22)
      • November 2023 (38)
      • October 2023 (24)
      • September 2023 (24)
      • August 2023 (34)
      • July 2023 (33)
      • June 2023 (30)
      • May 2023 (35)
      • April 2023 (30)
      • March 2023 (30)
      • February 2023 (15)
      • January 2023 (17)
      • December 2022 (10)
      • November 2022 (7)
      • October 2022 (22)
      • September 2022 (16)
      • August 2022 (33)
      • July 2022 (28)
      • June 2022 (42)
      • May 2022 (53)
      • April 2022 (35)
      • March 2022 (37)
      • February 2022 (21)
      • January 2022 (28)
      • December 2021 (23)
      • November 2021 (12)
      • October 2021 (10)
      • September 2021 (4)
      • August 2021 (4)
      • July 2021 (4)
      • May 2021 (3)
      • April 2021 (1)
      • March 2021 (2)
      • February 2021 (1)
      • January 2021 (4)
      • December 2020 (7)
      • November 2020 (2)
      • October 2020 (4)
      • September 2020 (7)
      • August 2020 (11)
      • July 2020 (3)
      • June 2020 (5)
      • April 2020 (3)
      • March 2020 (1)
      • February 2020 (1)
      • January 2020 (2)
      • December 2019 (2)
      • November 2019 (1)
      • September 2019 (4)
      • August 2019 (3)
      • July 2019 (5)
      • June 2019 (10)
      • May 2019 (8)
      • April 2019 (6)
      • March 2019 (7)
      • February 2019 (17)
      • January 2019 (14)
      • December 2018 (10)
      • November 2018 (20)
      • October 2018 (14)
      • September 2018 (27)
      • August 2018 (19)
      • July 2018 (16)
      • June 2018 (18)
      • May 2018 (28)
      • April 2018 (3)
      • March 2018 (11)
      • February 2018 (5)
      • January 2018 (10)
      • December 2017 (20)
      • November 2017 (30)
      • October 2017 (33)
      • September 2017 (11)
      • August 2017 (13)
      • July 2017 (9)
      • June 2017 (8)
      • May 2017 (9)
      • April 2017 (4)
      • March 2017 (12)
      • December 2016 (3)
      • September 2016 (4)
      • August 2016 (1)
      • July 2016 (7)
      • June 2016 (7)
      • April 2016 (4)
      • March 2016 (7)
      • February 2016 (1)
      • January 2016 (3)
      • November 2015 (3)
      • October 2015 (2)
      • September 2015 (9)
      • August 2015 (6)
      • June 2015 (5)
      • May 2015 (6)
      • April 2015 (3)
      • March 2015 (16)
      • February 2015 (10)
      • January 2015 (16)
      • December 2014 (9)
      • November 2014 (7)
      • October 2014 (21)
      • September 2014 (8)
      • August 2014 (9)
      • July 2014 (7)
      • June 2014 (5)
      • May 2014 (8)
      • April 2014 (19)
      • March 2014 (8)
      • February 2014 (9)
      • January 2014 (31)
      • December 2013 (23)
      • November 2013 (48)
      • October 2013 (25)
    • Tags

      Business Defense Department Department of Veterans Affairs EHealth EHR Electronic health record Food and Drug Administration Health Health informatics Health Information Exchange Health information technology Health system HIE Hospital IBM Mayo Clinic Medicare Medicine Military Health System Patient Patient portal Patient Protection and Affordable Care Act United States United States Department of Defense United States Department of Veterans Affairs
    • Upcoming Events

    Blog at WordPress.com.
    healthcarereimagined
    Blog at WordPress.com.
    • Subscribe Subscribed
      • healthcarereimagined
      • Join 153 other subscribers
      • Already have a WordPress.com account? Log in now.
      • healthcarereimagined
      • Subscribe Subscribed
      • Sign up
      • Log in
      • Report this content
      • View site in Reader
      • Manage subscriptions
      • Collapse this bar
     

    Loading Comments...