All posts for the month June, 2024

As artificial intelligence and machine learning continue to intersect with sensitive research efforts, the Department of Homeland Security recommended increased communication and guidance to mitigate dangerous outcomes.
Artificial intelligence could help malicious actors unlock secrets to the development of weapons of mass destruction, particularly chemical and biological threats, according to a Department of Homeland Security report publicly released last week.
The full report, which was teed up in April with the release of a fact sheet, examines the role of AI in aiding but also thwarting efforts by adversaries to research, develop and use chemical, biological, radiological and nuclear weapons. The report was required under President Joe Biden’s October 2023 executive order on AI.
“The increased proliferation and capabilities of AI tools … may lead to significant changes in the landscape of threats to U.S. national security over time, including by influencing the means, accessibility, or likelihood of a successful CBRN attack,” DHS’s Countering Weapons of Mass Destruction Office states in the report.
According to the report, “known limitations in existing U.S. biological and chemical security regulations and enforcement, when combined with increased use of AI tools, could increase the likelihood of both intentional and unintentional dangerous research outcomes that pose a risk to public health, economic security, or national security.”
Specifically, the report states that the proliferation of publicly available AI tools could lower the barrier to entry for malicious actors seeking information on the composition, development and delivery of chemical and biological weapons. While access to laboratory facilities is still a hurdle, the report notes that so-called “cloud labs” could allow threat actors to remotely develop components of weapons of mass destruction in the physical world, potentially under the cover of anonymity.
CWMD recommended that the U.S. develop guidance covering the “tactical exclusion and/or protection of sensitive chemical and biological data” from public training materials for large language models, as well as more oversight governing access to remote-controlled lab facilities.
The report also said that specific federal guidance is needed to govern how biological design tools and biological- and chemical-specific foundation models are used. This guidance would ideally include “granular release practices” for source code and specification for the weight calculations used to build a relevant language model.
More generally, the report seeks the development of consensus within U.S. government regulatory agencies on how to manage AI and machine learning technologies, in particular as they intersect with chemical and biological research.
Other recommendations include incorporating “safe harbor” vulnerability reporting practices into organizational procedures, practicing internal evaluation and red-teaming, cultivating a broader culture of responsibility among expert life science communities, and responsibly investigating the benefits AI and machine learning could have in biological, chemical and nuclear contexts.
The report also envisions a role for AI in mitigating existing CBRN risks through threat detection and response, including via disease surveillance, diagnostics, “and many other applications the national security and public health communities have not identified.”
While the findings in this report are not enforceable mandates, DHS said that the contents will help shape future policy and objectives within the CWMD office.
“Moving forward, CWMD will explore how to operationalize the report’s recommendations through existing federal government coordination groups and associated efforts led by the White House,” a DHS spokesperson told Nextgov/FCW. “The Office will integrate AI analysis into established threat and risk assessments as well as into the planning and acquisition that it performs on behalf of federal, state, local, tribal and territorial partners.”
Article link: https://www.nextgov.com/artificial-intelligence/2024/06/dhs-report-details-ais-potential-amplify-biological-chemical-threats/397607/?
These are the Top 10 Emerging #Technologies which could significantly impact #society and the #economy in the next 3-5 years. This World Economic Forum report, produced in collaboration with Frontiers, draws on the expertise of scientists, researchers and futurists and covers applications in health, communication, infrastructure and sustainability. Learn more about #emergingtech24 here: https://lnkd.in/e9qfH9Mz #AMNC2

The Top 10 Emerging Technologies report is a vital source of strategic intelligence. First published in 2011, it draws on insights from scientists, researchers and futurists to identify 10 technologies poised to significantly influence societies and economies. These emerging technologies are disruptive, attractive to investors and researchers, and expected to achieve considerable scale within five years. This edition expands its analysis by involving over 300 experts from the Forum’s Global Future Councils and a global network of more than 2,000 chief editors from top institutions, convened through Frontiers, a leading publisher of academic research.
Article link: https://www.weforum.org/publications/top-10-emerging-technologies-2024/
Engineers use the high-fidelity models to monitor operations, plan fixes, and troubleshoot problems.
By Sarah Scoles
June 10, 2024

In January 2022, NASA’s $10 billion James Webb Space Telescope was approaching the end of its one-million-mile trip from Earth. But reaching its orbital spot would be just one part of its treacherous journey. To ready itself for observations, the spacecraft had to unfold itself in a complicated choreography that, according to its engineers’ calculations, had 344 different ways to fail. A sunshield the size of a tennis court had to deploy exactly right, ending up like a giant shiny kite beneath the telescope. A secondary mirror had to swing down into the perfect position, relying on three legs to hold it nearly 25 feet from the main mirror.
Finally, that main mirror—its 18 hexagonal pieces nestled together as in a honeycomb—had to assemble itself. Three golden mirror segments had to unfold from each side of the telescope, notching their edges against the 12 already fitted together. The sequence had to go perfectly for the telescope to work as intended.
“That was a scary time,” says Karen Casey, a technical director for Raytheon’s Air and Space Defense Systems business, which built the software that controls JWST’s movements and is now in charge of its flight operations.
Over the multiple days of choreography, engineers at Raytheon watched the events unfold as the telescope did. The telescope, beyond the moon’s orbit, was way too distant to be visible, even with powerful instruments. But the telescope was feeding data back to Earth in real time, and software near-simultaneously used that data to render a 3D video of how the process was going, as it was going. It was like watching a very nerve-racking movie.
The 3D video represented a “digital twin” of the complex telescope: a computer-based model of the actual instrument, based on information that the instrument provided. “This was just transformative—to be able to see it,” Casey says.
The team watched tensely, during JWST’s early days, as the 344 potential problems failed to make their appearance. At last, JWST was in its final shape and looked as it should—in space and onscreen. The digital twin has been updating itself ever since.
The concept of building a full-scale replica of such a complicated bit of kit wasn’t new to Raytheon, in part because of the company’s work in defense and intelligence, where digital twins are more popular than they are in astronomy.
JWST, though, was actually more complicated than many of those systems, so the advances its twin made possible will now feed back into that military side of the business. It’s the reverse of a more typical story, where national security pursuits push science forward. Space is where non-defense and defense technologies converge, says Dan Isaacs, chief technology officer for the Digital Twin Consortium, a professional working group, and digital twins are “at the very heart of these collaborative efforts.”
As the technology becomes more common, researchers are increasingly finding these twins to be productive members of scientific society—helping humans run the world’s most complicated instruments, while also revealing more about the world itself and the universe beyond.
800 million data points
The concept of digital twins was introduced in 2002 by Michael Grieves, a researcher whose work focused on business and manufacturing. He suggested that a digital model of a product, constantly updated with information from the real world, should accompany the physical item through its development.
But the term “digital twin” actually came from a NASA employee named John Vickers, who first used it in 2010 as part of a technology road map report for the space agency. Today, perhaps unsurprisingly, Grieves is head of the Digital Twins Institute, and Vickers is still with NASA, as its principal technologist.
Since those early days, technology has advanced, as it is wont to do. The Internet of Things has proliferated, hooking real-world sensors stuck to physical objects into the ethereal internet. Today, those devices number more than 15 billion, compared with mere millions in 2010. Computing power has continued to increase, and the cloud—more popular and powerful than it was in the previous decade—allows the makers of digital twins to scale their models up or down, or create more clones for experimentation, without investing in obscene amounts of hardware. Now, too, digital twins can incorporate artificial intelligence and machine learning to help make sense of the deluge of data points pouring in every second.
Out of those ingredients, Raytheon decided to build its JWST twin for the same reason it also works on defense twins: there was little room for error. “This was a no-fail mission,” says Casey. The twin tracks 800 million data points about its real-world sibling every day, using all those 0s and 1s to create a real-time video that’s easier for humans to monitor than many columns of numbers.
The JWST team uses the twin to monitor the observatory and also to predict the effects of changes like software updates. When testing these, engineers use an offline copy of the twin, upload hypothetical changes, and then watch what happens next. The group also uses an offline version to train operators and to troubleshoot IRL issues—the nature of which Casey declines to identify. “We call them anomalies,” she says.
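In outline, the pattern Casey describes is simple even if the real system is not: telemetry streams in, the twin's state updates, and an offline copy can absorb hypothetical changes without touching operations. The sketch below is a minimal illustration of that loop in Python; the class, channel names, and values are invented for clarity and are not Raytheon's software.

```python
# Minimal, hypothetical digital-twin loop: ingest telemetry, update state,
# and fork an offline copy for "what if" testing. All names are invented.
import copy
from dataclasses import dataclass, field

@dataclass
class TelescopeTwin:
    state: dict = field(default_factory=dict)  # latest value per telemetry channel

    def ingest(self, packet: dict) -> None:
        """Apply one telemetry packet (channel -> value) to the twin."""
        self.state.update(packet)

    def render_frame(self) -> str:
        """Stand-in for the 3D rendering step: summarize the current state."""
        return ", ".join(f"{k}={v}" for k, v in sorted(self.state.items()))

    def offline_copy(self) -> "TelescopeTwin":
        """Clone the twin so changes can be rehearsed without touching operations."""
        return copy.deepcopy(self)

twin = TelescopeTwin()
twin.ingest({"mirror_segment_07_angle_deg": 12.5, "sunshield_layer_3_tension": 0.98})
sandbox = twin.offline_copy()
sandbox.ingest({"mirror_segment_07_angle_deg": 13.1})  # hypothetical change; live twin untouched
print(twin.render_frame())
print(sandbox.render_frame())
```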
Science, defense, and beyond
JWST’s digital twin is not the first space-science instrument to have a simulated sibling. A digital twin of the Curiosity rover helped NASA solve the robot’s heat issues. At CERN, the European particle accelerator, digital twins help with detector development and more mundane tasks like monitoring cranes and ventilation systems. The European Space Agency wants to use Earth observation data to create a digital twin of the planet itself.
At the Gran Telescopio Canarias, the world’s largest single-mirror telescope, the scientific team started building a twin about two years ago—before they’d even heard the term. Back then, Luis Rodríguez, head of engineering, came to Romano Corradi, the observatory’s director. “He said that we should start to interconnect things,” says Corradi. They could snag principles from industry, suggested Rodríguez, where machines regularly communicate with each other and with computers, monitor their own states, and automate responses to those states.
The team started adding sensors that relayed information about the telescope and its environment. Understanding the environmental conditions around an observatory is “fundamental in order to operate a telescope,” says Corradi. Is it going to rain, for instance, and how is temperature affecting the scope’s focus?
After they had the sensors feeding data online, they created a 3D model of the telescope that rendered those facts visually. “The advantage is very clear for the workers,” says Rodríguez, referring to those operating the telescope. “It’s more easy to manage the telescope. The telescope in the past was really, really hard because it’s very complex.”
Right now, the Gran Telescopio twin just ingests the data, but the team is working toward a more interpretive approach, using AI to predict the instrument’s behavior. “With information you get in the digital twin, you do something in the real entity,” Corradi says. Eventually, they hope to have a “smart telescope” that responds automatically to its situation.
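To give a flavor of what that interpretive step could look like, the sketch below fits a toy model that predicts a focus correction from dome temperature, the kind of relationship Corradi alludes to. The sensor values, units, and linear model are all hypothetical and chosen for brevity; they are not the observatory's actual telemetry or method.

```python
# Hypothetical sketch: predict a focus correction from an environmental sensor
# so a "smart telescope" could act before image quality drifts.
import numpy as np

# Toy history of (dome temperature in deg C, measured focus offset in microns).
temps = np.array([8.0, 9.5, 11.0, 12.5, 14.0])
focus_offsets = np.array([-20.0, -12.0, -5.0, 3.0, 11.0])

# Fit focus_offset ~ a * temperature + b by least squares.
a, b = np.polyfit(temps, focus_offsets, deg=1)

def predicted_focus_offset(temperature_c: float) -> float:
    """Predict the correction to pre-apply for a forecast temperature."""
    return a * temperature_c + b

print(round(predicted_focus_offset(13.2), 1))  # e.g. pre-position the focuser for tonight
```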
Corradi says the team didn’t find out that what they were building had a name until they went to an Internet of Things conference last year. “We saw that there was a growing community in industry—and not in science, in industry—where everybody now is doing these digital twins,” he says.
The concept is, of course, creeping into science—as the particle accelerators and space agencies show. But it’s still got a firmer foothold at corporations. “Always the interest in industry precedes what happens in science,” says Corradi. But he thinks projects like theirs will continue to proliferate in the broader astronomy community. For instance, the group planning the proposed Thirty Meter Telescope, which would have a primary mirror made up of hundreds of segments, called to request a presentation on the technology. “We just anticipated a bit of what was already happening in the industry,” says Corradi.
The defense industry really loves digital twins. The Space Force, for instance, used one to plan Tetra 5, an experiment to refuel satellites. In 2022, the Space Force also gave Slingshot Aerospace a contract to create a digital twin of space itself, showing what’s going on in orbit to prepare for incidents like collisions.
Isaacs cites an example in which the Air Force sent a retired plane to a university so researchers could develop a “fatigue profile”—a kind of map of how the aircraft’s stresses, strains, and loads add up over time. A twin, made from that map, can help identify parts that could be replaced to extend the plane’s life, or to design a better plane in the future. Companies that work in both defense and science—common in the space industry in particular—thus have an advantage, in that they can port innovations from one department to another.
JWST’s twin, for instance, will have some relevance for projects on Raytheon’s defense side, where the company already works on digital twins of missile defense radars, air-launched cruise missiles, and aircraft. “We can reuse parts of it in other places,” Casey says. Any satellite the company tracks or sends commands to “could benefit from piece-parts of what we’ve done here.”
Some of the tools and processes Raytheon developed for the telescope, she continues, “can copy-paste to other programs.” And in that way, the JWST digital twin will probably have twins of its own.
Sarah Scoles is a Colorado-based science journalist and the author, most recently, of the book Countdown: The Blinding Future of Nuclear Weapons.
Article link: https://www.linkedin.com/posts/mit-technology-review_digital-twins-are-helping-scientists-run-activity-7210230903908790272-cIQ4?
Rob Morris, Forbes Councils Member
Forbes Business Council, Council Post

People have been using chatbots for decades, well before ChatGPT was released. One of the first chatbots was created in 1966, by Joseph Weizenbaum at MIT. It was called ELIZA, and it was designed to mimic the behaviors of a psychotherapist. Though Weizenbaum had no intention of using ELIZA for actual therapy (in fact, he rebelled against this idea), the concept was compelling.
Now, almost 60 years later, we are still imagining ways in which machines might help provide mental health support.
Indeed, AI offers many exciting new possibilities for mental health care. But understanding its benefits—while navigating its risks—is a complex challenge.
We can explore potential applications of AI and mental health by looking at two fundamental use cases: those related to the provider, and those related to the client.
Provider-Facing Opportunities
Training
AI can be used to help train mental health practitioners by simulating interactions between practitioners and clients. For instance, ReflexAI uses AI to create a safe training environment for crisis line volunteers. Instead of learning in the moment, with a real caller, volunteers can rehearse and refine their skills with an AI agent.
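As a rough sketch of how such a rehearsal loop might be wired up (not ReflexAI's actual system), the snippet below uses the OpenAI Python client to keep a language model in character as a simulated caller while a trainee types responses; the model name and prompt are placeholders.

```python
# Hypothetical role-play trainer for crisis-line volunteers. Not ReflexAI's
# system; the model name and system prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SIMULATED_CALLER = (
    "You are role-playing a distressed caller contacting a crisis line for a "
    "training exercise. Stay in character and respond realistically."
)

def caller_reply(history: list[dict], volunteer_message: str) -> str:
    """Send the trainee's message and return the simulated caller's next line."""
    history.append({"role": "user", "content": volunteer_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model could be used
        messages=[{"role": "system", "content": SIMULATED_CALLER}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(caller_reply(history, "Hi, thanks for calling. Can you tell me what's going on tonight?"))
```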
Quality Monitoring
AI can also help monitor conversations between providers and clients. It has always been difficult to assess whether providers are adhering to evidence-based practices. AI has the potential to provide immediate feedback and suggestions to help improve client-provider interactions. Lyssn applies this concept to various domains, including crisis services like 988.
Suggested Responses
AI can also provide in-the-moment advice, offering suggestions and resources for providers. This could be especially helpful for crisis counselors, where the need to provide timely and empathetic feedback is extremely important.
Detection
There is also research suggesting that some mental health conditions can be inferred from various signals, such as one’s tone of voice, speech patterns and facial expressions. These biomarkers have the potential to greatly facilitate screening for mental health. A good example is the work being done by Ellipsis Health.
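To make the idea of acoustic biomarkers concrete, here is a deliberately simplified sketch (not Ellipsis Health's method): summarize each labeled voice clip with standard MFCC features and fit a basic classifier. It assumes the librosa and scikit-learn libraries and labels drawn from validated clinical assessments.

```python
# Illustrative only: a screening-style classifier on voice features.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Summarize a clip as mean MFCCs, a common acoustic feature set."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def train_screening_model(paths: list[str], labels: list[int]) -> LogisticRegression:
    """labels: 1 = flagged for follow-up by a validated clinical assessment."""
    X = np.stack([voice_features(p) for p in paths])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```

Any real screening tool would, of course, need clinical validation, bias testing, and regulatory review far beyond a sketch like this.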
Administrative
While less exciting and attention-grabbing than other opportunities, the greatest potential for near-term impact might relate to easing administrative burden. Companies like Eleos Health are turning behavioral health conversations into automated documentation and detailed clinical insights.
Client-Facing Opportunities
Chatbots like Woebot and Wysa are already capable of delivering evidence-based mental health support. However, as of this writing, they do not use generative AI. Instead, they guide the user through carefully crafted pre-scripted interactions. Let’s call this ELIZA 2.0.
But there are now several startups exploring something like ELIZA 3.0, where generative AI conducts the entire therapeutic process. Generative AI offers the potential to provide rich, nuanced interactions with the user, ideally improving the effectiveness of online interventions.
Users are also given much more control of the experience, potentially redirecting the chatbot toward different therapeutic approaches, as needed. New startups are already seizing this opportunity. For example, Sonia uses large language models to mimic cognitive-behavioral therapy.
Other companies (such as Replika and Character.ai) are providing companion bots that form bonds with end users, offering kind words and support. These AI chatbots are not trained to deliver anything resembling traditional therapy, but some users believe they have therapeutic benefits.
Risks
It seems clear that AI is well-positioned to enhance many elements of mental health care. However, significant risks remain for nearly all of the opportunities described thus far.
For providers, AI could become a crutch—something that is increasingly relied on without human scrutiny. For example, current state-of-the-art models lack the situational awareness that human providers use to recognize complex and potentially very dangerous shifts in mood and behavior.
In high-stakes environments, such as crisis helplines, AI could cause dangerous unanticipated consequences. This has happened before. For instance, a simple helpline bot designed to treat eating disorders was accidentally connected to a generative AI model, without the awareness of the researchers who were involved. Unfortunately, the bot then proceeded to advocate unhealthy diets and exercise for individuals struggling with disordered eating. Here, the root failure was likely a coordination issue between different organizations, but it exemplifies ways in which AI could be dangerous in real-world deployments.
For clients, AI has the potential to mislead users into thinking they are receiving acceptable care. A therapist bot may contend that it has clinical training and an advanced degree. Most users will probably know this is false, but these platforms tend to attract young people, and some may decide not to seek further help for their mental health, believing these platforms already provide sufficient care.
There are, of course, many other issues to consider, such as data privacy.
As we continue to explore AI in mental health, it’s important to balance its potential benefits with careful consideration of the risks to ensure it truly helps those in need.
Article link: https://www.forbes.com/sites/forbesbusinesscouncil/2024/06/21/the-evolution-of-ai-and-mental-healthcare/
2024-06-14

The US government’s CHIPS and Science Act is reportedly injecting funds into chip manufacturing at an unprecedented rate. According to a recent report by the U.S. Census Bureau, the growth rate of construction funding for computer and electrical manufacturing is remarkably high. The amount of money the government is pouring into this industry in 2024 alone is equivalent to the total of the previous 27 years combined.
Due to the substantial funding provided by the U.S. CHIPS Act, the construction industry in the United States is experiencing explosive growth. Companies such as TSMC, Intel, Samsung, and Micron have received billions of dollars to build new plants in the U.S.

Research by the Semiconductor Industry Association indicates that the U.S. will triple its domestic semiconductor manufacturing capacity by 2032. It is also projected that by the same year, the U.S. will account for 28% of the world’s advanced logic (below 10nm) manufacturing capacity, surpassing the goal of producing 20% of the world’s advanced chips announced by U.S. Commerce Secretary Gina Raimondo.
New plant construction is currently underway, but despite the enormous expenditures there have been delays across the United States, affecting plants from Samsung, TSMC, and Intel.
Notably, a previous report from South Korean media BusinessKorea revealed Samsung has postponed the mass production timeline of the fab in Taylor, Texas, US from late 2024 to 2026. Similarly, a report from TechNews, which cited a research report from the Center for Security and Emerging Technology (CSET), noted the postponement of the production of two plants in Arizona, US. Additionally, Intel, as per a previous report from the Wall Street Journal (WSJ), was also said to be delaying the construction timetable for its chip-manufacturing project in Ohio.
Jeremiah Budin
Sat, June 15, 2024 at 5:00 AM EDT

Scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley may have just unlocked the secret to making electronic devices smaller and more efficient, using tiny electronic components called microcapacitors, SciTechDaily reported.
The microcapacitors could allow energy to be stored directly on the devices’ microchips, minimizing the energy losses that occur as energy is transferred between the devices’ different parts.
The Berkeley scientists engineered the microcapacitors from thin films of hafnium oxide and zirconium oxide, achieving record-high energy and power densities. They published their findings in the journal Nature.
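As textbook background on what those figures mean (not the paper's specific values), a capacitor's stored energy and volumetric energy density follow the standard relations below, which is why thin films of high-permittivity materials like hafnium oxide and zirconium oxide are attractive:

```latex
U = \tfrac{1}{2} C V^{2}, \qquad
C = \frac{\varepsilon_0 \varepsilon_r A}{d}, \qquad
u = \frac{U}{A\,d} = \tfrac{1}{2}\,\varepsilon_0 \varepsilon_r E^{2},
\quad E = \frac{V}{d}.
```

Shrinking the film thickness d raises capacitance per unit area, and a higher relative permittivity raises the energy stored per unit volume at a given electric field.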
As with many scientific discoveries, the researchers had been working on the problem for years but were surprised by how good their results were.
“The energy and power density we got are much higher than we expected,” said Sayeef Salahuddin, the UC Berkeley professor who led the project. “We’ve been developing negative capacitance materials for many years, but these results were quite surprising.”
The discovery could lead directly to smaller and more efficient electronic devices, such as phones, sensors, personal computers, and more.
“With this technology, we can finally start to realize energy storage and power delivery seamlessly integrated on-chip in very small sizes,” said Suraj Cheema, one of the researchers and co-lead author of the paper. “It can open up a new realm of energy technologies for microelectronics.”
Much of the research happening in and around the realm of clean energy is currently focused on improving battery technology — and as we have seen, this problem can be attacked from a wide variety of angles, from making batteries more efficient, to creating them using less environmentally harmful materials, to devising ways to manufacture them more cheaply.
All of these approaches, in their own way, aid in the efforts to move beyond the dirty energy sources — mainly gas and oil — that contribute massive amounts of air pollution and have led to the overheating of our planet. By making clean energy and battery storage more viable, any one of these breakthroughs could potentially have a huge positive impact.
Article link: https://www.yahoo.com/tech/scientists-significant-breakthrough-microchip-technology-090000848.html

By: Troy Tazbaz, Director, Digital Health Center of Excellence (DHCoE), Center for Devices and Radiological Health, U.S. Food and Drug Administration

Artificial intelligence (AI) is rapidly changing the health care industry and holds transformative potential. AI could significantly improve patient care and medical professional satisfaction and accelerate and advance research in medical device development and drug discovery.
AI also has the potential to drive operational efficiency by enabling personalized treatments and streamlining health care processes.
At the FDA, we know that appropriate integration of AI across the health care ecosystem will be paramount to achieving its potential while reducing risks and challenges. The FDA is working across medical product centers regarding the development and use of AI as we noted in the paper, Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together.
DHCoE wants to foster responsible AI innovations in health care while ensuring these technologies, when intended for use as medical devices, are safe and effective for the end-users, including patients. Additionally, we seek to foster a collaborative approach and alignment within the health care ecosystem around AI in health care.
AI Development Lifecycle Framework Can Reduce Risk
There are several ways to achieve this. First, agreeing on and adopting standards and best practices at the health care sector level for the AI development lifecycle, as well as risk management frameworks, can help address risks associated with the various phases of an AI model.
This includes, for instance, approaches to ensure that data suitability, collection, and quality match the intent and risk profile of the AI model that is being trained. This could significantly reduce the risks of these models and support their providing appropriate, accurate, and beneficial recommendations.
Additionally, the health care community could agree on common methodologies for providing a diverse range of end users (including patients) with information on how the model was trained, deployed, and managed through robust monitoring tools and operational discipline. This includes clear communication of the model’s reasoning, which will help build the trust and assurance people and organizations need to adopt AI successfully.
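One hypothetical way to make that shared documentation concrete is a structured, machine-readable record that travels with the model; the fields below are illustrative only and are not an FDA-specified format.

```python
# Illustrative "model card"-style record; fields are hypothetical, not an
# FDA-specified format.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    intended_use: str
    training_data_sources: list[str]
    population_represented: str
    deployment_setting: str
    monitoring_plan: str
    known_limitations: list[str] = field(default_factory=list)

card = ModelDocumentation(
    intended_use="Flag possible deterioration risk for clinician review (example only)",
    training_data_sources=["De-identified EHR records, 2018-2023 (example)"],
    population_represented="Adult inpatients across four hospital systems (example)",
    deployment_setting="Decision support inside the EHR, clinician in the loop",
    monitoring_plan="Monthly review of sensitivity, specificity, and data drift",
    known_limitations=["Not validated for pediatric patients"],
)
```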
Enabling Quality Assurance of AI in Health Care
To positively impact clinical outcomes with the use of AI models that are accurate, reliable, ethical, and equitable, development of a quality assurance practice for AI models should be a priority.
Top of mind for device safety is quality assurance applied across the lifecycle of a model’s development and use in health care. Continuous performance monitoring before, during, and after deployment is one way to accomplish this, as is identifying data quality and performance issues before the model’s performance becomes unsatisfactory.
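A minimal sketch of that kind of continuous check might look like the following: compare live performance against a validation baseline and flag the model for review before the degradation becomes meaningful. The metric and threshold are placeholders, not FDA requirements.

```python
# Hypothetical post-deployment performance check; metric and threshold are placeholders.
def performance_alert(baseline_auc: float, recent_auc: float, max_drop: float = 0.05) -> bool:
    """Return True if recent performance has degraded beyond the allowed drop."""
    return (baseline_auc - recent_auc) > max_drop

# Example: baseline AUC 0.91 from validation, 0.84 measured on last month's cases.
if performance_alert(baseline_auc=0.91, recent_auc=0.84):
    print("Performance drop exceeds threshold; pause and review before continued use.")
```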
How can we go about achieving our shared goals of assurance, quality, and safety? At the FDA, we’ve discussed several concepts to help promote this process for the use of AI in medical devices. We plan to expand on these concepts through future publications like this one, including discussing the following topics:
- Standards, best practices, and operational tools
- Quality assurance laboratories
- Transparency and accountability
- Risk management for AI models in health care
Generally, standards, best practices, and tools can help support responsible AI development, and can help provide clinicians, patients, and other end users with quality assurance for the products they need.
Principles such as transparency and accountability can help stakeholders feel comfortable with AI technologies. Quality assurance and risk management, right-sized for health care institutions of all sizes, can help provide confidence that AI models are developed, tested, and evaluated on data that is representative of the population for which they are intended.
Shared Responsibility on AI Quality Assurance is Essential to Success
Efforts around AI quality assurance have sprung up at a grassroots level across the U.S. and are starting to bear fruit. Solution developers, health care organizations, and the U.S. federal government are working to explore and develop best practices for quality assurance of AI in health care settings.
These efforts, combined with FDA activities relating to AI-enabled devices, may lead to a world in which AI in health care settings is safe, clinically useful, and aligned with patient safety and improvement in clinical outcomes.
Our “virtual” doors at the DHCoE are always open, and we welcome your comments and feedback related to AI use in health care. Email us at digitalhealth@fda.hhs.gov, noting “Attn: AI in health care,” in the subject line.
Article link: https://www.linkedin.com/posts/fda_proper-integration-of-ai-across-the-health-activity-7208524345944485888-TfVQ?
Nov. 27, 2023
This essay was originally presented at an AI Cyber Lunch on September 20, 2023, at Harvard Kennedy School.
I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.
Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.
In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.
Okay, so let’s back up and take that all a lot slower. Trust is a complicated concept, and the word is overloaded with many meanings. There’s personal and intimate trust. When we say that we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. We trust their intentions, and know that those intentions will inform their actions. Let’s call this “interpersonal trust.”
There’s also the less intimate, less personal trust. We might not know someone personally, or know their motivations—but we can trust their behavior. We don’t know whether or not someone wants to steal, but maybe we can trust that they won’t. It’s really more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.
Interpersonal trust and social trust are both essential in society today. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This, in turn, allows others to be trusting. Which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always going to be untrustworthy people—but most of us being trustworthy most of the time is good enough.
I wrote about this in 2012 in a book called Liars and Outliers. I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.
What I didn’t appreciate is how different the first and last two are. Morals and reputation are person to person, based on human connection, mutual vulnerability, respect, integrity, generosity, and a lot of other things besides. These underpin interpersonal trust. Laws and security technologies are systems of trust that force us to act trustworthy. And they’re the basis of social trust.
Taxi driver used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance and are competing for star rankings.
Lots of people write about the difference between living in a high-trust and a low-trust society. How reliability and predictability make everything easier. And what is lost when society doesn’t have those characteristics. Also, how societies move from high-trust to low-trust and vice versa. This is all about social trust.
That literature is important, but for this talk the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options to choose from.
Social trust scales better, but embeds all sorts of bias and prejudice. That’s because, in order to scale, social trust has to be structured, system- and rule-oriented, and that’s where the bias gets embedded. And the system has to be mostly blinded to context, which removes flexibility.
But that scale is vital. In today’s society we regularly trust—or not—governments, corporations, brands, organizations, groups. It’s not so much that I trusted the particular pilot that flew my airplane, but instead the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under. I can’t even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.
Think of that restaurant again. Imagine that it’s a fast-food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that guide their every behavior.
That’s the difference. You can ask a friend to deliver a package across town. Or you can pay the Post Office to do the same thing. The former is interpersonal trust, based on morals and reputation. You know your friend and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become the global package delivery system that is FedEx.
Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.
But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.
And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.
We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.
So corporations regularly take advantage of their customers, mistreat their workers, pollute the environment, and lobby for changes in law so they can do even more of these things.
Both language and the laws make this an easy category error to make. We use the same grammar for people and corporations. We imagine that we have personal relationships with brands. We give corporations some of the same rights as people.
Corporations like that we make this category error—see, I just made it myself—because they profit when we think of them as friends. They use mascots and spokesmodels. They have social media accounts with personalities. They refer to themselves like they are people.
But they are not our friends. Corporations are not capable of having that kind of relationship.
We are about to make the same category error with AI. We’re going to think of them as our friends when they’re not.
A lot has been written about AIs as existential risk. The worry is that they will have a goal, and they will work to achieve it even if it harms humans in the process. You may have read about the “paperclip maximizer”: an AI that has been programmed to make as many paper clips as possible, and ends up destroying the earth to achieve those ends. It’s a weird fear. Science fiction author Ted Chiang writes about it. Instead of solving all of humanity’s problems, or wandering off proving mathematical theorems that no one understands, the AI single-mindedly pursues the goal of maximizing production. Chiang’s point is that this is every corporation’s business plan. And that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes this one step further, and calls corporations “slow AI.” They are profit maximizing machines. And the most successful ones do whatever they can to achieve that singular goal.
And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.
This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.
Your Google search results lead with URLs that someone paid to show to you. Your Facebook and Instagram feeds are filled with sponsored posts. Amazon searches return pages of products whose sellers paid for placement.
This is how the Internet works. Companies spy on us as we use their products and services. Data brokers buy that surveillance data from the smaller companies, and assemble detailed dossiers on us. Then they sell that information back to those and other companies, who combine it with data they collect in order to manipulate our behavior to serve their interests. At the expense of our own.
We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.
It’s going to be no different with AI. And the result will be much worse, for two reasons.
The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.
This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.
The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.
The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.
More specifically, we tend to assume that something’s implementation is the same as its interface. That is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we’re people through and through. A government is systemic and bureaucratic on the inside. You’re not going to mistake it for a person when you interact with it. But this is the category error we make with corporations. We sometimes mistake the organization for its spokesperson. AI has a fully relational interface—it talks like a person—but it has an equally fully systemic implementation. Like a corporation, but much more so. The implementation and interface are more divergent than in anything we have encountered to date…by a lot.
And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.
It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.
We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. It could be designed to be less personal, less human-like, more obviously a service—like a search engine. The companies behind those AIs want you to make the friend/service category error. It will exploit your mistaking it for a friend. And you might not have any choice but to use it.
There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police, because they’re the only law enforcement authority in town. We are forced to trust some corporations, because there aren’t viable alternatives. To be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. We will have no choice but to entrust ourselves to their decision-making.
The friend/service confusion will help mask this power differential. We will forget how powerful the corporation behind the AI is, because we will be fixated on the person we think the AI is.
So far, we have been talking about one particular failure that results from overly trusting AI. We can call it something like “hidden exploitation.” There are others. There’s outright fraud, where the AI is actually trying to steal stuff from you. There’s the more prosaic mistaken expertise, where you think the AI is more knowledgeable than it is because it acts confidently. There’s incompetency, where you believe that the AI can do something it can’t. There’s inconsistency, where you mistakenly expect the AI to be able to repeat its behaviors. And there’s illegality, where you mistakenly trust the AI to obey the law. There are probably more ways trusting an AI can fail.
All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.
The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.
It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.
The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details. Places where governments don’t provide these things are not good places to live.
Government can do this with AI. We need AI transparency laws. When it is used. How it is trained. What biases and tendencies it has. We need laws regulating AI—and robotic—safety. When it is permitted to affect the world. We need laws that enforce the trustworthiness of AI. Which means the ability to recognize when those laws are being broken. And penalties sufficiently large to incent trustworthy behavior.
Many countries are contemplating AI safety and security laws—the EU is the furthest along—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.
AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do. Regardless of whether it was due to humans, or AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need to require trustworthy AI controllers.
We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.
And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.
The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.
A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.
We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.
Because the point of government is to create social trust. I started this talk by explaining the importance of trust in society, and how interpersonal trust doesn’t scale to larger groups. That other, impersonal kind of trust—social trust, reliability and predictability—is what governments create.
To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.
But they have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.
That’s how we can create the social trust that society needs to thrive.
For Academic Citation: Schneier, Bruce. “AI and Trust.” Belfer Center for Science and International Affairs, Harvard Kennedy School, November 27, 2023.
Arianna Johnson, Forbes Staff
Johnson is a reporter on the Forbes news desk
Pope Francis is set to attend the G7 summit on Friday and is expected to urge world leaders to adopt AI regulations, a subject the Pope has spoken about several times in the past, including after he was the subject of viral AI-generated images that many believed were real.

KEY FACTS
The Vatican announced Pope Francis would attend the Group of 7 conference in Italy on Friday to discuss ethical concerns surrounding artificial intelligence during a session dedicated to AI, becoming the first pope to participate in the summit of leaders.
The Pope fell victim to AI in the past: AI-generated deepfake images of the Pope in a white puffer jacket and bedazzled crucifix—dubbed the “Balenciaga Pope”—went viral last year and racked up millions of views online, causing some people to believe the pictures were real.
He spoke about the fake images during a speech in Vatican City in January, warning about the rise of “images that appear perfectly plausible but false (I too have been an object of this).”
Pope Francis has spoken out about the danger of AI before, and he’s expected to urge world leaders at the G7 conference to work together to create AI regulations.
During the G7 meetings, Italy is expected to advocate for the development of homegrown AI systems in African countries, further work is expected to be done on the Hiroshima Process—a G7 effort to safeguard the use of generative AI—and leaders from places like the U.S. and the U.K. are expected to promote AI regulations introduced in their countries, according to Politico.
Giorgia Meloni, Italy’s prime minister, said in a statement in April the Pope was invited to the G7 conference to help “make a decisive contribution to defining a regulatory, ethical and cultural framework for artificial intelligence.”
The Vatican also announced Pope Francis will have bilateral conversations with leaders of other countries, including President Joe Biden, President William Ruto of Kenya and India’s Prime Minister Narendra Modi.
KEY BACKGROUND
The Pope has been speaking out about the need for artificial intelligence regulation for years. The Vatican has been promoting the “Rome Call for AI Ethics” since 2020, which lays out six principles for AI ethics: transparency, inclusion, impartiality, responsibility, reliability, and security and privacy. As part of the August 2023 announcement for this year’s World Day of Peace of the Catholic Church—which was held on Jan. 1—the Pope warned of the dangers of AI, saying it should be used as a “service of humanity.” He called for “an open dialogue on the meaning of these new technologies, endowed with disruptive possibilities and ambivalent effects.” In December 2023, the Pope called for an international treaty to regulate AI as part of his World Day of Peace message. He urged world leaders to “adopt a binding international treaty” to regulate AI development, adding it shouldn’t just focus on preventing harm, but should also encourage “best practices.” The Pope noted that although advancements in technology and science lead to the betterment of humanity, they can also give humans “unprecedented control over reality.”
TANGENT
Italy—one of the G7 summit’s rotating hosts—became the first country to temporarily ban AI chatbot ChatGPT in March 2023 after Garante, an Italian data protection regulator, claimed the chatbot violated the European Union’s privacy laws. Garante claimed ChatGPT exposed payment information and messages, and allowed children to access inappropriate information. Other countries that have passed or introduced laws regulating AI include Australia, China, the European Union, the U.S., Japan and the U.K.
FURTHER READING
Pope Warns Artificial Intelligence Could ‘Fuel Conflicts And Antagonism’ (Forbes)