The Office of the Secretary of Defense is requesting $75 million in fiscal 2024 to launch a new program intended both to accelerate the commercialization and operationalization of quantum devices for Pentagon purposes and to mature the U.S. supply chain underpinning emerging quantum technologies.
Tucked into the Defense Department’s latest batch of budget justification documents, this new-start project is referred to as Quantum Transition Acceleration.
“The [DOD’s] research and development of quantum technologies is critical to maintaining the nation’s technological superiority,” officials wrote in the Defense-wide justification book for fiscal 2024 budget estimates.
Broadly, quantum information science (QIS) encompasses the investigation and application of complex phenomena happening at atomic and subatomic levels to process and transmit information.
Experts largely predict that this field will enable disruptive, transformational science, engineering and communication applications in the not-so-distant future.
“Quantum technology is approaching a tipping point that will determine how quickly it can make an impact. If the [U.S.] can stay on pace, many important outcomes for the [DOD] can be realized including robust position, navigation and timing for DOD freedom of operations with precision strike even with contests in spectrum, space, or cyber operations,” Pentagon officials wrote in the budget justification documents.
They further noted that quantum computation could lead to “rapid advances in materials and chemistry for advanced energetics, propulsion, and platform coatings” — as well as enable nascent optimization techniques for stealth properties, logistics and machine learning.
Quantum tech might also drastically enhance electromagnetic spectrum capabilities, which they said holds promise to supply DOD with “significant advantages” associated with electronic warfare, intelligence collection and more.
The department recognizes, however, that for a number of reasons the maturation of quantum technologies for defense applications is at "risk" of slowing down.
“Two challenges and barriers to implementation are: component and supply chain maturity of bleeding-edge capability in photonics, including lasers, active light manipulation, light delivery, and packaging; and misalignment of government with industry regarding quantum technology development priorities, maturity time-line realism, and technology protection strategy,” officials wrote in the budget justification documents.
The Pentagon seeks to alleviate those major issues via the new Quantum Transition Acceleration project.
Of the $75 million requested for fiscal 2024 to fund that work, $45 million would be used for “maturing, demonstrating, and transitioning quantum inertial sensors, gravity sensors, atomic clocks, and quantum electro-magnetic sensors,” officials wrote, noting that those specific technologies would be “sourced from existing projects that have already demonstrated performance advantages.”
The other $30 million would focus on “identifying, developing and maturing critical components supporting technology for atomic clocks, quantum sensors, and quantum computers” — and ultimately help “accelerate the transition of laboratory-scale systems to manufacturable commercial products,” per the budget justification documents.
On top of the $75 million requested for the Quantum Transition Acceleration initiative in fiscal 2024, the department also projects that it will request $100 million per year in the fiscal 2025-2028 time frame to continue to push it forward.
“This investment will be key to help the U.S. stay competitive with other nations, not only in quantum computing, but also in many other quantum-enabled technologies — such as entanglement-based sensing capabilities, secure communications and computing, secure access to the quantum cloud, and many more applications — which have key national security implications,” University of Arizona professor Saikat Guha told DefenseScoop in an email on Wednesday.
A leading expert in this field, Guha also serves as the director of the National Science Foundation’s Engineering Research Center for Quantum Networks, or CQN. This week, he’s hosting the Arizona Quantum Initiative Inaugural Workshop to bring together those interested in the technology, and spotlight some of the university’s latest research findings.
Guha and other University of Arizona-affiliated scientists and students are working directly with multiple Pentagon components to generate and deploy quantum-enabled solutions.
“Although the technologies are in various stages of development, we envision some to have an impact in DOD’s capabilities and provide the U.S. a leading edge over its adversaries,” he told DefenseScoop.
With the Office of Naval Research, for example, Guha and his team are “using squeezed light — a form of light whose properties can only be described by the quantum theory of optics — to enhance the sensitivity of multiple photonic sensor modalities,” he said. That work could eventually lead to fiber-optic gyroscopes for position and navigation in GPS-denied environments and quantum-enhanced radio-frequency photonic antennas to detect hidden signals that are inaccessible otherwise, among other nascent capabilities.
There are other ongoing, quantum-focused efforts the university is leading with the Army, Defense Advanced Research Projects Agency and others.
While he was pleased to see the new-start quantum program’s inclusion in the 2024 defense budget estimate, Guha said he “would love to see more concerted investments and programs tailored to transitioning technologies to the end users — both in the government as well as the industry.”
“In my experience, there is a lot of very high impact work that comes out of [DARPA’s Defense Sciences Office] programs, which do not make their way to the transition partners in a natural way. Many quantum-enabled technologies are ready for such transition,” Guha told DefenseScoop.
Businesses put an awful lot of effort into meeting the diverse needs of their stakeholders — customers, investors, employees, and society at large. But they’re not paying enough attention to one ingredient that’s crucial to productive relationships with those stakeholders: trust.
Trust, as defined by organizational scholars, is our willingness to be vulnerable to the actions of others because we believe they have good intentions and will behave well toward us. In other words, we let others have power over us because we think they won’t hurt us and will in fact help us. When we decide to interact with a company, we believe it won’t deceive us or abuse its relationship with us. However, trust is a double-edged sword. Our willingness to be vulnerable also means that our trust can be betrayed. And over and over, businesses have betrayed stakeholders’ trust.
Consider Facebook. In April 2018, CEO Mark Zuckerberg came before Congress and was questioned about Facebook’s commitment to data privacy after it came to light that the company had exposed the personal data of 87 million users to the political consultant Cambridge Analytica, which used it to target voters during the 2016 U.S. presidential election. Then, in September, Facebook admitted that hackers had gained access to the log-in information of 50 million of its users. The year closed out with a New York Times investigation revealing that Facebook had given Netflix, Spotify, Microsoft, Yahoo, and Amazon access to its users’ personal data, including in some cases their private messages.
So, in the middle of last year, when Zuckerberg announced that Facebook would launch a dating app, observers shook their heads. And this past April, when the company announced it was releasing an app that allowed people to share photos and make video calls on its smart-home gadget Portal, TechCrunch observed that “critics were mostly surprised by the device’s quality but too freaked out to recommend it.” Why would we trust Facebook with personal data on something as sensitive as dating — or with a camera and microphone — given its horrible track record?
Volkswagen is still struggling with the aftermath of the 2015 revelation that it cheated on emissions tests. United Airlines has yet to fully recover from two self-inflicted wounds: getting security to drag a doctor off a plane after he resisted giving up his seat in 2017, and the death of a puppy on a plane in 2018 after a flight attendant insisted its owner put it in an overhead bin. In the spring of 2019 Boeing had to be forced by a presidential order to ground its 737 Max jets in the United States, even though crashes had killed everyone on board two planes in five months and some 42 other countries had forbidden the jets to fly. Later the news broke that Boeing had known there was a problem with the jet’s safety features as early as 2017 but failed to disclose it. Now, customers, pilots and crew, and regulators all over the world are wondering why they should trust Boeing. Whose interests was it serving?
Betrayals of trust have major financial consequences. In 2018 the Economist studied eight of the largest recent business scandals, comparing the companies involved with their peer groups, and found that they had forfeited significant amounts of value. The median firm was worth 30% less than it would have been valued had it not experienced a scandal. That same year another study, by IBM Security and Ponemon Institute, put the average cost of a data breach at $3.86 million, a 6.4% increase over the year before, and calculated that on average each stolen record cost a company $148.
Creating trust, in contrast, lifts performance. In a 1999 study of Holiday Inns, 6,500 employees rated their trust in their managers on a scale of 1 to 5. The researchers found that a one-eighth point improvement in scores could be expected to increase an inn’s annual profits by 2.5% of revenues, or $250,000 more per hotel. No other aspect of managers’ behavior had such a large impact on profits.
Trust also has macro-level benefits. A 1997 study of 29 market economies across one decade by World Bank economists showed that a 10-percentage-point increase in trust in an environment was correlated with a 0.8-percentage-point bump in per capita income growth.
So our need to trust and be trusted has a very real economic impact. More than that, it deeply affects the fabric of society. If we can’t trust other people, we’ll avoid interacting with them, which will make it hard to build anything, solve problems, or innovate.
Building trust isn’t glamorous or easy. And at times it involves making complex decisions and difficult trade-offs.
In her 15 years of research into what trusted companies do, Sandra has found — no surprise — that they have strong relationships with all their main stakeholders. But the behaviors and processes that built those relationships were surprising. She has distilled her findings into a framework that can help companies nurture and maintain trust. It explains the basic promises stakeholders expect a company to keep, the four ways they evaluate companies for trustworthiness, and five myths that prevent companies from rebuilding trust.
What Stakeholders Want
Companies can’t build trust unless they understand the fundamental promises they make to stakeholders. Firms have three kinds of responsibilities: Economically, people count on them to provide value. Legally, people expect them to follow not just the letter of the law but also its spirit. Ethically, people want companies to pursue moral ends, through moral means, for moral motives.
What this looks like varies with each kind of stakeholder. To customers, for instance, economic value means creating products and services that enhance their lives; to employees, it means a livelihood; to investors, it means returns; and to society, it means both fulfilling important needs and providing growth and prosperity. Here’s the complete set of stakeholder expectations:
The Fundamental Promises of Business
STAKEHOLDER: CUSTOMERS
Economic promise: To provide products and services that enhance their lives
Legal promise: To follow consumer protection laws and industry regulations
Ethical promise: To make good on commitments; to disclose risks; to remediate mistakes made or harm done

STAKEHOLDER: EMPLOYEES
Economic promise: To provide a livelihood (pay, benefits, training, opportunity)
Legal promise: To follow labor, antidiscrimination, and workplace safety laws
Ethical promise: To provide safe work conditions and job security; to treat everyone fairly

STAKEHOLDER: INVESTORS
Economic promise: To provide returns; to manage risk
Legal promise: To fulfill fiduciary duties; to disclose material information
Ethical promise: To oversee employees’ conduct; to abstain from insider trading and self-dealing

STAKEHOLDER: SOCIETY
Economic promise: To offer employment and economic development; to fulfill important needs
Legal promise: To follow local and federal laws; to work with regulators
Ethical promise: To protect public health, the environment, and the local community; to set industry standards
Of course, expectations can vary within a stakeholder group, leading to ambiguity about what companies need to live up to. Investors are a prime example. Some believe the only duty of a company is to maximize shareholder returns, while others think companies have an obligation to create positive societal effects by employing sound environmental, social, and governance practices.
How Stakeholders Evaluate Trust
Trust is multifaceted: Not only do stakeholders depend on businesses for different things, but they may trust an organization in some ways but not others. To judge the worthiness of companies, stakeholders continually ask four questions. Let’s look at each in turn.
Is the company competent?
At the most fundamental level companies are evaluated on their ability to create and deliver a product or service. There are two aspects to this:
Technical competence refers to the nuts and bolts of developing, manufacturing, and selling products or services. It includes the ability to innovate, to harness technological advances, and to marshal resources and talent.
Social competence involves understanding the business environment and sensing and responding to changes. A company must have insight into different markets and what offerings may be attractive to them now and in the future. It also needs to recognize how competition is shifting and know how to work with partners such as suppliers, government authorities, regulators, NGOs, the media, and unions.
In the short term technical competence wins customers, but in the long run social competence is necessary to build a company that can navigate a constantly evolving business landscape.
Consider Uber. The company has weathered an avalanche of scandals, including reports of sexual harassment, a toxic corporate culture, and shady business practices in 2017, which led to CEO Travis Kalanick’s departure. Uber’s losses that year came to $4.5 billion. And yet, by the end of 2018, Uber was operating in 63 countries and had 91 million active monthly users. We love Uber, we hate Uber, and yet we keep using Uber.
We keep using Uber not because we don’t care about its mistakes but because Uber fills a need and does it well. Consumers trust that when they request a ride through Uber, a car will arrive to pick them up. We forget how difficult that is to do. In 2007, two years before Uber’s launch, an app called Taxi Magic entered the market. Taxi Magic worked with fleet owners, and drivers leased cars from the fleet owners, so there was little accountability. If a cab saw another passenger on its way to pick up a Taxi Magic rider, it might abandon the Taxi Magic customer. In 2009, another start-up, Cabulous, also created an app that people could use to book rides. However, that app often didn’t work, and Cabulous had no means of regulating supply and demand, so taxi drivers wouldn’t turn on the app when they were busy. Neither business achieved anything on the scale of Uber. We might have mixed feelings about Uber’s surge pricing, but it helps make sure there are enough drivers on the road to meet demand.
Meanwhile, on a social level Uber has managed to transform the taxi industry. Before Uber, cities limited the number of taxis in the streets by requiring drivers to purchase medallions. In 2013, a medallion in New York City could cost as much as $1.32 million. Such sky-high prices made it difficult for newcomers to enter the market, and lack of competition meant drivers had little incentive to provide good service. Uber brought new drivers into the market, improved service, and increased accessibility to rides in areas with limited taxi coverage.
Still, we use Uber with mixed feelings. Uber achieved much of its growth by quickly acquiring capital, which allowed it to develop technology for fast pickups and to offer drivers high pay and riders low fares. At the same time it was a ruthless competitor that reportedly was not above using underhanded tactics, such as ordering and then cancelling Lyft rides (a charge Uber denied) and misleading drivers about their potential earnings.
We don’t trust Uber to treat its employees or customers well or to conduct business cleanly. In other words we don’t trust Uber’s motives, means, or impact. This has consequences. Although Uber was projected to reach 44 million users in 2017, it hit only 41 million. Since then Uber’s growth has continued to be lower than expected, and the company has ceded market share to Lyft. This year Uber’s much-anticipated IPO underperformed after thousands of Uber drivers went on strike to protest their working conditions. The company’s stock price fell by 11% after its first earnings report for 2019 revealed that it had lost more than $1 billion in its first quarter.
Is the company motivated to serve others’ interests as well as its own?
Stakeholders need to believe a company is doing what’s good for them, not just what’s best for itself. However, stakeholders’ concerns and goals aren’t all the same. While many actions can serve multiple parties, companies must figure out how to prioritize stakeholder interests and avoid harming one group in an attempt to benefit another.
To determine whether they’re doing right by all of their stakeholders, companies should examine their own motivations — by asking these three questions:
Do we tell the truth?
On whose behalf are we acting?
Do our actions actually benefit those who trust us?
Honeywell is an example of a company that works hard to serve — and balance — the needs of all its stakeholders. Let’s look at what happened there during the Great Recession, when it needed to reduce costs but wanted to keep making good on stakeholder expectations. Dave Cote, Honeywell’s CEO at the time, explained how the company thought about that challenge: “We have these three constituencies we have to manage. If we don’t do a great job with customers, both employees and investors will get hurt. So we said our first priority is customers. We need to make sure we can still deliver, that it’s a quality product, and that if we’ve committed to a project, it will get done on time.”
For investors and employees, he continued, “we have to balance the pain, because if you’re in the middle of a recession, there’s going to be pain….Investors need to know they can count on the company, that we’re also going to be doing all the right things for the long term, but we’re thinking about them. After all, they’re the owners of the company, and we work for them.…But at the same time we need to recognize that the employees are the base for success in the future…and we need to be thoughtful about how we treat them. And I think if you get the balance right between those two, yeah, investors might not be as happy in the short term if you could have generated more earnings, but they’re definitely going to be happier in the longer term. Employees might not be as happy in the short term because they might have preferred that you just say to heck with all the investors. But in the long term they’re going to benefit also because you’re going to have a much more robust company for them to be part of.”
During the recession, Honeywell used furloughs, rather than layoffs, to lower payroll costs. But it limited the scale and duration of the furloughs by first implementing a hiring freeze, eliminating wage increases, reducing overtime, temporarily halting the employee rewards and recognition program, and cutting the company match for 401(k)s from 100% to 50%. The company distributed a reduced bonus pool as restricted stock so that employees could share in the stock’s post-recovery upside. And Cote and his entire leadership team refused to take a bonus in 2009, reinforcing the message of shared pain.
To protect customers’ interests during the downturn, Honeywell came up with the idea of placing advance orders with suppliers that the company would activate as soon as sales picked up. Suppliers were happy with the guaranteed production, and Honeywell stole a march on its competitors by filling customer orders faster than they could as the recovery began.
In the long run, those moves paid off for investors. During the recovery, from 2009 to 2012, they were rewarded with a 75% increase in Honeywell’s total stock value — which was 20 percentage points higher than the stock value increase of its nearest competitor.
Cote also built trust with the public by moving from a previous approach of litigating claims for asbestos and environmental damage to settling them. Honeywell began to issue payouts of $150 million for claims annually, making its liabilities more manageable and easing investors’ worries about future litigation costs. Cote also systematically went about cleaning up contaminated sites. That kind of attention to the interests of stakeholders gave people faith in the company’s good intentions.
Does the company use fair means to achieve its goals?
A company’s approach to dealing with customers, employees, investors, and society often comes under scrutiny. Companies that are trusted are given more leeway to create rules of engagement. Companies that aren’t face regulation. Just ask Facebook.
To build strong trust, firms need to understand — and measure up on — four types of fairness that organizational scholars have identified:
Procedural fairness: Whether good processes, based on accurate data, are used to make decisions and are applied consistently, and whether groups are given a voice in decisions affecting them.
Distributive fairness: How resources (such as pay and promotions) or pain points (such as layoffs) are allocated.
Interpersonal fairness: How well stakeholders are treated.
Informational fairness: Whether communication is honest and clear. (In a 2011 study Jason Colquitt and Jessica Rodell found that this was the most important aspect for developing trust.)
The French tire maker Michelin learned how important it is to have fair processes in 1999, when it decided to cut 7,500 jobs after posting a 17% increase in profits. The outrage in response to that move was so great that eventually the French prime minister outlawed subsidies for any business making layoffs without proof of financial duress.
So in 2003, when Michelin realized it would have to continue restructuring to remain competitive, the company decided it needed to find a better way. It spent the next decade developing new approaches to managing change in its manufacturing facilities. The first strategy, called “ramp down and up,” focused on shifting resources among plants — closing some while expanding others — as new products were brought on line and market needs evolved. Under this strategy, Michelin made every effort to keep affected employees in jobs at Michelin. The company would help them relocate to factories that were growing and provided support for the transition, such as assistance finding housing and information on the schools in their new towns. When relocation was not an option, Michelin would provide employees training in skills needed for jobs that were available locally and offer them professional counseling and support groups.
Success with the ramp-down-and-up approach led Michelin’s leaders to later devise a bolder “turnaround” strategy, under which the management and employees of factories at risk of being shut down could propose detailed business plans to return them to profitability. If accepted, the plans would trigger investment from Michelin.
In carrying out these new approaches, the company demonstrated procedural, informational, interpersonal, and distributive fairness. In total it conducted 15 restructuring programs from 2003 to 2013, which included closing some plants while growing others and changing the mix of production capabilities among plants. But those reorganization efforts didn’t get a lot of flak from the media, because the public didn’t sound the alarm. In 2015, Michelin’s first plant turnaround won the support of 95% of the factory’s unionized workers. Michelin had demonstrated that it would use its power to treat employees fairly.
Does the company take responsibility for all its impact?
If stakeholders don’t believe a company will produce positive effects, they’ll limit its power. Part of the reason we have trouble forgiving Facebook is that its impact has been so enormous. The company might never have imagined that a hostile government would use its platform to influence an election or that a political consulting firm would harvest its users’ data without their consent, but that’s exactly what happened. And ultimately, what happens on Facebook’s platform is seen as the responsibility of Facebook.
Wanting to generate beneficial effects isn’t enough. Companies should carefully define the kind of impact they desire and then devise ways to measure and foster it. They must also have a plan for handling any unintended impact when it happens.
Pinterest, the social media platform, offers a good counterpoint to Facebook. Pinterest has very clearly defined the impact it wants to have on the world. Its mission statement reads: “Our mission is to help you discover and do what you love. That means showing you ideas that are relevant, interesting, and personal to you, and making sure you don’t see anything that’s inappropriate or spammy.”
In extensive community guidelines, Pinterest details what it doesn’t allow. For example, the company explains that it will “remove hate speech and discrimination, or groups and people that advocate either.” Pinterest then elaborates: “Hate speech includes serious attacks on people based on their race, ethnicity, national origin, religion, gender identity, sexual orientation, disability or medical condition. Also, please don’t target people based on their age, weight, immigration or veteran status.”
The company trains reviewers to screen the content on its site to enforce its guidelines. Every six months it updates the training and guidelines, even though the process is time-consuming and expensive.
In fall of 2018, when people in the anti-vaccine movement chose to use the platform to spread their message, Pinterest took a simple yet effective step: It banned vaccination searches. Now if you search for vaccinations on the platform, nothing shows up. A few months later, Pinterest blocked accounts promoting fake cancer treatments and other nonscientifically vetted medical goods.
The company continues to work with outside experts to improve its ability to stop disinformation on its site. Pinterest understands that, given its estimated 250 million users, its platform could be both used and abused, and has taken action to ensure that it doesn’t become a vehicle for causing harm.
How to Build and Rebuild Trust
Trust is less fragile than we think. Companies can be trusted in some ways but not others and still succeed. And trust can also be rebuilt.
Take the Japanese conglomerate Recruit Holdings. Its core businesses are advertising and online search and staffing, but its life-event platforms have transformed the way people find jobs, get married, and buy cars and houses, while its lifestyle platforms help customers book hair and nail appointments, make restaurant reservations, and take vacations.
From the beginning, Recruit designed its offerings around the principles of creating value and contributing to society. At the time it was launched, in 1960, large Japanese companies typically found new hires by hosting exams for job candidates at the top universities. Smaller companies that couldn’t afford to host exams and students at other universities were shut out of the process. So Recruit’s founder, Hiromasa Ezoe, started a magazine in which employers of all sizes could post job advertisements that could reach students at any university. Soon Recruit added such businesses as a magazine for selling secondhand cars and the first job-recruitment magazine aimed specifically at women.
However, in the 1980s, disaster struck the company. Ezoe was caught offering shares in a subsidiary to business, government, and media leaders before it went public. In all, 159 people were implicated in the scandal, and Japan’s prime minister and his entire cabinet were forced to resign. A few years later one of Recruit’s subsidiaries failed, saddling the company with annual interest payments that exceeded its annual income by ¥3 billion. Not long after that, Recruit suffered another major blow, when a main source of revenue, print advertising, was devastated by the rise of the internet.
This sequence of events would have easily felled another company, yet in 2018 Recruit had 40,000 employees and sales of $20 billion and operated in more than 60 countries. Today it’s an internet giant, running 264 magazines (most online), some 200 websites, and 350 mobile apps. Despite its setbacks, Recruit continued to attract customers, nurture the best efforts of committed employees, and reward investors, and regained the respect of society.
To many executives, what Recruit pulled off sounds impossible. That may be because they subscribe to five popular myths that prevent people from understanding how to build and rebuild trust. Let’s explore each of those myths and see how Recruit’s experiences prove them wrong.
Myth: Trust has no boundaries. Reality: Trust is limited.
Trust has three main components: the trusted party, the trusting party, and the action the trusted party is expected to perform. It’s built by creating real but narrowly defined relationships.
Recruit was respected for its competence and, in particular, the way it trained its advertising salespeople to actively observe customers and come up with ways to make their businesses more successful. In the wake of the scandal, Recruit kept focusing on delivering the same high level of service. Because the stock violation didn’t alter the company’s ability to meet customers’ expectations of competence, customers were willing to overlook it, and Recruit lost very few of them.
Myth: Trust is objective. Reality: Trust is subjective.
Trust is based on the judgment of people and groups, not on some universal code of good conduct. If trust were a universal standard, Recruit’s scandal would have led to its demise. However, even though society was outraged by the founder’s actions (employees recalled that their children were embarrassed by their parents’ jobs), customers still believed that Recruit’s employees had their interests at heart. In time customers’ trust led to increased profits, which made Recruit attractive to investors and society.
Myth: Trust is managed from the outside in — by controlling a firm’s external image. Reality: Trust is managed from the inside out — by running a good business.
All too often managers believe that improving a company’s reputation is the work of advertising and PR firms or ever-vigilant online image-protection platforms. In actuality, reputation is an output that results when a company uses fair processes to deal with stakeholders. Be trustworthy and you will be trusted. Recruit had not only a track record for delivering good products and good service but a salesforce that was willing to work to save the company. Why? Because it had created a culture and systems that engaged and motivated employees. Employees wanted to save Recruit because they could not imagine a better place to work.
Recruit was built on the belief that employees do their best work when they discover a passion and learn to rely on themselves to pursue it. The company’s motto is “Create your own opportunities and let the opportunities change you.” Managers ask employees “Why are you here?” to help them invent projects that link their passions to a contribution to society. Here’s how one employee in Recruit’s Lifestyle Company recently described his project: “I’m involved with the development of a smartphone app…which helps men monitor their fertility and lower the obstacles they face in trying to conceive.…It is a real challenge to envision products that do not yet exist and make them real, but I am confident that in some small way my creative abilities can provide a service that will help people.”
To ensure that all employees feel inspired by their work, Recruit makes them a unique offer: Once they reach the age of 35, they have the option of taking a $75,000 retirement bonus, providing they’ve been at Recruit at least six and a half years. The amount of the bonus increases as employees grow older. This offer is accompanied by career coaching that helps people make the right choice. People who have other dreams use the bonus to transition to different careers, making way for new employees with fresh perspectives on the needs of customers and society.
Myth: Companies are judged for their purpose. Reality: Companies are judged for their purpose and their impact.
Recruit’s purpose had always been to add value to society. However, that did not protect the company from fallout from the scandal. Recruit was forced to take responsibility for the impact it had on the country before it could regain people’s trust. Because its senior managers understood this, they disregarded PR’s dictate not to discuss the scandal and told employees they could too. Kazuhiro Fujihara, who was the head of sales at the time, explains: “I gathered my employees and told them we could criticize the company for what it had done. PR said we couldn’t criticize the company, but I ignored that.” Today, Recruit has a section on its website describing the scandal, what it learned, and the actions it took to ensure that it would not let something similar happen again. Recruit was well aware that even though the scandal was caused by its founder, Ezoe’s actions were still its responsibility.
Myth: Trust is fragile. Once lost, it can never be regained. Reality: Trust waxes and wanes.
More than three decades later, Recruit’s stock scandal is still infamous, but the company is thriving. The fall from grace was, Recruit says on its website, “an opportunity to transform ourselves into a new Recruit by encouraging each employee to confront the situation, think, make suggestions, and take action with a sense of ownership rather than waiting passively based on the assumption that the management team would rectify the situation. All proposals were welcomed, including those concerning new business undertakings and business improvements, provided they were forward looking.” That approach helped Recruit evolve and grow. It has expanded so much internationally, in fact, that 46% of revenues now come from outside Japan (up from 3.6% in 2011).
. . .
Now that we’ve broken down what trust is made of, let’s put it all together.
Building trust depends not on good PR but rather on clear purpose, smart strategy, and definitive action. It takes courage and common sense. It requires recognizing all the people and groups your company affects and focusing on serving their interests, not just your firm’s. It means being competent, playing fair, and most of all, acknowledging and, if necessary, remediating all the impact your company has, whether intended or not.
It’s not always possible to make decisions that completely delight each of your stakeholder groups, but it is possible to make decisions that keep faith with and retain the trust they have in your company.
Sandra J. Sucher is a professor of management practice at Harvard Business School. She is co-author of The Power of Trust: How Companies Build It, Lose It, and Regain It (PublicAffairs, 2021).
Shalene Gupta is a research associate at Harvard Business School. She is co-author of The Power of Trust: How Companies Build It, Lose It, and Regain It (PublicAffairs, 2021).
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley.
Earlier this week, I was chatting with a policy professor in Washington, DC, who told me that students and colleagues alike are asking about GPT-4 and generative AI: What should they be reading? How much attention should they be paying?
She wanted to know if I had any suggestions, and asked what I thought all the new advances meant for lawmakers. I’ve spent a few days thinking, reading, and chatting with the experts about this, and my answer morphed into this newsletter. So here goes!
Though GPT-4 is the standard bearer, it’s just one of many high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. In short, generative AI is the thing that everyone is talking about. And though the tech is not new, its policy implications are months if not years from being understood.
GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict words in a sentence. It generates remarkably fluent text, and it can respond to images as well as word-based prompts. For paying customers, GPT-4 will now power ChatGPT, which has already been incorporated into commercial applications.
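For readers who have not used it directly, here is a minimal sketch of what calling the model looks like through the openai Python package's chat-completion interface as it existed at the time of writing; the API key and prompts are placeholders, and access to the gpt-4 model requires a paid account:

```python
import openai

# Placeholder key; a real OpenAI API key is required.
openai.api_key = "sk-..."

# A chat request is a list of role-tagged messages; the model returns
# the next assistant message, predicting it one token at a time.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise policy analyst."},
        {"role": "user", "content": "In two sentences, what is a large language model?"},
    ],
)

print(response.choices[0].message.content)
```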
The newest iteration has made a major splash, and Bill Gates called it “revolutionary” in a letter this week. However, OpenAI has also been criticized for a lack of transparency about how the model was trained and evaluated for bias.
Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific.
Generative AI tools are also potential threats to people’s security and privacy, and they have little regard for copyright laws. Companies using generative AI that has stolen the work of others are already being sued.
Alex Engler, a fellow in governance studies at the Brookings Institution, has considered how policymakers should be thinking about this and sees two main types of risks: harms from malicious use and harms from commercial use. Malicious uses of the technology, like disinformation, automated hate speech, and scamming, “have a lot in common with content moderation,” Engler said in an email to me, “and the best way to tackle these risks is likely platform governance.” (If you want to learn more about this, I’d recommend listening to this week’s Sunday Show from Tech Policy Press, where Justin Hendrix, an editor and a lecturer on tech, media, and democracy, talks with a panel of experts about whether generative AI systems should be regulated similarly to search and recommendation algorithms. Hint: Section 230.)
Policy discussions about generative AI have so far focused on that second category: risks from commercial use of the technology, like coding or advertising. So far, the US government has taken small but notable actions, primarily through the Federal Trade Commission (FTC). The FTC issued a warning statement to companies last month urging them not to make claims about technical capabilities that they can’t substantiate, such as overstating what AI can do. This week, on its business blog, it used even stronger language about risks companies should consider when using generative AI.
“If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable—and often obvious—ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all,” the blog post reads.
The US Copyright Office also launched a new initiative intended to deal with the thorny policy questions around AI, attribution, and intellectual property.
The EU, meanwhile, is sticking true to its reputation as the world leader in tech policy. At the start of this year my colleague Melissa Heikkilä wrote about the EU’s efforts to try to pass the AI Act. It’s a set of rules that would prevent companies from releasing models into the wild without disclosing their inner workings, which is precisely what some critics are accusing OpenAI of with the GPT-4 release.
The EU intends to separate high-risk uses of AI, like hiring, legal, or financial applications, from lower-risk uses like video games and spam filters, and require more transparency around the more sensitive uses. OpenAI has acknowledged some of the concerns about the speed of adoption. In fact, its own CEO, Sam Altman, told ABC News he shares many of the same fears. However, the company is still not disclosing key data about GPT-4.
For policy folks in Washington, Brussels, London, and offices everywhere else in the world, it’s important to understand that generative AI is here to stay. Yes, there’s significant hype, but the recent advances in AI are as real and important as the risks that they pose.
What I am reading this week
Yesterday, the United States Congress called Shou Zi Chew, the CEO of TikTok, to a hearing about privacy and security concerns raised by the popular social media app. His appearance came after the Biden administration threatened a national ban if its parent company, ByteDance, didn’t sell off the majority of its shares.
There were lots of headlines, most using a temporal pun, and the hearing laid bare the depths of the new technological cold war between the US and China. For many watching, the hearing was both important and disappointing, with some legislators displaying poor technical understanding and hypocrisy about how Chinese companies handle privacy when American companies collect and trade data in much the same ways.
It also revealed how deeply American lawmakers distrust Chinese tech.
AI is able to persuade people to change their minds about hot-button political issues like an assault weapon ban and paid parental leave, according to a study by a team at Stanford’s Polarization and Social Change Lab. The researchers compared people’s political opinions on a topic before and after reading an AI-generated argument, and found that these arguments can be as effective as human-written ones in persuading the readers: “AI ranked consistently as more factual and logical, less angry, and less reliant upon storytelling as a persuasive technique.”
The team points to concerns about the use of generative AI in a political context, such as in lobbying or online discourse. (For more on the use of generative AI in politics, do please read this recent piece by Nathan Sanders and Bruce Schneier.)
“Amongst the novel objects that attracted my attention during my stay in the United States, nothing struck me more forcibly than the general equality of conditions. I readily discovered the prodigious influence which this primary fact exercises on the whole course of society, by giving a certain direction to public opinion, and a certain tenor to the laws; while imparting new maxims to the governing powers, and peculiar habits to the governed. I speedily perceived that the influence of this fact extends far beyond the political character and laws of the country, and that it has no less empire over civil society than over the Government; it creates opinions, engenders sentiments, suggests the ordinary practices of life, and modifies whatever it does not produce. The more I advanced in the study of American society, the more I perceived that the equality of conditions is the fundamental fact from which all others seem to be derived, and the central point at which all my observations constantly terminated.”
WASHINGTON — Imagine a militarized version of ChatGPT, trained on secret intelligence. Instead of painstakingly piecing together scattered database entries, intercepted transmissions and news reports, an analyst types in a quick query in plain English and gets back, in seconds, a concise summary — a prediction of hostile action, for example, or a profile of a terrorist.
But is that output true? With today’s technology, you can’t count on it, at all.
That’s the potential and the peril of “generative” AI, which can create entirely new text, code or images rather than just categorizing and highlighting existing ones. Agencies like the CIA and State Department have already expressed interest. But for now, at least, generative AI has a fatal flaw: It makes stuff up.
“I’m excited about the potential,” said Lt. Gen. (ret.) Jack Shanahan, founding director of the Pentagon’s Joint Artificial Intelligence Center (JAIC) from 2018 to 2020. “I play around with Bing AI, I use ChatGPT pretty regularly — [but] there is no intelligence analyst right now that would use these systems in any way other than with a hefty grain of salt.”
“This idea of hallucinations is a major problem,” he told Breaking Defense, using the term of art for AI answers with no foundation in reality. “It is a showstopper for intelligence.”
Shanahan’s successor at JAIC, recently retired Marine Corps Lt. Gen. Michael Groen, agreed. “We can experiment with it, [but] practically it’s still years away,” he said.
Instead, Shanahan and Groen told Breaking Defense that, at this point, the Pentagon should swiftly start experimenting with generative AI — with abundant caution and careful training for would-be users — with an eye to seriously using the systems when and if the hallucination problem can be fixed.
To see the current issues with popular generative AI models, just ask the AI about Shanahan and Groen themselves, public figures mentioned in Wikipedia and many news reports.
Then-Lt. Gen. John “Jack” Shanahan
On its first attempt, ChatGPT 3.5 correctly identified Shanahan as a retired Air Force officer, AI expert, and former director of both the groundbreaking Project Maven and the JAIC. But it also said he was a graduate of the Air Force Academy, a fighter pilot, and an advisor to AI startup DeepMind — none of which is true. And almost every date in the AI-generated bio was off, some by nearly a decade.
“Wow,” Shanahan said of the bot’s output. “Many things are not only wrong, but seemingly out of left field… I’ve never had any association with DeepMind.”
What about the upgraded ChatGPT 4.0? “Almost as bad!” he said. This time, in addition to a different set of wrong jobs and fake dates, he was given two children that did not exist. Weirder yet, when you enter the same question into either version a second time, it generates a new and different set of errors.
Nor is the problem unique to ChatGPT. Google’s Bard AI, built on a different AI engine, did somewhat better, but it still said Shanahan was a pilot and made up assignments he never had.
“The irony is that it could pull my official AF bio and get everything right the first time,” Shanahan said.
And Groen? “The answers about my views on AI are pretty good,” the retired Marine told Breaking Defense after reviewing ChatGPT 3.5’s first effort, which emphasized his enthusiasm for military AI tempered by stringent concern for ethics. “I suspect that is because there are not too many people with my name that have publicly articulated their thoughts on this topic.”
On the other hand, Groen went on, “many of the facts of my biography are incorrect, [e.g.] it got place of birth, college attended, year entered service wrong. It also struggled with units I have commanded or was a part of.”
How could generative AI get the big picture right but mess up so many details? It’s not that the data isn’t out there: As with Shanahan, Groen’s official military bio is online.
But so is a lot of other information, he pointed out. “I suspect that there are so many ‘Michaels’ and ‘Marine Michaels’ on the global internet that the ‘pattern’ that emerged contains elements that are credible, but mostly incorrect,” Groen said.
Then-Lt. Gen. Michael Groen, director of the Joint AI Center, briefs reporters in 2020. (screenshot of DoD video)
This tendency to conflate general patterns and specific facts might explain the AIs’ insistence that Shanahan is an Air Force Academy grad and fighter pilot. Neither of those things is true of him as an individual, but they are commonly mentioned attributes of Air Force officers as a group. It’s not true, but it’s plausible — and this kind of AI doesn’t remember a database of specific facts, only a set of statistical correlations between different words.
“The system does not know facts,” Shanahan said. “It really is a sophisticated word predictor.”
The companies behind the tech all acknowledge that their AI apps may produce incorrect information and urge users to be cautious. The missteps of AI chatbots are not dissimilar to those of generative AI artists, like Stable Diffusion, which produced the image of a mutant Abrams below.
Output from a generative AI, Stable Diffusion, when prompted by Breaking Defense with “M1 Abrams tank.”
What Goes Wrong
In the cutting-edge Large Language Models that drive ChatGPT, Bard, and their ilk, “each time it generates a new word, it is assigning a sort of likelihood score to every single word that it knows,” explained Micah Musser, a research analyst at Georgetown University’s Center for Security and Emerging Technology. “Then, from that probability distribution, it will select — somewhat at random — one of the more likely words.”
That’s why asking the same question more than once gets subtly or even starkly different answers every time. And while training the AI on larger datasets can help, Musser told Breaking Defense, “even if it does have sufficient data [and] sufficient context, if you ask a hyper-specific question and it hasn’t memorized the [specific] example, it may just make something up.” Hence the plausible but invented dates of birth, graduation, and so on for Shanahan and Groen.
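To make Musser's description concrete, here is a toy, self-contained sketch of that sampling step; the five-word vocabulary and the likelihood scores are invented for illustration, and a real model scores tens of thousands of tokens at every step:

```python
import numpy as np

# Toy version of one generation step: score five invented words
# for the blank in "Shanahan is a ___".
vocab = ["pilot", "officer", "graduate", "analyst", "director"]
logits = np.array([2.8, 3.1, 1.2, 0.4, 2.6])  # made-up likelihood scores

def sample_next_word(logits, temperature=0.8, rng=None):
    """Convert scores to a probability distribution and sample from it."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits / temperature)
    probs = probs / probs.sum()  # softmax over the vocabulary
    return int(rng.choice(len(probs), p=probs))

# Running the "same query" three times can pick three different words,
# which is why repeated prompts return different, differently wrong answers.
for _ in range(3):
    print(vocab[sample_next_word(logits)])
```

Because the draw is random, a plausible but wrong word like "pilot" can win whenever its score is close to the correct one, and two runs of the same prompt can diverge from the very first word.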
Now, it’s understandable when human brains misremember facts and even invent details that don’t exist. But surely a machine can record the data perfectly?
Not these machines, said Dimitry Fisher, head of data science and applied research at AI company Aicadium. “They don’t have memory in the same sense that we do,” he told Breaking Defense. “They cannot quote sources… They cannot show what their output has been inspired by.”
Ironically, Fisher told Breaking Defense, earlier attempts to teach AIs natural language did have distinct mechanisms for memorizing specific facts and inferring general patterns, much like the human brains that inspired them. But such software ran too slowly on any practical hardware to be of much use, he said. So instead the industry shifted to a type of AI called a transformer — that’s the “T” in “ChatGPT” — which only encodes the probable correlations between words or other data points.
“It just predicts the most likely next word, one word at a time,” Fisher said. “You can’t have language generation on the industrial scale without having to take this architectural shortcut, but this is where it comes back and bites you.”
These issues should be fixable, Fisher said. “There are many good ideas of how to try and solve them — but that’ll take probably a few years.”
Shanahan, likewise, was guardedly optimistic, if only because of the financial incentives to get it right.
“These flaws are so serious that the big companies are going to spend a lot of money and a lot of time trying to fix things,” he said. “How well those fixes will work remains the unanswered question.”
The Combined Air Operations Center (CAOC) at Al Udeid Air Base, Qatar, (US Air Force photo by Staff Sgt. Alexander W. Riedel)
If It Works…
If generative AI can be made reliable — and that’s a significant if — the applications for the Pentagon, as for the private sector, are extensive, Groen and Shanahan agreed.
“Probably the places that make the most sense in the near term… are those back-office business from personnel management to budgeting to logistics,” Shanahan said. But in the longer term, “there is an imperative to use them to help deal with … the entire intelligence cycle.”
So while the hallucinations have to be fixed, Shanahan said, “what I’m more worried about, in the immediate term, is just the fact that I don’t see a whole lot of action in the government about using these systems.” (He did note that the Pentagon Chief Digital & AI Office, which absorbed the JAIC, has announced an upcoming conference on generative AI: “That’s good.”) Instead of waiting for others to perfect the algorithms, he said, “I’d rather get them in the hands of users, put some boundaries in place about how they can be used… and then focus really heavy on the education, the training, and the feedback” from users on how they can be improved.
Groen was likewise skeptical about the near term. “We don’t want to be in the cutting edge here in generative AI,” he said, saying near-term implementation should focus on tech “that we know and that we trust and that we understand.” But he was even more enthused about the long term than Shanahan.
“What’s different with ChatGPT, suddenly you have this interface, [where] using the English language, you can ask it questions,” Groen said. “It democratizes AI [for] large communities of people.”
That’s transformative for the vast majority of users who lack the technical training to translate their queries into specialized search terms. What’s more, because the AI can suck up written information about any subject, it can make connections across different disciplines in a way a more specialized AI cannot.
“What makes generative AI special is it can understand multiple narrow spaces and start to make integrative conclusions across those,” Groen said. “It’s… able to bridge.”
But first, Groen said, you want the strong foundations to build the bridge across, like pilings in the river. So while experimenting with ChatGPT and co., Groen said, he would put the near-term emphasis on training and perfecting traditional, specialist AIs, then layer generative AI over top of them when it’s ready.
“There are so many places today in the department where it’s screaming for narrow AI solutions to logistics inventories or distribution optimizers, or threat identification … like office automation, not like killer robots,” Groen said. “Getting all these narrow AIs in place and building these data environments actually really prepares us for — at some point — integrating generative AI.”
Quantum sensors can modernize the U.S. electrical grid through on-site technology with the long-term aim of supporting climate resilience.
Quantum sensing research and development is one of the Department of Energy’s priorities, according to an agency official, as the devices show promise for electrical grid efficiency and sustainability efforts.
Rima Oueid, a senior commercialization executive in Energy’s Office of Technology Transitions, discussed with Nextgov the agency’s larger goals for implementing quantum information science and technology, or QIST, in existing infrastructure, emphasizing the myriad benefits of quantum sensor applications.
“We are looking at quantum sensors that can be utilized for monitoring the grid, for anomaly detection and making the grid more resilient to climate change,” Oueid said.
She specified that quantum sensors, a quantum information technology currently used in magnetic resonance imaging (MRI) machines and atomic clocks, can report more precise data upon which critical infrastructure relies.
Critical infrastructure, including the U.S. electrical grid, relies on global positioning technologies for the positioning, navigation and timing (PNT) information it needs to operate. Oueid said that quantum sensors have the power to report PNT data directly from the electrical grid rather than from satellite-based GPS sources.
“What we’re realizing now is that there are different types of quantum sensors that we can also now use for timing that could be deployed directly on the grid…as opposed to depending on GPS,” she said. “We’re hoping that we get to a place where we don’t need the satellite communication, that we would have these quantum sensors distributed.”
If quantum sensors supplant GPS as the source of PNT information, they could enable grid operations even in GPS-denied areas. Oueid said this will be critical as the electrical grid takes on more energy generation and storage systems, and that infrastructure security stands to benefit as well, since satellite interference would no longer be a concern.
“If we have quantum sensors instead distributed…where they need to be, then it’s harder to disrupt the system,” she said.
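A back-of-envelope calculation shows why on-grid clocks matter. Synchrophasor-style grid measurements turn any clock offset into a phase-angle error (2πfΔt radians at grid frequency f), and a clock running without a GPS reference accumulates offset in proportion to its frequency stability. The Python sketch below is illustrative only; the stability figures are generic order-of-magnitude assumptions, not DOE numbers.

# Back-of-envelope: phase error induced by clock drift during GPS holdover.
# Stability values below are illustrative assumptions, not DOE figures.
import math

GRID_FREQ_HZ = 60.0  # U.S. grid frequency

def phase_error_deg(clock_offset_s: float) -> float:
    # Phase-angle error a timing offset induces at grid frequency.
    return math.degrees(2 * math.pi * GRID_FREQ_HZ * clock_offset_s)

def holdover_offset_s(fractional_freq_error: float, holdover_s: float) -> float:
    # Time offset accumulated while free-running without a GPS reference.
    return fractional_freq_error * holdover_s

for name, ffe in [("quartz oscillator (~1e-8)", 1e-8),
                  ("atomic/quantum clock (~1e-12)", 1e-12)]:
    offset = holdover_offset_s(ffe, holdover_s=3600.0)  # one hour without GPS
    print(f"{name}: {offset * 1e6:.4f} us offset, "
          f"{phase_error_deg(offset):.5f} deg phase error")

After an hour without GPS, the quartz case in this toy model has drifted tens of microseconds, close to a degree of phase error, while the atomic-clock case stays in the nanoseconds; that gap is the argument for distributing high-stability clocks on the grid itself.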
Eventually, the goal is to fully incorporate distributed energy resources, namely wind, solar and electric vehicles, into the country’s central electrical grid to act as energy assets. Oueid conceded that certain market forces will need to align with Energy’s efforts to spur widespread electric vehicle adoption and integration, but said quantum sensors could use PNT data to help signal to vehicles when renewables are readily available on the grid to charge their batteries.
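To make that vehicle-signaling idea concrete, here is a deliberately simple sketch of the decision a grid-aware charger might make. The function, threshold, and inputs are hypothetical illustrations, not any real DOE or vehicle interface.

# Hypothetical sketch: charge the battery only when the local renewable
# share is high. All names and thresholds are illustrative.

def should_charge(renewable_fraction: float, battery_soc: float,
                  threshold: float = 0.6, target_soc: float = 0.9) -> bool:
    # Charge when renewables dominate supply and the battery is not full.
    return renewable_fraction >= threshold and battery_soc < target_soc

print(should_charge(renewable_fraction=0.70, battery_soc=0.40))  # True
print(should_charge(renewable_fraction=0.30, battery_soc=0.40))  # False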
Beyond optimizing the nation’s electrical grid for better use of renewable energy, quantum sensors are also being studied for their potential to track climate change with more precise algorithms, as well as to conduct subsurface exploration for potential underground carbon repositories, in an effort to reduce fracking activity.
“The possibilities are amazing,” Oueid said. “There’s a lot of different use cases that could help us make a system smarter and more efficient to help reduce climate change concerns.”
Coalition for Health AI Details Framework Focused on Care Impact, Ethics, and Equity of Health AI Tools
Bedford, Mass., April 4, 2023—The Coalition for Health AI (CHAI) released its highly anticipated “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” (Blueprint). The Blueprint addresses the quickly evolving landscape of health AI tools by outlining specific recommendations to increase trustworthiness within the healthcare community, ensure high-quality care, and meet healthcare needs. The 24-page guide reflects a unified effort among subject matter experts from leading academic medical centers and the healthcare, technology, and other industry sectors, who collaborated under the observation of several federal agencies over the past year.
“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” said Dr. Brian Anderson, a co-founder of the coalition and chief digital health physician at MITRE. “The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”
“The successful implementation and impact of AI technology in healthcare hinges on our commitment to responsible development and deployment,” said Eric Horvitz, chief scientific officer at Microsoft and CHAI co-founder. “I am truly inspired by the incredible dedication, intelligence, and teamwork that led to the creation of the Blueprint.”
CHAI, NATIONAL ACADEMY OF MEDICINE COLLABORATE ON CODE OF CONDUCT
The National Academy of Medicine’s (NAM’s) AI Code of Conduct effort is designed to align health, healthcare, and biomedical science around a broadly adopted code of conduct for AI, one that ensures responsible AI and equitable benefit for all. The NAM effort will inform CHAI’s future work, which will provide robust best-practice technical guidance, including assurance labs and implementation guides that enable clinical systems to apply the Code of Conduct.
CHAI’s technical focus will help to inform and clarify areas that will need to be addressed in NAM’s Code of Conduct. The work and final deliverables of these projects are mutually reinforcing and coordinated to establish a code of conduct and technical framework for health AI assurance.
“We have a rare window of opportunity in this early phase of AI development and deployment to act in harmony—honoring, reinforcing, and aligning our efforts nationwide to assure responsible AI. The challenge is so formidable and the potential so unprecedented. Nothing less will do,” said Laura L. Adams, senior advisor, National Academy of Medicine.
BLUEPRINT FOR AI BILL OF RIGHTS
The Blueprint builds upon the White House OSTP “Blueprint for an AI Bill of Rights” and the “AI Risk Management Framework (AI RMF 1.0)” from the U.S. Department of Commerce’s National Institute of Standards and Technology. OSTP acts as a federal observer to CHAI, as do the Agency for Healthcare Research and Quality, Centers for Medicare & Medicaid Services, U.S. Food and Drug Administration, Office of the National Coordinator for Health Information Technology, and National Institutes of Health.
“The needs of all patients must be foremost in this effort. In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The Blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry,” said Dr. John Halamka, president, Mayo Clinic Platform, and a co-founder of the coalition.
“The CHAI Blueprint is the result of the kind of collaborative approach that’s essential for achieving diverse perspectives on issues affecting AI in medicine,” said Michael Pencina, Ph.D., a co-founder of the coalition and director of Duke AI Health. “And given our rapidly evolving understanding of the significant impacts of AI on health, health delivery, and equity, the fact that the Blueprint is designed to be a flexible ‘living document’ will enable us to maintain a continuous focus on these critically important dimensions of algorithmic healthcare.”
The Coalition for Health AI is a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. Coalition members have come together to harmonize standards and reporting for health AI, and to educate end-users on how to evaluate these technologies to drive their adoption. Its mission is to provide guidelines regarding an ever-evolving landscape of health AI tools to ensure high quality care, increase trustworthiness amongst users, and meet healthcare needs. Learn more at coalitionforhealthai.org.
Russian intelligence services, together with a Moscow-based IT company, are planning worldwide hacking operations that will also enable attacks on critical infrastructure facilities.
The release of thousands of pages of confidential documents has exposed Russian military and intelligence agencies’ grand plans for using their cyberwar capabilities in disinformation campaigns, hacking operations, critical infrastructure disruption, and control of the Internet.
The papers were leaked from the Russian contractor NTC Vulkan and show how Russian intelligence agencies use private companies to plan and execute global hacking operations. They include project plans, software descriptions, instructions, internal emails, and transfer documents from the company.
The takeover of railroad networks and power plants is also covered in a training seminar Vulkan held for hackers.
The leak also exposes the company’s close links to the FSB, Russia’s domestic spy agency, the GOU and GRU, the respective operational and intelligence divisions of the armed forces, and the SVR, Russia’s foreign intelligence organization.
The documents, which were leaked by an unnamed source to a German reporter working for the Süddeutsche Zeitung at the start of Russia’s invasion of Ukraine, have since been analyzed by global media outlets including The Washington Post and German media outlets Paper Trail Media and Der Spiegel.
According to the Spiegel report (in German), Vulkan has developed tools that allow state hackers to efficiently prepare cyberattacks, filter Internet traffic, and spread propaganda and disinformation on a massive scale.
The Spiegel report notes that analysts from Google reportedly discovered a connection between Vulkan and the hacker group Cozy Bear years ago; the group has successfully penetrated systems of the US Department of Defense in the past.
Amezit, Skan-V Programs Revealed
One offensive cyber program described in the documents is internally codenamed “Amezit.”
The wide-ranging platform is designed to enable attacks on critical infrastructure facilities in addition to total information control over specific areas.
The program’s goals include using special software to derail trains or paralyze airport computers, but it was not clear from the materials whether the program is currently being used against Ukraine.
Another project, called “Skan-V,” is supposed to automate cyberattacks and make them much easier to plan.
Whether and where the programs were used cannot be traced, but the documents prove that the programs were ordered, tested, and paid for.
“People should know the dangers this poses,” said the anonymous source who leaked the documents to the media, adding that the Russian invasion of Ukraine had motivated the decision to make them public.
As the Sandworm Turns
A trail also leads to the state hacker group Sandworm, one of the most dangerous advanced persistent threats (APTs) in the world and the actor behind some of the most serious cyberattacks of recent years. For instance, the group has targeted the Ukrainian capital as far back as December 2016, when it used the malware tool Industroyer to cause a temporary power outage in Kyiv.
Until now, it was not known that the group used tools from private companies.
Sandworm has previously been linked to the GRU.
Since the start of the war, at least five Russian state-sponsored or cybercriminal groups, including Gamaredon, Sandworm, and Fancy Bear, have targeted Ukrainian government agencies and private companies in dozens of operations aimed at disrupting services or stealing sensitive information.
The recommendations focus on executable actions for the Office of Management and Budget, agencies, Congress and industry
MITRE made 10 recommendations this week to help the federal government modernize legacy systems, warning that “significant numbers of critical federal information technology systems that provide vital support to agencies’ missions are operating with known security vulnerabilities and unsupported hardware and software.”
According to the March 29 recommendations, the government must move away from legacy systems to fully leverage technology and fulfill critical missions. The recommendations focus on execution and were directed at the Office of Management and Budget, agencies, Congress and industry.
For example, MITRE suggests OMB provide guidance on legacy systems inventories and IT modernization plans in addition to progress transparency mechanisms. Congress should introduce legislation to reduce legacy IT and should make adjustments to the Federal Information Technology Acquisition Reform Act—or FITARA—scorecard by adding an IT Modernization Planning and Delivering category.
Meanwhile, agencies should develop inventories, modernization plans and budgets to support this, in addition to progress reports that detail acquisition and legacy system retirement. Lastly, industry should partner with the government to further facilitate these processes.
“If you look at our recommendations, there’s a premise here that it starts with putting in place some really good policies both in the executive branch and on the legislative side of the house,” said Dave Powner, executive director of MITRE’s Center for Data-Driven Policy and one of the authors of the recommendations. “So you can think of it as starting with OMB really requiring comprehensive modernization plans that focus on decommissioning some of these old systems and [then] that is backed with sound legislation. I think if you start with those policies in place from both sides—executive and legislative branches—that would be a good start to ensure that we’re all on the same page and marching to the same beat here.”
However, co-author Dr. Nitin Naik, a technical fellow at MITRE, noted that while policies and funding are necessary, there are other critical components to this process.
“You want to make sure that you have good implementation plans, and you need to have industry partnership because we don’t want the industry to continue to profess that the old technology can continue to meet the needs,” Naik said. “We want them to be an active participant to say, ‘Okay, let’s try to see how we can take this and bring it to the new industry standard.’”
Noting there have been several other efforts to modernize legacy systems—from previous bills to cyber budgets and the National Cybersecurity Strategy—Powner explained, “our recommendations really get at the execution of those plans.”
“And how do we execute those plans—having transparency mechanisms that you report progress, having industry as a key partner, thinking differently about the digital services team. And could agencies operate within their organizations, but also go to OMB or [a] central organization out of the White House for help?” Powner said. “And this is a collective game here—it’s the policymakers, the agencies and industry, all kind of collectively working together.”
For example, one recommendation suggests that agencies partner with industry, labs and federally funded research and development centers to help with innovation and take advantage of new technologies, according to Naik.
Another recommendation for agencies focuses on using technology like artificial intelligence and automation to improve the modernization process.
And while many agencies have modernization plans in place, there can be several challenges.
“The question is, are [agencies’] IT modernization plans really getting at these mission critical legacy applications?” Powner said.
“All the systems we are talking about are delivering services 24/7, 365, so, you cannot have a stoppage of work in any way that will bring tremendous problems,” Naik said. “The question is—what is a good strategy to sort of start reengineering this, testing it out on the side, and then slowly one-by-one, migrating things over while it is an interconnected system.”
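The approach Naik outlines, reengineering a replacement on the side and cutting functions over one at a time while the legacy system keeps running, is commonly implemented with a routing facade, sometimes called the strangler-fig pattern. The Python sketch below is a generic illustration of that pattern rather than MITRE’s prescription, and every name in it is hypothetical.

# Strangler-fig sketch: a facade routes each call to the legacy system or
# its reengineered replacement, so functions migrate one at a time with
# no stoppage of service. All names are hypothetical.
from typing import Callable

def legacy_lookup(record_id: str) -> str:
    return f"legacy result for {record_id}"

def modern_lookup(record_id: str) -> str:
    return f"modern result for {record_id}"

class MigrationFacade:
    def __init__(self) -> None:
        # Every function starts out routed to the legacy implementation.
        self._routes: dict[str, Callable[[str], str]] = {"lookup": legacy_lookup}

    def migrate(self, name: str, new_impl: Callable[[str], str]) -> None:
        # Cut a single function over once its replacement has been tested.
        self._routes[name] = new_impl

    def call(self, name: str, arg: str) -> str:
        return self._routes[name](arg)

facade = MigrationFacade()
print(facade.call("lookup", "42"))       # served by the legacy system
facade.migrate("lookup", modern_lookup)  # tested on the side, then cut over
print(facade.call("lookup", "42"))       # served by the new system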
The Food and Drug Administration announced March 29 that, for cybersecurity reasons, it will begin to “refuse to accept” medical devices and related systems on Oct. 1. All new device submissions must include detailed cybersecurity plans as of March 29.
As such, device manufacturers will need to submit plans to monitor, identify and address in a “reasonable timeframe” any determined post-market cybersecurity vulnerabilities and exploits, including coordinated vulnerability disclosures and plans.
Developers must now design and maintain procedures able to show, with reasonable assurance, “that the device and related systems are cybersecure” and create post-market updates and patches to the device and connected systems that address “on a reasonably justified regular cycle, known unacceptable vulnerabilities,” according to the guidance.
When vulnerabilities are discovered out of cycle, the manufacturer must also make public, as soon as possible, any “critical vulnerabilities that could cause uncontrolled risks.”
Submissions will also need to include a software bill of materials, which must contain all commercial, open-source, and off-the-shelf software components, while complying with other FDA requirements “to demonstrate reasonable assurance that the device and related systems are cybersecure.”
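For illustration, here is a minimal sketch of what such a software bill of materials might look like, built in Python and emitted as CycloneDX-style JSON. CycloneDX is one widely used SBOM format, chosen here only as an example, and the components listed are hypothetical.

# Minimal, hypothetical SBOM emitted in a CycloneDX-style JSON layout.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {  # open-source component
            "type": "library",
            "name": "openssl",
            "version": "3.0.8",
        },
        {  # hypothetical commercial off-the-shelf component
            "type": "library",
            "name": "acme-rtos-kernel",
            "version": "2.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))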
These plans should come as no surprise to device manufacturers, as they were included in the new authorities granted by the Consolidated Appropriations Act of 2023, which was signed into law on Dec. 29.
The law created “long desired FDA authorities” that were left out of previous resolutions, including the requirements for premarket submissions proposed by the Protecting and Transforming Cyber Health Care (PATCH) Act.
The December inclusion yielded overwhelming support from healthcare stakeholders, who’ve long requested federal support to curtail systemic challenges with securing medical devices. Healthcare delivery organizations have long borne the onus of securing the vast, complex device ecosystem, and even the most equipped health systems do not fully meet the task.
The December Omnibus included statements that required the FDA to take the actions announced March 29 within 90 days of the law’s passage. The final guidance titled “Cybersecurity in Medical Devices: Refuse to Accept Policy for Cyber Devices and Related Systems,” includes all requirements for new submissions.
The new cybersecurity requirements don’t apply to applications or submissions submitted to the FDA before March 29. And the “refuse to accept” decisions for premarket submissions based solely on cyber reasons will not go into effect until Oct. 1.
Rather, the FDA says it intends to “work collaboratively with sponsors of such premarket submissions as part of the interactive and/or deficiency review process.” The agency expects that cyber device sponsors “will have had sufficient time to prepare premarket submissions” to include the cyber requirements contained in the finalized guidance.
“And FDA may refuse to accept premarket submissions that do not,” according to its notice. A medical device is considered a “cyber device” if it includes “software validated, installed, or authorized by the sponsor,” can connect to the internet, and contains any such technological characteristics that could be vulnerable to cybersecurity threats.
The guidance did not go through the typical public comment period, with the agency determining that “prior public participation is not feasible or appropriate.” Officials added that “although this policy is being implemented immediately without prior comment, FDA will consider all comments received and revise the guidance document as appropriate.”