Decisions about powerful automation tools should not be left to a handful of entrepreneurs and engineers, MIT researchers argue. Here’s how to reclaim control.
In 18th century Britain, technical improvements in textile production generated great wealth for factory owners but created horrible working and living conditions for textile workers, who did not see their incomes rise for almost 100 years.
Today, artificial intelligence and other digital technologies mesmerize the business elite while threatening to undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance.
In their new book, “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” MIT economists Daron Acemoglu and Simon Johnson decry the economic and social damage caused by the concentrated power of business and show how the tremendous computing advances of the past half century can become empowering and democratizing tools.
In this excerpt, the authors call for the development of a powerful new narrative about shared prosperity and offer four ways to rechart the course of technology so it complements human capabilities.
Our current problems are rooted in the enormous economic, political, and social power of corporations, especially in the tech industry. The concentrated power of business undercuts shared prosperity because it limits the sharing of gains from technological change. But its most pernicious impact is via the direction of technology, which is moving excessively toward automation, surveillance, data collection, and advertising.
To regain shared prosperity, we must redirect technology, and this means activating a version of the same approach that worked more than a century ago for the Progressives.
This can start only by altering the narrative and the norms.
The necessary steps are truly fundamental. Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda. Debates on new technology ought to center not just on the brilliance of new products and algorithms but also on whether they are working for the people or against the people. Whether digital technologies should be used for automating work and empowering large companies and nondemocratic governments must not be the sole decision of a handful of entrepreneurs and engineers. One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies. One does not need to be a tech investor or venture capitalist to hold tech entrepreneurs and engineers accountable for what their inventions do.
Choices over the direction of technology should be part of the criteria that investors use for evaluating companies and their effects. Large investors can demand transparency on whether new technologies will automate work or create new tasks, whether they will monitor or empower workers, and how they will affect political discourse and other social outcomes.
These are not decisions investors should care about only because of the profits they generate. A two-tiered society with a small elite and a dwindling middle class is not a foundation for prosperity or democracy.
Nevertheless, it is possible to make digital technologies useful to humans and boost productivity so that investing in technologies that help humans can also be good business.
As with the Progressive Era reforms and redirection in the energy sector, a new narrative is critical for building countervailing powers in the digital age. Such a narrative and public pressure can trigger more responsible behavior among some decision makers.
For example, managers with business-school educations tend to reduce wages and cut labor costs, presumably because of the lingering influence of the Friedman doctrine — the idea that the only purpose and responsibility of business is to make profits.
A powerful new narrative about shared prosperity can be a counterweight, influencing the priorities of some managers and even swaying the prevailing paradigm in business schools. Equally, it can help reshape the thinking of tens of thousands of bright young people wishing to work in the tech sector — even if it is unlikely to have much impact on tech tycoons.
More fundamentally, these efforts must formulate and support specific policies to rechart the course of technology. Digital technologies can complement humans by:
Improving the productivity of workers in their current jobs.
Creating new tasks with the help of machine intelligence augmenting human capabilities.
Providing better, more usable information for human decision-making.
Building new platforms that bring together people with different skills and needs.
For example, digital and AI technologies can increase the effectiveness of classroom instruction by providing new tools and better information to teachers. They can enable personalized instruction by identifying in real time areas of difficulty or strength for each student, thus generating a plethora of new, productive tasks for teachers. They can also build platforms that bring teachers and teaching resources together more effectively. Similar avenues are open in health care, entertainment, and production work.
An approach that complements workers, rather than sidelining and attempting to eliminate them, is more likely when diverse human skills, based on the situational and social aspects of human cognition, are recognized. Yet such diverse objectives for technological change necessitate a plurality of innovation strategies, and they become less likely to be realized when a few tech firms dominate the future of technology.
Diverse innovation strategies are also important because automation is not harmful in and of itself. Technologies that replace tasks performed by people with machines and algorithms are as old as industry itself, and they will continue to be part of our future. Similarly, data collection is not bad per se, but it becomes inconsistent both with shared prosperity and democratic governance when it is centralized in the hands of unaccountable companies and governments that use these data to disempower people.
The problem is an unbalanced portfolio of innovations that excessively prioritize automation and surveillance, failing to create new tasks and opportunities for workers. Redirecting technology need not involve the blocking of automation or banning data collection; it can instead encourage the development of technologies that complement and help human capabilities.
Society and government must work together to achieve this objective. Pressure from civil society, as in the case of successful major reforms of the past, is key. Government regulation and incentives are critical too, as they were in the case of energy.
However, the government cannot be the nerve center of innovation, and bureaucrats are not going to design algorithms or come up with new products. What is needed is the right institutional framework and incentives shaped by government policies, bolstered by a constructive narrative, to induce the private sector to move away from excessive automation and surveillance and toward more worker-friendly technologies.
The U.S. Army is overhauling how it develops and adopts software, the lifeblood of high-tech weaponry, vehicles and battlefield information-sharing.
The service on March 9 rolled out a policy, dubbed Enabling Modern Software Development and Acquisition Practices, enshrining the revisions. Officials said the measure brings them closer to private-sector expectations, making business simpler and more inclusive.
“We thought this was important to do this now, and issue this policy now, because of how critical software is to the fight right now,” Margaret Boatner, the deputy assistant secretary of the Army for strategy and acquisition reform, told reporters at the Pentagon. “More than ever before, software is actually a national-security imperative.”
Consequences of the policy include: changing the way requirements are written, favoring high-level needs statements and concision over hyper-specific directions; employing alternative acquisition and contracting strategies; reducing duplicative tests and streamlining cybersecurity processes; embracing a sustainment model that recognizes programs can and should be updated; and establishing expert cohorts, such as the prospective Digital Capabilities Contracting Center of Excellence at Aberdeen Proving Ground, Maryland.
While the policy is effective immediately, the different reforms will take different amounts of time to be realized. The contracting center, for example, has several months to get up and running. No additional appropriations are needed to make the transitions, according to Boatner.
Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.
In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.
Above all, it is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.
The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.
OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.
“It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”
The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.
But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn’t the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.
The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI’s CEO.)
But more than anything, OpenAI’s nonprofit status made a statement. “It’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.
In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. “It was a beacon of hope,” says Chip Huyen, a machine learning expert who has closely followed the lab’s journey.
At the intersection of 18th and Folsom Streets in San Francisco, OpenAI’s office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters “PIONEER BUILDING”—the remnants of its bygone owner, the Pioneer Truck Factory—wrap around the corner in faded red paint.
Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I’m restricted to during my visit. I’m forbidden to visit the second and third floors, which house everyone’s desks, several robots, and pretty much everything interesting. When it’s time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We’ve never given someone so much access before,” he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.
Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a “focused, quiet childhood.” He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.
Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It’s easy to appreciate his charisma as a leader. Recounting memorable passages from the books he’s read, he zeroes in on the Valley’s favorite narrative, America’s race to the moon. (“One story I really love is the story of the janitor,” he says, referencing a famous yet probably apocryphal tale. “Kennedy goes up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I’m helping put a man on the moon!’”) There’s also the transcontinental railroad (“It was actually the last megaproject done entirely by hand … a project of immense scale that was totally risky”) and Thomas Edison’s incandescent lightbulb (“A committee of distinguished experts said ‘It’s never gonna work,’ and one year later he shipped”).
Brockman is aware of the gamble OpenAI has taken on—and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It’s the price of daring greatly.
Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small—formed through a tight web of connections—and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.
Musk played no small part in building a collective mythology. “The way he presented it to me was ‘Look, I get it. AGI might be far away, but what if it’s not?’” recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. “‘What if it’s even just a 1% or 0.1% chance that it’s happening in the next five to 10 years? Shouldn’t we think about it very carefully?’ That resonated with me,” he says.
But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn’t clear the team itself knew either. “Our goal right now … is to do the best thing there is to do,” Brockman said. “It’s a little vague.”
Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI’s members. After two years, at Brockman’s request, Daniela joined too. “Imagine—we started with nothing,” Brockman says. “We just had this ideal that we wanted AGI to go well.”
By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
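To get a feel for what a 3.4-month doubling time implies, here is a back-of-the-envelope sketch (added for illustration; the only input taken from the passage above is the doubling period):

```python
# Back-of-the-envelope illustration of a 3.4-month doubling time in training
# compute, the figure cited above. The time horizons chosen are arbitrary.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Multiplicative growth after `months`, given the doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

for months in (12, 24, 60):
    print(f"after {months:>2} months: ~{growth_factor(months):,.0f}x more compute")
# prints roughly 12x, 133x, and ~205,000x respectively
```

At that rate, merely keeping pace with the frontier for a few years means multiplying a lab’s compute budget by factors in the hundreds of thousands, which is the exponential ramp-up Brockman is describing.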
Unbeknownst to the public—and most employees—it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab’s core values but subtly shifted the language to reflect the new reality. Alongside its commitment to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” it also stressed the need for resources. “We anticipate needing to marshal substantial resources to fulfill our mission,” it said, “but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”
“We spent a long time internally iterating with employees to get the whole company bought into a set of principles,” Brockman says. “Things that had to stay invariant even if we changed our structure.”
That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that’s part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).
Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google … but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”
The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. “Can I trust OpenAI?” one question asked. “Yes,” began the answer, followed by a paragraph of explanation.
The charter is the backbone of OpenAI. It serves as the springboard for all the lab’s strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company’s existence. (“By the way,” he clarifies halfway through one recitation, “I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It’s not like I was reading this before the meeting.”)
How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? “As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren’t imaginable today.” How will you structure yourself to evenly distribute AGI? “I think a utility is the best analogy for the vision that we have. But again, it’s all subject to the charter.” How do you compete to reach AGI first without compromising safety? “I think there is absolutely this important balancing act, and our best shot at that is what’s in the charter.”
For Brockman, rigid adherence to the document is what makes OpenAI’s structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn’t mind—in fact, he agrees with the mentality. It’s the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.
In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of “effective altruism.” They crack jokes using machine-learning terminology to describe their lives: “What is your life a function of?” “What are you optimizing for?” “Everything is basically a minmax function.” To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)
But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee’s absorption of the mission. Alongside columns like “engineering expertise” and “research direction” in a spreadsheet tab titled “Unified Technical Ladder,” the last column outlines the culture-related expectations for every level. Level 3: “You understand and internalize the OpenAI charter.” Level 5: “You ensure all projects you and your team-mates work on are consistent with the charter.” Level 7: “You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same.”
The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.
But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.
The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? “It seemed like OpenAI was trying to capitalize off of panic around AI,” says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
By May, OpenAI had revised its stance and announced plans for a “staged release.” Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm’s potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, “no strong evidence of misuse so far.”
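Since the full model is now public, the push-button generation described earlier can be reproduced by anyone. The sketch below is one minimal way to do it, using the Hugging Face transformers wrapper around the released GPT-2 weights (an illustrative choice; OpenAI’s own release shipped as its own codebase rather than this library):

```python
# Minimal sketch: sample a continuation from the publicly released GPT-2
# weights via the Hugging Face `transformers` wrapper (illustrative choice,
# not OpenAI's original release code).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Legolas and Gimli advanced on the orcs, raising their weapons with"
result = generator(prompt, max_length=80, do_sample=True, num_return_sequences=1)
print(result[0]["generated_text"])
```

Prompted with a line in the style of The Lord of the Rings, as in the lab’s early demos, the model continues in the same register.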
Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn’t been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that “safety and security concerns” would gradually oblige the lab to “reduce our traditional publishing in the future.”
This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. “I think that is definitely part of the success-story framing,” said Miles Brundage, a policy research scientist, highlighting something in a Google doc. “The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”
But OpenAI’s media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab’s big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm’s length.
This hasn’t stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind’s AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI’s achievement. I was not compensated for this.)
And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab’s influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: “In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI,” says a line under the “Policy” section. “Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message.” Another, under “Strategy,” reads, “Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to.”
There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
But little did people know this wasn’t the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.
There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.
Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.
Brockman and Sutskever deny that this is their sole strategy, but the lab’s tightly guarded research suggests otherwise. A team called “Foresight” runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab’s all-in, compute-driven strategy is the best approach.
For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn’t know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
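For a rough sense of what such scale-focused experiments involve, here is a toy sketch of a scaling-law fit. Every number in it is invented for illustration and none comes from OpenAI’s work; the point is only the shape of the analysis: measure loss at several compute budgets, fit a power law, and extrapolate what another order of magnitude of compute might buy.

```python
# Toy scaling-law fit: loss ≈ a * C**(-b) over compute budgets C.
# All numbers below are invented for illustration only.
import numpy as np

compute = np.array([1e17, 1e18, 1e19, 1e20, 1e21])  # hypothetical training FLOPs
loss = np.array([4.2, 3.6, 3.1, 2.7, 2.4])          # hypothetical validation losses

slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)  # log-log linear fit
a, b = np.exp(intercept), -slope                                 # so loss = a * C**(-b)

print(f"fitted exponent b ≈ {b:.3f}")
print(f"extrapolated loss at 1e22 FLOPs ≈ {a * 1e22 ** -b:.2f}")
```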
In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was “sniffing around.”
In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. “We expect that safety and security concerns will reduce our traditional publishing in the future,” the section states, “while increasing the importance of sharing safety, policy, and standards research.” The spokesperson also added: “Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild.”
One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.
The man driving OpenAI’s strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.
Amodei divides the lab’s strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor’s “portfolio of bets.” Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.
As in an investor’s portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it’s important to keep an open mind. “Pure language is a direction that the field and even some of us were somewhat skeptical of,” he says. “But now it’s like, ‘Wow, this is really promising.’”
Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun.
The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2’s sentence constructions or a robot’s movements.
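As one deliberately simple illustration of what “exposing the logic” behind a language model’s choices can mean, the sketch below inspects GPT-2’s probability distribution over its next token. This is a generic inspection technique, not a description of OpenAI’s internal explainability tooling, and the prompt is made up.

```python
# Generic illustration of peeking inside a language model: print GPT-2's
# top next-token probabilities for a prompt. Not OpenAI's internal tooling.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tokenizer("The safest way to deploy the system is to", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token position
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    token = tokenizer.decode([int(idx)])
    print(f"{token!r:>12}  p={p.item():.3f}")
```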
Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. “At some point we’re going to build AGI, and by that time I want to feel good about these systems operating in the world,” he says. “Anything where I don’t currently feel good, I create and recruit a team to focus on that thing.”
For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.
“We’re in the awkward position of: we don’t know what AGI looks like,” he says. “We don’t know when it’s going to happen.” Then, with careful self-awareness, he adds: “The mind of any given person is limited. The best thing I’ve found is hiring other safety researchers who often have visions which are different than the natural thing I might’ve thought of. I want that kind of variation and diversity because that’s the only way that you catch everything.”
The thing is, OpenAI actually has little “variation and diversity”—a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk’s startup working on computer-brain interfaces, shares the same building and dining room.
According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. There are also two women on the executive team and the leadership team is 30% women, she said, though she didn’t specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)
In fairness, this lack of diversity is typical in AI. Last year a report from the New York–based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. “There is definitely still a lot of work to be done across academia and industry,” OpenAI’s spokesperson said. “Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program.”
Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York–based company, the city just had too little diversity.
But if diversity is a problem for the AI industry in general, it’s something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.
Nor is it at all clear just how OpenAI plans to “distribute the benefits” of AGI to “all of humanity,” as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited “significant unresolved issues regarding … the way in which it would be implemented.”) “This is my biggest problem with OpenAI,” says a former employee, who spoke on condition of anonymity.
“They are using sophisticated technical practices to try to answer social problems with AI,” echoes Britt Paris of Rutgers. “It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.”
Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. “How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,” he says. “I don’t think that that strategy is likely to succeed.”
The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to “make sure that we are understanding the ramifications.”
Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.
For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.
But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. When Altman privately shared his 2020 vision for the lab with employees, his message was clear: OpenAI needs to make money in order to do research—not the other way around.
This is a hard but necessary trade-off, the leadership has said—one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.
But the truth is that OpenAI faces this trade-off not only because it’s not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy—not because it’s seen as the only way to AGI, but because it seems like the fastest.
Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.
Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn’t omit from this profile. “I guess in my opinion, there’s problems,” she begins hesitantly. “Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out.”
“But to me, it feels like they are doing something a little bit right,” she says. “I got a sense that the folks there are earnestly trying.”
Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn’t think it was possible to “bake ethics in… from the very beginning” when developing AI, he intended it to mean that ethical questions couldn’t be solved from the beginning, not that they couldn’t be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not “on a farm,” but “on a hobby farm.” Brockman considers this distinction important.
In addition, we have clarified that while OpenAI did indeed “shed its nonprofit status,” a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We’ve also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).
President Biden previewed an executive order in his 2022 State of the Union meant to address identity theft and fraud in public benefit programs. As Biden gears up for his 2024 address, the order still hasn’t been released.
Two years after the White House teased an executive order on identity theft in public benefits during the 2022 State of the Union, such an order hasn’t materialized, leaving stakeholders frustrated at the lack of action to address vulnerabilities and prevent fraudsters from siphoning off government money.
“We continue to work in this area very rigorously across government,” Clare Martorana, the federal chief information officer, told Nextgov/FCW at an event this week when asked about the state of the executive order. “This is top of mind for all of us. We want to make sure that we accelerate people’s use of digital [to access government], but safely, securely.”
The order as it was previewed two years ago was said to be focused on preventing fraud in government benefits programs, which spiked during the pandemic, in part due to identity theft. The Government Accountability Office estimated in September that up to $135 billion in unemployment insurance alone went to bad actors during the pandemic.
“We are working to identify a number of actions that we believe will have a positive impact on digital identity and identity verification,” Caitlin Clarke, senior director at the National Security Council, said at an event in January. “We will be looking to have more work in this area coming out soon.”
Clarke noted that identity is often either the culprit or the target in cyber incidents, with bad actors exploiting identity and access management vulnerabilities and targeting identity information to monetize it and commit further fraud and cybercrime.
Specifics remain unclear. The Office of Management and Budget declined to respond to questions for this story about the state of the executive order or reasons for its delay. The White House did not respond to outreach for this story.
Political optics, Login.gov and the ‘abysmal’ lack of progress
A 2022 White House fact sheet designated Gene Sperling, White House senior advisor and American Rescue Plan coordinator, and the Office of Management and Budget as responsible for making recommendations on preventing public benefits fraud to be incorporated into the promised executive order.
According to sources familiar with the process, Sperling has been driving the order’s development. He told reporters last spring that it “should be out soon,” although he noted that getting legal sign-off can be a lengthy process.
The order’s creation has largely been conducted behind closed doors, not open even to other senior White House officials, according to the sources familiar with its development, who added that Sperling has also restricted the cross-agency engagement typically done for policy development.
Identity proofing solutions can be politically touchy, as was evident in early 2022 when the IRS faced bipartisan pushback over its use of facial recognition via vendor ID.me.
Those political optics for any forthcoming order have been on Sperling’s mind, according to sources familiar with development of identity policy. Challenges surrounding the government’s single sign-on service, Login.gov, have also factored into the delays.
Although 47 agencies and states use Login.gov, some agencies have been reluctant to do so.
The IRS, for example, still hasn’t added the service as a gateway to the agency’s online accounts — even after it said it planned to do so in 2022 — as IRS tech officials have hesitated to trust Login.gov’s security features.
A draft of the executive order from early 2023 indicated the White House planned to give a leading role to Login.gov.
But weeks later, the General Services Administration’s Login.gov team landed in hot water for not meeting government digital identity standards, due to the service’s lack of facial recognition capabilities, and for misleading agencies about its compliance. GSA has since announced it would add facial recognition capabilities to Login.gov.
It’s unclear whether and how the focus of any potential executive order has shifted since.
“Meanwhile, the problems tied to identity theft and identity-related cybercrime continue to compound,” said Jeremy Grant, former senior executive advisor for identity management at the National Institute of Standards and Technology, who now runs the Better Identity Coalition, a trade association focused on digital identity issues. “The lack of progress here has been abysmal.”
“The lack of any easy, privacy-preserving way for Americans to protect their identity online is being actively exploited by organized criminals and hostile nation-states like China, Russia and North Korea,” said Grant.
The Better Identity Coalition wants the government to push for identity proofing systems, including mobile driver’s licenses, and to have more agencies provide attribute validation services that can be used to triangulate whether someone is who they say they are on the internet. Others say that the government also needs to better help victims of identity theft.
The White House’s own 2023 cybersecurity strategy included action items to invest in digital identity solutions and update related standards. But the strategy’s implementation plan left out digital identity.
“Weaknesses in [identity] are a threat to our national and economic security,” said Carole House — formerly a director of cybersecurity and secure digital innovation at NSC who now works at Terranet Ventures — who also spoke at the January event.
“It’s a major vulnerability that demands coordinated, strategic action led by the White House,” she said. “Yet we’ve seen no evidence of coordinated architecture and structure for the kinds of efforts that are needed to create the fabric for the future of a digital economy.”
FORT MEADE, Md. – The National Security Agency (NSA) is releasing “Top Ten Cloud Security Mitigation Strategies” to inform cloud customers about important security practices as they shift their data to cloud environments. The report is a compilation of ten Cybersecurity Information Sheets (CSIs), each on a different strategy. The Cybersecurity and Infrastructure Security Agency (CISA) joins NSA as a partner on six of the ten strategies.
Each of the ten strategies is covered in its own CSI.
“Using the cloud can make IT more efficient and more secure, but only if it is implemented right,” said Rob Joyce, NSA’s Director of Cybersecurity. “Unfortunately, the aggregation of critical data makes cloud services an attractive target for adversaries. This series provides foundational advice every cloud customer should follow to ensure they don’t become a victim.”
The CSI for each strategy includes an executive summary providing background information and details about threat models. Additionally, each CSI concludes with best practices and additional guidance.
The PPBE Reform Commission has concluded that a new approach to defense resourcing is required to better maintain the security of the American people. Today, we are pleased to release our Final Report, which makes 28 recommendations critical to establishing a new Defense Resourcing System and advancing reforms to the United States Department of Defense’s current PPBE process.
You can access our Final Report and associated fact sheet on our website here: https://lnkd.in/eRAcMhpn.
NEWS RELEASE
MARCH 6, 2024
The Commission on Planning, Programming, Budgeting, and Execution (PPBE) Reform is pleased to release a Final Report with 28 recommendations critical to reforming Department of Defense (DoD) resourcing processes to meet the demands of the current security environment. These recommendations affect all aspects of the current PPBE process, from strategy and planning to execution.
The Honorable Robert Hale, Commission Chair, expressed appreciation for the significant amount of work and input contributing to this Final Report. “We conducted more than 400 interviews with 1100 interviewees and also engaged in extensive research. We are grateful for the contributions from Congress, the Department of Defense, industry, academia, and all who helped the Commission. We believe that the 28 recommendations in our Final Report will bring about meaningful reforms, and we could not have accomplished our work over the last 24 months without extensive input from across the PPBE ecosystem.”
As a result of its research and interviews, the Commission on PPBE Reform recommends that the DoD adopt a new resourcing system. The Defense Resourcing System preserves the strengths of the current PPBE process, while also better aligning strategy with resource allocation and allowing the DoD to respond more effectively to emerging threats and technological advances.
The Honorable Ellen Lord, Commission Vice Chair, addresses the extensive impacts of the Commission’s recommended reforms. “Our consensus recommendations represent a wide range of reforms ranging from tactical to transformational. They include business system improvements and changes that strengthen the resourcing workforce. Each will help the Department and Congress work together to deliver improved capabilities to the warfighter faster. Some of these reforms will take substantial work and time to implement, but they are worth it—we cannot continue with the status quo and must outpace strategic competitors.”
Congress established the Commission on PPBE Reform in the National Defense Authorization Act for Fiscal Year 2022 to conduct a comprehensive assessment of all four phases of the PPBE process, with a specific focus on budgetary processes affecting defense modernization. The Commission will continue engagement on recommendations until its conclusion in August 2024.
One reason the Defense Department can’t get to a clean financial audit has to do with its multiple and outdated financial management systems. The DoD does have a plan to modernize the systems, but the Office of Inspector General (OIG) finds a little trouble with how officials are going about it. For the latest, the Federal Drive with Tom Temin talked with OIG project manager Chris Hilton and Shelby Barnes.
Interview Transcript:
Tom Temin And fair to say, this was an audit, not so much of DoD finances, but of the systems that make up the financial network there and of their plans to modernize it. That a good way to put it?
Shelby Barnes Yes. So I think that’s a great way to summarize what this audit was. We focused on the DoD financial systems specifically. We reviewed the systems that were subject to the Federal Financial Management Improvement Act. Essentially, this is a law that requires that systems capture data and record transactions properly. And the DoD has established goals to, as you said, modernize its systems environment and to update its systems or stop using some of its old systems by 2028. However, what we found in our audit was that goal wasn’t aggressive enough. And without a more modern systems environment, we found that the DoD will just continue to spend a lot of money on systems that don’t record those transactions properly.
Tom Temin And just to define the scope of this, it’s not just the Pentagon and the fourth estate agencies, but does this also include the armed forces and their often multiple financial systems?
Chris Hilton Yes. It definitely includes all of those systems and all those parts and pieces of the DoD. We looked at basically any plans related to maintaining the DoD’s IT system environment and how they impact the DoD financial statements. By the numbers, the DoD’s IT environment contains over 400 systems and applications and over 2,000 interfaces. This complex environment contributes to many of the DoD’s challenges.
Tom Temin Right. And it’s not simply the multiplicity of them, but in some cases the age of them and the fact that they can’t interoperate with one another. Fair to say?
Chris Hilton That is absolutely correct. I think some of the systems that the DoD still uses today are from the 1950s, 1960s and 1970s. Obviously, they weren’t necessarily intended to produce financial statements. That’s a newer requirement. So those are some of the challenges that the department is dealing with.
Tom Temin Right. Because in the 1950s and 1960s, they could count the beans, so to speak, but they don’t meet what are considered contemporary standards for financial systems.
Chris Hilton Correct.
Shelby Barnes Yes. That’s correct.
Tom Temin Plus, there’s a certain cost in maintaining these old systems, and the multiplicity is a cost multiplier itself. Fair to say?
Chris Hilton That is fair to say. One of the highlights in our report is that the DoD maintains 37 purchasing systems across all its components. And obviously, if you have a problem across 37 systems, you need 37 corrective actions, so that does present significant challenges for the department.
Tom Temin Right. And you mentioned they have 400 systems with 200 interfaces. So that’s even beyond the purchasing systems.
Chris Hilton 2,000 interfaces. I wish it was 200.
Tom Temin Yeah, I didn’t write the third one down on my sheet here. OK, so we’ve got the full scope of that. And let’s talk about the scope of the plan. That is to say, what do they hope to do by 2028 at this point? What’s their vision for all of this?
Shelby Barnes Yeah. So that’s actually one of the things that we identified within our audit that wasn’t particularly clear. The DoD has multiple plans, all of which focus on a simplified systems environment. That is the department’s goal. But what we found was that the plans didn’t clarify which systems the DoD plans to keep and which it plans to retire between now and 2028. So that was one of the things we highlighted within our report: the DoD does need to clarify which systems it plans to update and modernize and which it needs to stop using. And we recommended that it stop using them as swiftly as possible.
Tom Temin Right. It sounds, therefore, like the plan is more of a guide to a future vision than a detailed modernization plan.
Shelby Barnes Yes, I would say that’s exactly what we found within our audit.
Tom Temin We’re speaking with Shelby Barnes and Chris Hilton. They are project managers in the Office of Inspector General at the Defense Department. And did you find that they’re putting sufficient resources against this modernization effort? And is it in the right place? That is, is it a CIO project? Is it a CFO project? Or is it across different boundaries?
Chris Hilton I would say they are definitely putting a lot of resources into this area. Our audit found that the DoD spent approximately $4 billion in 2022 on these financial management systems. And that’s one of the challenges we identified: you’re spending so much in the current year on systems that aren’t going to get you where you want to go. If the department does things as swiftly as possible, like Shelby mentioned, that will get it to a much better place.
Tom Temin I mean, is there a strategy to, say, take one of the armed services, for example, or something like the [Defense Information Systems Agency (DISA)], which is a large component agency, and just consolidate within that component, which would maybe eliminate dozens of systems, and then try to get the Air Force and the Army and DISA together? I’m just making that up, but that idea.
Shelby Barnes There definitely are goals at the level you mentioned; the Army, Navy and Air Force all have their own goals. But the plans that we were looking at were for the entire DoD. So what you’re describing definitely exists at the individual component level. Our review just looked at the entire DoD level: was the plan detailed enough to get the department where it wants to go?
Chris Hilton I would also add that there are significant initiatives to move the department in the right direction, and there are indications that it’s doing so. For example, the U.S. Marine Corps transitioned to a modern ERP in an effort to attain a clean audit opinion. So there is definitely traction there. As for whether this is a CIO challenge or a CFO challenge or a military department challenge, it’s really a team effort. That is one thing Mr. Stephens, the deputy chief financial officer, has really focused on: the DoD is not going to get across the finish line without everyone pushing in the same direction. So that has been a laser focus of the department, a team effort both horizontally across the CIO and CFO and vertically down to the components and up to the DoD.
Tom Temin And what were your major recommendations then?
Shelby Barnes So one of the most significant recommendations that we made was for the department to create a strategy in which it determines, for each of its systems, whether it is going to update that system or retire it and stop using it. Essentially, we believe that strategy is important because the DoD really needs to wrap its arms around what it has now, determine what is going to remain, and get those systems updated so that they can start producing good and reliable data.
Tom Temin And these financial systems, are they a subset of the business systems that comprise the DoD? Because they’ve had several runs at business system modernization over the years, at least the 20 years I’ve been looking at it closely. There have been several gambits to try to get their arms around the business systems. Are the financial systems a subset here?
Chris Hilton Yeah. There are actually approximately 4,600 DoD IT systems, and only about 5% of them currently fall in the category of financial management systems. So it’s actually quite a small subset of the bigger DoD system environment. And obviously, the DoD getting its arms around that environment is needed to produce good financial data and hopefully obtain an audit opinion.
Tom Temin And in general, on the plan they have, which doesn’t have the detail that you feel they need, but their plan out to 2028, is this basically an in-house effort, or do they have integrator support and programmer contractor support?
Chris Hilton It’s kind of a mixed bag. I mean, obviously there’s a lot of contractor support in this effort. So it is diverse, I guess, in how they’re addressing the issue.
Tom Temin All right. And would you say that this is an urgent set of recommendations, this audit and this publication?
Shelby Barnes I would say yes. We feel that this audit report and these recommendations are really imperative. We know that the DoD is working very hard and putting a lot of resources toward modernizing its systems. But we feel that some of the recommendations within this report are really going to put the department on the right track to modernize its system environment, maybe more quickly, and that has a direct impact on so many things operationally, and then also on the financial statements.
Tom Temin And your memorandum went to the secretary, the deputy secretary, the undersecretary, the comptroller, the CIO, the auditors, and so on of the different armed services. They know they’ve got a problem, fair to say?
Chris Hilton That’s fair to say.
Tom Temin And did they generally concur with your recommendations?
Chris Hilton Yes. Actually, we had 31 recommendations, quite a few. They concurred with all but one, and on the one that they didn’t concur with, we did ask for further comments, and I think we’re headed in the right direction with that one as well. So they know it’s a problem. One thing we did find during our audit was that there are already a lot of efforts underway. We’re just making sure that they’re best positioned to maintain systems that produce good data, use taxpayer dollars efficiently, and, like Shelby said, obtain an audit opinion by 2028.
Tom Temin And in the meantime, we could use a few years without continuing resolutions. That might help.
The Justice Department’s Antitrust Division, Federal Trade Commission and Department of Health and Human Services jointly launched a cross-government public inquiry into private-equity and other corporations’ increasing control over health care.
Private equity firms and other corporate owners are increasingly involved in health care system transactions, and, at times, those transactions may prioritize maximizing profits at the expense of quality care. The cross-government inquiry seeks to understand how certain health care market transactions may increase consolidation and generate profits for firms while threatening patients’ health, workers’ safety, quality of care, and affordable health care for patients and taxpayers.
All market participants — including patients, consumer advocates, doctors, nurses, health care providers and administrators, employers, insurers and more — are invited to share their comments in response to the Request for Information. The agencies seek comments on a variety of transactions, including those involving dialysis clinics, nursing homes, hospice providers, primary care providers, hospitals, home health agencies, home- and community-based services providers, behavioral health providers, as well as billing and collections services.