healthcarereimagined

Envisioning healthcare for the 21st century

Calling all student leaders! IBM Quantum is looking for more teams to participate in the Qiskit Fall Fest.

Posted by timmreardon on 08/26/2023
Posted in: Uncategorized.

The Qiskit Fall Fest is a collection of quantum computing events on university campuses worldwide. We’re partnering with students from 30+ different schools to help plan and support quantum computing and Qiskit-themed events this fall. At last year’s Qiskit Fall Fest, we saw student leaders organize overnight hackathons, workshops for hundreds of students, social events at local museums, coding competitions, and more. This year we have a few spots left, and we’re opening them up to you!

For those of you who may be a little too busy to host large-scale hackathons or workshops, we’re also offering smaller activities that you can do in small groups to build your skills.
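
For groups that want a concrete starting point, here is a minimal, hedged sketch (not an official Fall Fest exercise) of the kind of small Qiskit warm-up a study group might try: building and drawing a two-qubit Bell-state circuit. It assumes only that the open-source Qiskit package is installed.

```python
# Minimal Qiskit warm-up: build and draw a two-qubit Bell-state circuit.
# Assumes Qiskit is installed (pip install qiskit); uses only the stable
# QuantumCircuit interface.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # measure both qubits

print(qc.draw())            # ASCII diagram of the circuit
```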

If you’re a student and this sounds like something you’d like to participate in, come to a special info session on Wednesday, August 30th, at 10am ET. Follow this link to learn more: https://ibm.co/45K2bNy

We’ll see you this Fall!

Article link: https://www.linkedin.com/posts/ibm-quantum_calling-all-student-leaders-ibm-quantum-activity-7100839407707934721-aMKV?

Navy says it’s achieved big UX improvements amid DoD effort to ‘fix our computers’ – Federal News Network

Posted by timmreardon on 08/25/2023
Posted in: Uncategorized.

Jared Serbu | @jserbuWFED

August 24, 2023 7:50 am

Up until this summer, it wasn’t uncommon for Navy IT users, even at the most senior ranks in the Pentagon, to plan part of their mornings around the 10 minutes it took for their computers to boot. But as part of a concerted effort to improve user experience, the service has shown it’s possible to cut those maddening daily waits to only about 30 seconds.

The dramatic improvements are part of a broader push across the Defense Department to improve user experience — spurred in part by a viral social media post that implored Defense officials to “fix our computers” — a Defense Business Board study that found 80% of employees are deeply dissatisfied with government IT, and direction from the deputy Defense secretary to start solving the problem.

Although the Navy’s efforts are still only in pilot stages that began with relatively small populations of users inside the Pentagon, officials believe they’ve learned enough about the root causes to start making bigger changes that will improve the average sailor or civilian’s experience at bases around the world within the year.

“What we did at the Pentagon early on was we started a playbook so that we could crowdsource this to other bases who were volunteering, and who were leaning forward,” Justin Fanelli, the Department of the Navy’s acting chief technology officer, said during an extended interview about the UX improvement effort on Federal News Network’s On DoD. “The volunteer queue has grown — in some cases they’ve started proactively, and in some cases they’re begging … a couple of the folks in the pilot have said, ‘Please, let’s scale this and not end it. If I have to switch back from this IT experience to what I had before, they would have to take this computer out of my clutching hands.’”

Fanelli said the UX improvements the Navy hopes to make in the coming months and years will have to be multi-pronged — the department knew at the outset that there wouldn’t be a single answer to its users’ frustrations, given the worldwide diversity in how those employees connect to networks and other local conditions.

But one thing the pilots have proven out is that, at least in most cases, fixing the computers isn’t about the hardware capabilities of the laptops and desktops themselves. The challenges have a lot more to do with bandwidth at individual worksites, and with software bloat on those endpoints — a years-long accretion of things like duplicative security and management tools that bog down otherwise-capable computers.

“Hardware refreshes have helped, but in more cases, the issue is the sprawl of software without necessarily one owner on top of all of it,” Fanelli said. “So we worked on a new operating system baseline. We had three different groups — two outside of the Department of the Navy and one inside of the Navy — and we shark tanked whose image of the operating system was highest-performance. On the winner, we’re regularly seeing over 18x improvement on boot times. And we now get emails from E-3s and admirals alike saying, ‘Wow, this is much, much better.’ And that’s something that we want to scale to everyone as soon as possible.”

Some of the network-related challenges will take longer, particularly on bases that still use decades-old copper infrastructure and technologies like time-division multiplexing in the “last mile” between fast fiber networks and office spaces.

“For new construction, it’s a no-brainer to use newer technologies. We’ve piloted and we’ve gotten smarter on how to apply them on military bases in the last six months,” Fanelli said. “And for sites that are maybe seven years old, we’re pretty confident that you can remediate that through configuration as opposed to rewiring. But the 20-years-old-plus site rewiring [will have to be] part of a normal cycle. This is the Golden Gate Bridge of upgrades — you’re always doing some upgrade everywhere. But that normal cycle and figuring out how to do that differently, and cheaper, has been one that we’ve learned on … I wouldn’t say that the goal is to overhaul all transport by any means. It’s hooking up to the right solution for the right problem.”

One thing that’s helping the Navy figure out the right solution to the right problems is a drastic increase in proactive monitoring on individual IT endpoints.

For example, those figures about improvements in boot time aren’t just anecdotal or guesswork. They’re based on real-world metrics the service is gathering from performance measurement tools that are now installed on a sample of desktops and laptops at every base. The Navy can now gather data on what the user experience is like on 27,000 individual endpoints — up from just 200 at the start of the pilots.

“That takes us to a sample of about 6%, and it tells us how desktops at each site, each echelon, each systems command are performing,” Fanelli said. “It moves us out of being reactive, where we only have enough information to troubleshoot problems. Now we know who’s going to call before they call, and in some cases, we’ve solved the problem before they knew to call the help desk. Those are the real success stories that we’re after. We want as many people as possible to not have to think about their IT on a daily basis.”
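
As an illustration of what that kind of proactive endpoint monitoring can look like in practice, here is a hedged sketch, not the Navy’s actual tooling: it aggregates hypothetical boot-time telemetry by site and flags machines likely to generate help-desk calls. The record fields and the 120-second threshold are assumptions for the example.

```python
# Hedged sketch (not the Navy's tooling): aggregate hypothetical boot-time
# telemetry per site and flag endpoints likely to produce help-desk calls
# before the user picks up the phone.
from statistics import median

# Hypothetical records: (site, endpoint_id, boot_seconds)
telemetry = [
    ("Pentagon", "PC-0001", 28),
    ("Pentagon", "PC-0002", 31),
    ("Norfolk",  "PC-1138", 540),   # ~9 minutes: a likely complaint
    ("Norfolk",  "PC-1139", 45),
]

BOOT_SLA_SECONDS = 120  # assumed threshold, not an official figure

by_site = {}
for site, endpoint, seconds in telemetry:
    by_site.setdefault(site, []).append((endpoint, seconds))

for site, readings in by_site.items():
    times = [s for _, s in readings]
    slow = [(e, s) for e, s in readings if s > BOOT_SLA_SECONDS]
    print(f"{site}: median boot {median(times)}s, "
          f"{len(slow)} endpoint(s) over the {BOOT_SLA_SECONDS}s threshold")
    for endpoint, seconds in slow:
        print(f"  proactive ticket candidate: {endpoint} ({seconds}s)")
```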

Apart from the sheer number of UX-focused pilots the Navy has been conducting — there have been more than 20 this year — another reason the Navy’s been able to find fixes relatively quickly is that it’s been building on work already done in other parts of DoD.

The Air Force, in particular, helped the Navy start “on second base,” Fanelli said.

In a Medium post on Tuesday, Colt Whittall, the Air Force’s chief experience officer, noted that his service has also been aggressively monitoring end-device performance, and has seen similar results by proactively solving problems, surveying users, replacing outdated hardware and taking several other steps focused on UX improvement.

“In 2020 and 2021, dissatisfied users outnumbered satisfied users. In 2023 that’s reversed. Satisfied users outnumber dissatisfied users about two to one,” he wrote. “That is extraordinary progress for that period of time. Activity Response Time of Outlook, a key metric we follow, has improved significantly on the vast majority of bases, often by 50% or more.”

And Fanelli emphasized the effort to “fix our computers” is very much a joint effort, including via regular meetings and conversations with DoD’s chief digital and artificial intelligence office — where, by the way, the author of one of those viral social posts now works.

“The difference has been the number of folks who are hungry for change coming to the table, being willing to lean forward, working hard when no one’s looking, and we’re receiving outcomes in spades as a result of that,” he said. “If there are hungry people who want to continue to engage in this fight, we’re looking for civil servants who want to make things happen. We’re looking for vendors and partners with a bias towards action, and we’re going to go hard until our warfighters are happy.”

Article link: https://federalnewsnetwork.com/on-dod/2023/08/navy-says-its-achieved-big-ux-improvements-amid-dod-effort-to-fix-our-computers/?readmore=1

New Army CIO wants to trade bureaucracy for speedier modernization – Defense News

Posted by timmreardon on 08/24/2023
Posted in: Uncategorized.

AUGUSTA, Ga. — Leonel Garciga, the U.S. Army’s new chief information officer, is known as a “bureaucracy hacker” in some circles.

With such a moniker, indicating his dislike for red tape, come expectations. And one month into the role, he indicated he’s ready to sidestep outdated or unwieldy policy for much-needed modernization.

“We’ve got to move fast, right? We have to be able to adapt. We cannot be stuck with the bureaucracy,” Garciga said at the AFCEA TechNet Augusta conference in Georgia on Aug. 16. “I live for people telling me why I can’t do something that’s written down, or that I’m already allowed to do, because of an interpretation.”

Garciga was named CIO in July. He succeeds Raj Iyer, who after nearly three years atop the Army information-technology behemoth rejoined private industry.

Garciga previously served as the top tech officer for Army intelligence and spent years at the Department of Defense’s improvised explosive device research arm. He is also a Navy veteran.

“We’ve got a lot — a lot — of folks in critical positions right now that are all about hacking that bureaucracy and not allowing, in some cases, decades of practice remain that don’t need to,” Garciga said at the conference. “As an intel guy, when I was at acquisition and sustainment, when I was at DoD CIO, I never ran into a real thing where the policy said I couldn’t do it.”

The Defense Department has long been chided for its slow-to-adapt nature. Garciga said he plans to slash bureaucratic bloat and get “a lot better” at delivering on promises already made.

The Army, the military’s largest service, is pushing what it calls digital transformation: the phasing in of new technologies, connectivity and online practices. The service in fiscal 2023 sought $16.6 billion in cyber and IT funding, or roughly 10% of the overall budget blueprint.

“Cloud? Let’s run as fast as we can, let’s learn as fast as we can. Defensive cyber? Let’s move as fast as we can, learn as fast as we can,” he said. “We’re in, kind of, the next stage. A little bit of the foundation is in place, and now we’ve got to pick up all the pieces.”

About Colin Demarest

Colin Demarest is a reporter at C4ISRNET, where he covers military networks, cyber and IT. Colin previously covered the Department of Energy and its National Nuclear Security Administration — namely Cold War cleanup and nuclear weapons development — for a daily newspaper in South Carolina. Colin is also an award-winning photographer.

Article link: https://www.defensenews.com/battlefield-tech/it-networks/2023/08/17/new-army-cio-wants-to-trade-bureaucracy-for-speedier-modernization/?

Five Reasons Software Is Eclipsing Hardware In Pentagon Technology Plans – Forbes

Posted by timmreardon on 08/23/2023
Posted in: Uncategorized.

Loren Thompson, Senior Contributor

I write about national security, especially its business dimensions.

Aug 14, 2023, 11:02am EDT

On August 11, the editors of Aviation Week & Space Technology posted a podcast with the provocative title, “Why Amazon Could Be The Next Big Defense Prime.” The discussion wasn’t so much about Amazon as about how software-driven projects increasingly are shaping military modernization plans.

AvWeek’s Chief Technology Editor, Graham Warwick, noted in the exchange that the Air Force’s next-generation fighter will be largely defined by its software, without which the aircraft would be unable to meet its performance requirements.

Some observers believe that the evolution of combat aircraft is inexorably progressing towards a future in which human pilots will no longer be part of the design, and software will enable every facet of aircraft operations—presumably with speed and precision that no human operator could match.

It is easy to overstate the degree to which software is eclipsing hardware in current military plans. What good is the source code without the plane? However, the Pentagon’s latest software modernization strategy, released last year, expresses what looks to be the conventional wisdom in asserting “software increasingly defines military capabilities.”

Viewed from this perspective, the Pentagon’s recent embrace of artificial intelligence is just the latest chapter in a long-running trend that is concentrating military power in algorithms and code rather than human hands.

Pentagon officials insist that life-and-death decisions will never be turned over to machines, but the reality is that if adversaries like China follow the same course, at some point the only alternative to automating warfare may be to accept defeat.

Here are five reasons why software increasingly dominates the thinking of military visionaries, sometimes to the exclusion of traditional industrial (and human) processes.

Software enhances the performance of hardware. The electronic content of warfighting systems has been growing for generations, and with the advent of the digital revolution, that content increasingly includes high-power computers that run applications software for onboard systems.

The result has been huge gains in functionality. Don’t take my word for it, just compare the performance of your current iPhone with the cellphone you used ten years ago. The incredible versatility concentrated in this compact device is largely enabled by software, and supported by a network that is itself software-driven.

A similar dynamic applies to military technology. The 75 improvements that will be integrated in the next round of F-35 fighter upgrades will depend on agile software running on more powerful computers. The same is true of ongoing upgrades to the Aegis combat system on destroyers—without the underlying software, they would be literally impossible.

Software takes less time to develop. Today’s military software can contain millions of lines of code, but it is easier to develop and field than hardware. It typically consists of modular building blocks constructed according to open architecture principles, and much of the code is itself generated using software. In other words, the generation of software is automated in a way that construction of fighters or warships cannot be.

A senior shipbuilding executive involved in the construction of warships once remarked to me that the state of play in his yards reflected decisions made by Congress seven years earlier. That tells you something about how long it takes to build complex military hardware. The Air Force’s F-35 became operational 15 years after the contract was awarded.

Software generally is developed and fielded according to more compressed timelines. In fact, all the steps from design to development to testing to installation can be accomplished in a fraction of the time required for new hardware. So, the acquisition system naturally defaults to software as the preferred way of upgrading capabilities.

Software is less expensive to implement. Developing and producing new military hardware involves big investments in capital equipment and the creation of articulated supply chains. A skilled workforce must be trained to integrate components unique to a particular program.

Such challenges are not unheard of in generating software, but they usually require far fewer financial resources to overcome. One reason is that coding of software often is fungible across diverse applications and industries—hence AvWeek’s notion that Amazon skills might be useful in advancing military capabilities. The Pentagon’s search for commercial technologies relevant to warfighting is grounded largely in leveraging private-sector software skills for new uses.

Software can take the place of costly personnel. Replacing humans with software may raise ethical concerns for the warfighting profession, but it potentially has big budgetary benefits. Retired Major General Arnold Punaro, a legendary Washington insider, figures that the fully loaded cost of a single soldier in the All-Volunteer Force is $400,000 annually. Even at that steep price, the services are having trouble attracting new recruits.

Many military jobs can be performed more cost-effectively by using software in imaginative ways. That is particularly true with the advent of artificial intelligence programs using deep-learning processes. With the federal government spending a trillion dollars more than it takes in each year, the fiscal appeal of substituting software for people will become increasingly attractive—and not just in the military.

Software lowers barriers to entry. Policymakers frequently complain that high barriers to entry in the defense industry limit options for introducing new products and processes. Greater reliance on software potentially ameliorates this problem, because there are hundreds of successful commercial software firms that can apply their skills to military tasks. Even when they perform as subcontractors to traditional primes, such firms can stimulate the adoption of new ideas.

The above five considerations barely scratch the surface of reasons why software is eclipsing hardware in military technology plans. As Graham Warwick points out in the August 11 podcast, even when the subject is hardware, the underlying processes (like prototyping) are increasingly software-driven. The digital revolution is transforming the technological landscape, and agile software has become the coin of the realm.

Article link: https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/lorenthompson/2023/08/14/five-reasons-software-is-eclipsing-hardware-in-pentagon-technology-plans/amp/

New post-quantum cryptography guidance offers first steps toward migration – Nextgov

Posted by timmreardon on 08/22/2023
Posted in: Uncategorized.

By Alexandra Kelley | August 21, 2023 05:31 PM ET

Several agencies partnered to release the first federal recommendations for organizations to begin upgrading their networks and systems to quantum-resistant security schemes.

Federal agencies are leading the charge to usher in the shift to post-quantum cryptography standards, releasing an authoritative fact sheet Monday on PQC standards and the impact of quantum information technologies.

The Quantum-Readiness: Migration to Post-Quantum Cryptography fact sheet — co-released by the Cybersecurity and Infrastructure Security Agency, the National Security Agency and the National Institute of Standards and Technology — lays out a roadmap for public and private entities to use as quantum computing technologies continue to advance and potentially threaten the standard cryptographic schemes that safeguard modern digital infrastructure.

Among the recommendations included in the roadmap is an emphasis on close communication between organizations and technology vendors — a reflection of the Biden administration’s goal of fostering better partnerships between the public and private sectors.

“It is imperative for all organizations, especially critical infrastructure, to begin preparing now for migration to post-quantum cryptography,” said CISA Director Jen Easterly in a statement. “CISA will continue to work with our federal and industry partners to unify and drive efforts to address threats posed by quantum computing. Our collective aim is to ensure that public and private sector organizations have the resources and capabilities necessary to effectively prepare and manage this transition.”

Upgrading today’s cryptography to withstand a potential attack from a fault-tolerant quantum computer is a lengthy undertaking. In their new document, CISA and NIST recommend that organizations begin by analyzing which parts of their network systems and assets rely on quantum-vulnerable cryptography; that is, which network components create and validate security measures like digital signatures.

“Having an inventory of quantum-vulnerable systems and assets enables an organization to begin the quantum risk assessment processes, demonstrating the prioritization of migration,” the document says. 
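
As a concrete illustration of that inventory step, here is a hedged sketch (not part of the agencies’ guidance) that scans a directory of PEM certificates with the open-source Python cryptography package and flags public keys based on quantum-vulnerable algorithms such as RSA, ECC, and DSA. The ./certs directory and the MIGRATE/review labels are assumptions for the example.

```python
# Hedged sketch of a quantum-vulnerable cryptography inventory: scan a
# folder of PEM certificates and flag those whose public keys rely on
# RSA, elliptic-curve, or DSA algorithms. Paths and labels are assumed.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec, dsa

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey, dsa.DSAPublicKey)

def inventory(cert_dir: str) -> list[tuple[str, str, bool]]:
    findings = []
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        vulnerable = isinstance(key, QUANTUM_VULNERABLE)
        findings.append((pem.name, type(key).__name__, vulnerable))
    return findings

if __name__ == "__main__":
    for name, algo, vulnerable in inventory("./certs"):   # assumed directory
        flag = "MIGRATE" if vulnerable else "review"
        print(f"{flag:8} {name}: {algo}")
```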

The agencies emphasize technology vendors’ role in facilitating migration. Given that PQC migration will involve software and occasional firmware updates, the document prompts vendors to chart their own migration timelines to ensure successful product integration.

“The authoring agencies also urge organizations to proactively plan for necessary changes to existing and future contracts,” the document says. “Considerations should be in place ensuring that new products will be delivered with PQC built-in, and older products will be upgraded with PQC to meet transition timelines.”

This new guidance supports the timeline imposed by a Biden administration memorandum, which requests that government agencies modernize their networks to PQC standards by the year 2035.

“Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers,” said Rob Joyce, director of NSA Cybersecurity. “The transition to a secured quantum computing era is a long-term intensive community effort that will require extensive collaboration between government and industry. The key is to be on this journey today and not wait until the last minute.”

Article link: https://www.nextgov.com/emerging-tech/2023/08/new-post-quantum-cryptography-guidance-offers-first-steps-toward-migration/389595/

Proof That Positive Work Cultures Are More Productive – HBR

Posted by timmreardon on 08/20/2023
Posted in: Uncategorized.
  • Emma Seppälä
  • Kim Cameron

December 01, 2015

Too many companies bet on having a cut-throat, high-pressure, take-no-prisoners culture to drive their financial success.

But a large and growing body of research on positive organizational psychology demonstrates that not only is a cut-throat environment harmful to productivity over time, but that a positive environment will lead to dramatic benefits for employers, employees, and the bottom line.

Although there’s an assumption that stress and pressure push employees to perform more, better, and faster, what cutthroat organizations fail to recognize is the hidden costs incurred.

First, health care expenditures at high-pressure companies are nearly 50% greater than at other organizations. The American Psychological Association estimates that more than $500 billion is siphoned off from the U.S. economy because of workplace stress, and 550 million workdays are lost each year due to stress on the job. Sixty percent to 80% of workplace accidents are attributed to stress, and it’s estimated that more than 80% of doctor visits are due to stress. Workplace stress has been linked to health problems ranging from metabolic syndrome to cardiovascular disease and mortality.

The stress of belonging to hierarchies itself is linked to disease and death. One study showed that, the lower someone’s rank in a hierarchy, the higher their chances of cardiovascular disease and death from heart attacks. In a large-scale study of over 3,000 employees conducted by Anna Nyberg at the Karolinska Institute, results showed a strong link between leadership behavior and heart disease in employees. Stress-producing bosses are literally bad for the heart.

Second is the cost of disengagement. While a cut-throat environment and a culture of fear can ensure engagement (and sometimes even excitement) for some time, research suggests that the inevitable stress it creates will likely lead to disengagement over the long term. Engagement in work — which is associated with feeling valued, secure, supported, and respected — is generally negatively associated with a high-stress, cut-throat culture.

And disengagement is costly. In studies by the Queens School of Business and by the Gallup Organization, disengaged workers had 37% higher absenteeism, 49% more accidents, and 60% more errors and defects. Organizations with low employee engagement scores experienced 18% lower productivity, 16% lower profitability, 37% lower job growth, and 65% lower share price over time. Importantly, businesses with highly engaged employees enjoyed 100% more job applications.

Lack of loyalty is a third cost. Research shows that workplace stress leads to an increase of almost 50% in voluntary turnover. People go on the job market, decline promotions, or resign. And the turnover costs associated with recruiting, training, lowered productivity, lost expertise, and so forth are significant. The Center for American Progress estimates that replacing a single employee costs approximately 20% of that employee’s salary.

For these reasons, many companies have established a wide variety of perks, from working from home to office gyms. However, these companies still fail to take the research into account. A Gallup poll showed that, even when workplaces offered benefits such as flextime and work-from-home opportunities, engagement predicted wellbeing above and beyond anything else. Employees prefer workplace wellbeing to material benefits.

Wellbeing comes from one place, and one place only — a positive culture.

Creating a positive and healthy culture for your team rests on a few major principles. Our own research (see here and here) on the qualities of a positive workplace culture boils down to six essential characteristics:

  • Caring for, being interested in, and maintaining responsibility for colleagues as friends.
  • Providing support for one another, including offering kindness and compassion when others are struggling.
  • Avoiding blame and forgiving mistakes.
  • Inspiring one another at work.
  • Emphasizing the meaningfulness of the work.
  • Treating one another with respect, gratitude, trust, and integrity.

As a boss, how can you foster these principles? The research points to four steps to try:

1. Foster social connections. A large number of empirical studies confirm that positive social connections at work produce highly desirable results. For example, people get sick less often, recover twice as fast from surgery, experience less depression, learn faster and remember longer, tolerate pain and discomfort better, display more mental acuity, and perform better on the job. Conversely, research by Sarah Pressman at the University of California, Irvine, found that the probability of dying early is 20% higher for obese people, 30% higher for excessive drinkers, 50% higher for smokers, but a whopping 70% higher for people with poor social relationships. Toxic, stress-filled workplaces affect social relationships and, consequently, life expectancy.

2. Show empathy. As a boss, you have a huge impact on how your employees feel. A telling brain-imaging study found that, when employees recalled a boss that had been unkind or un-empathic, they showed increased activation in areas of the brain associated with avoidance and negative emotion while the opposite was true when they recalled an empathic boss. Moreover, Jane Dutton and her colleagues in the CompassionLab at the University of Michigan suggest that leaders who demonstrate compassion toward employees foster individual and collective resilience in challenging times. 

3. Go out of your way to help. Ever had a manager or mentor who took a lot of trouble to help you when he or she did not have to? Chances are you have remained loyal to that person to this day. Jonathan Haidt at New York University’s Stern School of Business shows in his research that when leaders are not just fair but self-sacrificing, their employees are actually moved and inspired to become more loyal and committed themselves. As a consequence, they are more likely to go out of their way to be helpful and friendly to other employees, thus creating a self-reinforcing cycle. Daan Van Knippenberg of Rotterdam School of Management shows that employees of self-sacrificing leaders are more cooperative because they trust their leaders more. They are also more productive and see their leaders as more effective and charismatic.

4. Encourage people to talk to you – especially about their problems. Not surprisingly, trusting that the leader has your best interests at heart improves employee performance. Employees feel safe rather than fearful and, as research by Amy Edmondson of Harvard demonstrates in her work on psychological safety, a culture of safety (one in which leaders are inclusive, humble, and encourage their staff to speak up or ask for help) leads to better learning and performance outcomes. Rather than creating a culture of fear of negative consequences, feeling safe in the workplace helps encourage the spirit of experimentation so critical for innovation. Kamal Birdi of Sheffield University has shown that empowerment, when coupled with good training and teamwork, leads to superior performance outcomes, whereas a range of efficient manufacturing and operations practices do not.

When you know a leader is committed to operating from a set of values based on interpersonal kindness, he or she sets the tone for the entire organization. In Give and Take, Wharton professor Adam Grant demonstrates that leader kindness and generosity are strong predictors of team and organizational effectiveness. Whereas harsh work climates are linked to poorer employee health, the opposite is true of positive work climates, where employees tend to have lower heart rates and blood pressure as well as stronger immune systems. A positive work climate also leads to a positive workplace culture which, again, boosts commitment, engagement, and performance. Happier employees make not only for a more congenial workplace but for improved customer service. As a consequence, a happy and caring culture at work not only improves employee well-being and productivity but also improves client health outcomes and satisfaction.

In sum, a positive workplace is more successful over time because it increases positive emotions and well-being. This, in turn, improves people’s relationships with each other and amplifies their abilities and their creativity. It buffers against negative experiences such as stress, thus improving employees’ ability to bounce back from challenges and difficulties while bolstering their health. And, it attracts employees, making them more loyal to the leader and to the organization as well as bringing out their best strengths. When organizations develop positive, virtuous cultures they achieve significantly higher levels of organizational effectiveness — including financial performance, customer satisfaction, productivity, and employee engagement.

Editor’s note : Due to a typo, this article initially misstated the number of workdays lost due to stress each year. That number is estimated at 550 million, not 550 billion. The sentence has been corrected.

Article link: https://hbr.org/2015/12/proof-that-positive-work-cultures-are-more-productive?

  • Emma Seppälä, PhD, is a faculty member at the Yale School of Management, faculty director of the Yale School of Management’s Women’s Leadership Program, and author of The Happiness Track. She is also science director of Stanford University’s Center for Compassion and Altruism Research and Education. Follow her work at www.emmaseppala.com, on Instagram or Twitter.
  • Kim Cameron, PhD, is the William Russell Kelly Professor of Management and Organizations at the Ross School of Business at the University of Michigan and the author of Positive Leadership, Practicing Positive Leadership, and Positively Energizing Leadership.

Yes, you can measure software developer productivity – McKinsey

Posted by timmreardon on 08/19/2023
Posted in: Uncategorized.

Measuring, tracking, and benchmarking developer productivity has long been considered a black box. It doesn’t have to be that way.

Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it’s not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers. Yet that status quo is no longer sustainable. Now that most companies are becoming (to one degree or another) software companies, regardless of industry, leaders need to know they are deploying their most valuable talent as successfully as possible.

Sidebar

About the authors

This article is a collaborative effort by Chandra Gnanasambandam, Martin Harrysson, Alharith Hussin, Jason Keovichit, and Shivam Srivastava, representing views from McKinsey’s Digital and Technology, Media & Telecommunications Practices.

There is no denying that measuring developer productivity is difficult. Other functions can be measured reasonably well, some even with just a single metric; whereas in software development, the link between inputs and outputs is considerably less clear. Software development is also highly collaborative, complex, and creative work and requires different metrics for different levels (such as systems, teams, and individuals). What’s more, even if there is genuine commitment to track productivity properly, traditional metrics can require systems and software that are set up to allow more nuanced and comprehensive measurement. For some standard metrics, entire tech stacks and development pipelines need to be reconfigured to enable tracking, and putting in place the necessary instruments and tools to yield meaningful insights can require significant, long-term investment. Furthermore, the landscape of software development is changing quickly as generative AI tools such as CopilotX and ChatGPT have the potential to enable developers to complete tasks up to two times faster.

To help overcome these challenges and make this critical task more feasible, we developed an approach to measuring software developer productivity that is easier to deploy with surveys or existing data (such as in backlog management tools). In so doing, we built on the foundation of existing productivity metrics that industry leaders have developed over the years, with an eye toward revealing opportunities for performance improvements. 

This new approach has been implemented at nearly 20 tech, finance, and pharmaceutical companies, and the initial results are promising. They include the following improvements:

  • 20 to 30 percent reduction in customer-reported product defects
  • 20 percent improvement in employee experience scores
  • 60-percentage-point improvement in customer satisfaction ratings

Leveraging productivity insights

With access to richer productivity data and insights, leaders can begin to answer pressing questions about the software engineering talent they fought so hard to attract and retain, such as the following:

  • What are the impediments to the engineers working at their best level? 
  • How much do culture and organization affect their ability to produce their best work?
  • How do we know if we’re using their time on activities that truly drive value?
  • How can we know if we have all the software engineering talent we need?

Understanding the foundations

To use a sufficiently nuanced system of measuring developer productivity, it’s essential to understand the three types of metrics that need to be tracked: those at the system level, the team level, and the individual level. Unlike a function such as sales, where a system-level metric of dollars earned or deals closed could be used to measure the work of both teams and individuals, software development is collaborative in a distinctive way that requires different lenses. For instance, while deployment frequency is a perfectly good metric to assess systems or teams, it depends on all team members doing their respective tasks and is, therefore, not a useful way to track individual performance. 

Another critical dimension to recognize is what the various metrics do and do not tell you. For example, measuring deployment frequency or lead time for changes can give you a clear view of certain outcomes, but not of whether an engineering organization is optimized. And while metrics such as story points completed or interruptions can help determine optimization, they require more investigation to identify improvements that might be beneficial.

In building our set of metrics, we looked to expand on the two sets of metrics already developed by the software industry. The first is DORA metrics, named for Google’s DevOps research and assessment team. These are the closest the tech sector has to a standard, and they are great at measuring outcomes. When a DORA metric returns a subpar outcome, it is a signal to investigate what has gone wrong, which can often involve protracted sleuthing. For example, if a metric such as deployment frequency increases or decreases, there can be multiple causes. Determining what they are and how to resolve them is often not straightforward.
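
To make the DORA outcome lens concrete, here is a hedged sketch that computes two of the familiar metrics, deployment frequency and lead time for changes, from a hypothetical list of deployment records. The field layout and seven-day window are assumptions, not a standard schema.

```python
# Hedged sketch: compute two DORA-style outcome metrics from hypothetical
# deployment records of the form (commit_time, deploy_time).
from datetime import datetime, timedelta

deployments = [
    (datetime(2023, 8, 1, 9),  datetime(2023, 8, 1, 15)),
    (datetime(2023, 8, 2, 10), datetime(2023, 8, 4, 11)),
    (datetime(2023, 8, 7, 13), datetime(2023, 8, 8, 9)),
]

window_days = 7  # assumed reporting window

deploy_frequency = len(deployments) / window_days
lead_times = [deploy - commit for commit, deploy in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployments per day over the window: {deploy_frequency:.2f}")
print(f"Average lead time for changes: {avg_lead_time}")
```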

The second set of industry-developed measurements is SPACE metrics (satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow), which GitHub and Microsoft Research developed to augment DORA metrics. By adopting an individual lens, particularly around developer well-being, SPACE metrics are great at clarifying whether an engineering organization is optimized. For example, an increase in interruptions that developers experience indicates a need for optimization. 

On top of these already powerful metrics, our approach seeks to identify what can be done to improve how products are delivered and what those improvements are worth, without the need for heavy instrumentation. Complementing DORA and SPACE metrics with opportunity-focused metrics can create an end-to-end view of software developer productivity (Exhibit 1).

These opportunity-focused productivity metrics use a few different lenses to generate a nuanced view of the complex range of activities involved with software product development.

Inner/outer loop time spent. To identify specific areas for improvement, it’s helpful to think of the activities involved in software development as being arranged in two loops (Exhibit 2). An inner loop comprises activities directly related to creating the product: coding, building, and unit testing. An outer loop comprises other tasks developers must do to push their code to production: integration, integration testing, releasing, and deployment. From both a productivity and personal-experience standpoint, maximizing the amount of time developers spend in the inner loop is desirable: building products directly generates value and is what most developers are excited to do. Outer-loop activities are seen by most developers as necessary but generally unsatisfying chores. Putting time into better tooling and automation for the outer loop allows developers to spend more time on inner-loop activities.

Top tech companies aim for developers to spend up to 70 percent of their time doing inner-loop activities. For example, one company that had previously completed a successful agile transformation learned that its developers, instead of coding, were spending too much time on low-value-added tasks such as provisioning infrastructure, running manual unit tests, and managing test data. Armed with that insight, it launched a series of new tools and automation projects to help with those tasks across the software development life cycle.
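
A hedged sketch of that inner/outer-loop lens: classify hypothetical time-tracking entries by loop and report the share of a developer’s week spent in the inner loop. The activity categories follow the article’s framing; the data and the 70 percent target are illustrative.

```python
# Hedged sketch of the inner/outer-loop lens: report the share of time a
# developer spends on inner-loop activities. Data are assumed.
INNER_LOOP = {"coding", "building", "unit_testing"}
OUTER_LOOP = {"integration", "integration_testing", "releasing", "deployment"}

time_entries = [  # (activity, hours) for one developer-week
    ("coding", 14), ("building", 3), ("unit_testing", 4),
    ("integration", 5), ("releasing", 2), ("deployment", 2),
    ("provisioning_infrastructure", 4),  # neither loop: a candidate for automation
]

inner = sum(hours for activity, hours in time_entries if activity in INNER_LOOP)
total = sum(hours for _, hours in time_entries)

print(f"Inner-loop share: {inner / total:.0%} (top tech companies aim for up to 70%)")
```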

Developer Velocity Index benchmark. The Developer Velocity Index (DVI) is a survey that measures an enterprise’s technology, working practices, and organizational enablement and benchmarks them against peers. This comparison helps unearth specific areas of opportunity, whether in backlog management, testing, or security and compliance. For example, one company, well known for its technological prowess and all-star developers, sought to define standard working practices more thoughtfully for cross-team collaboration after discovering a high amount of dissatisfaction, rework, and inefficiency reported by developers.

Contribution analysis. Assessing contributions by individuals to a team’s backlog (starting with data from backlog management tools such as Jira, and normalizing data using a proprietary algorithm to account for nuances) can help surface trends that inhibit the optimization of that team’s capacity. This kind of insight can enable team leaders to manage clear expectations for output and improve performance as a result. Additionally, it can help identify opportunities for individual upskilling or training and rethinking role distribution within a team (for instance, if a quality assurance tester has enough work to do). For example, one company found that its most talented developers were spending excessive time on noncoding activities such as design sessions or managing interdependencies across teams. In response, the company changed its operating model and clarified roles and responsibilities to enable those highest-value developers to do what they do best: code. Another company, after discovering relatively low contribution from developers new to the organization, reexamined their onboarding and personal mentorship program. 
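
McKinsey’s normalization algorithm is proprietary, so the sketch below only illustrates the general idea with assumed data: tally completed backlog items per developer from a hypothetical issue-tracker export and compare each person’s total to the team median, as a conversation starter rather than a score.

```python
# Hedged sketch of a simple contribution tally (not McKinsey's proprietary
# normalization): sum story points per assignee from an assumed Jira-style
# export and compare to the team median.
from collections import Counter
from statistics import median

completed_items = [  # (assignee, story_points), assumed data
    ("dev_a", 5), ("dev_a", 3), ("dev_b", 8), ("dev_b", 5),
    ("dev_c", 2), ("dev_d", 3), ("dev_d", 5), ("dev_d", 8),
]

points = Counter()
for assignee, story_points in completed_items:
    points[assignee] += story_points

team_median = median(points.values())
for assignee, total in sorted(points.items()):
    print(f"{assignee}: {total} points ({total / team_median:.1f}x team median)")
```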

Talent capability score. Based on industry standard capability maps, this score is a summary of the individual knowledge, skills, and abilities of a specific organization. Ideally, organizations should aspire to a “diamond” distribution of proficiency, with the majority of developers in the middle range of competency. This can surface coaching and upskilling opportunities, and in extreme cases call for a rethinking of talent strategy. For example, one company found a higher concentration of their developers in the “novice” capability than was ideal. They deployed personalized learning journeys based on specific gaps and were able to move 30 percent of their developers to the next level of expertise within six months.

Avoiding metrics missteps

As valuable as it can be, developer productivity data can be damaging to organizations if used incorrectly, so it’s important to avoid certain pitfalls. In our work we see two main types of missteps occur: misuse of metrics and failing to move past old mindsets. 

Misuse is most common when companies try to employ overly simple measurements, such as lines of code produced, or number of code commits (when developers submit their code to a version control system). Not only do such simple metrics fail to generate truly useful insights, they can have unintended consequences, such as leaders making inappropriate trade-offs. For example, optimizing for lead time or deployment frequency can allow quality to suffer. Focusing on a single metric or too simple a collection of metrics can also easily incentivize poor practices; in the case of measuring commits, for instance, developers may submit smaller changes more frequently as they seek to game the system. 

To truly benefit from measuring productivity, leaders and developers alike need to move past the outdated notion that leaders “cannot” understand the intricacies of software engineering, or that engineering is too complex to measure. The importance of engineering talent to a company’s success, and the fierce competition for developer talent in recent years, underscores the need to acknowledge that software development, like so many other things, requires measurement to be improved. Further, attracting and retaining top software development talent depends in large part on providing a workplace and tools that allow engineers to do their best work and encourages their creativity. Measuring productivity at a system level enables employers to see hidden friction points that impede that work and creativity. 

Getting started

The mechanics of building a developer productivity initiative can seem daunting, but there is no time like the present to begin to lay the groundwork. The factors driving the need to elevate the conversation about software developer productivity to C-level leaders outweigh the impediments to doing so. 

The increase in remote work and its popularity among developers is one overriding factor. Developers have long worked in agile teams, collaborating in the same physical space, and some technology leaders believe that kind of in-person teamwork is essential to the job. However, the digital tools that are so central to their work made it easy to switch to remote work during the pandemic lockdowns, and as in most sectors, this shift is hard to undo. As remote and hybrid working increasingly becomes the norm, organizations will need to rely on broad, objective measurements to maintain confidence in these new working arrangements and ensure they are steadily improving the function that could easily determine their future success or failure. The fact that the markets are now putting greater emphasis on efficient growth and ROI only makes it more important than ever to know how they can optimize the performance of their highly valued engineering talent. 

Another key driver of this need for greater visibility is the rapid advances in AI-enabled tooling, especially large-language models such as generative AI. These are already rapidly changing the way work is done, which means that measuring software developers’ productivity is only a first step to understanding how these valuable resources are deployed. 

But as critical as developer productivity is becoming, companies shouldn’t feel they have to embark on a massive, dramatic overhaul almost overnight. Instead, they can begin the process with a number of key steps:

Learn the basics. All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving.

Assess your systems. Because developer productivity has not typically been measured at the level needed to identify improvement opportunities, most companies’ tech stacks will require potentially extensive reconfiguration. For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run. 
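
As a hedged, self-contained illustration of that kind of instrumentation, the sketch below uses the open-source coverage.py API to trace a stand-in function instead of a real test suite and then reports which lines never executed.

```python
# Hedged sketch of coverage instrumentation: start the tracer, run some
# code (a stand-in function instead of a real test suite), then report
# which lines were never executed.
import coverage

cov = coverage.Coverage()
cov.start()

def classify(latency_ms: int) -> str:
    # Stand-in for application code under test.
    if latency_ms > 1000:
        return "slow"          # this branch never runs below, so it shows as missed
    return "ok"

print(classify(250))

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file line coverage, with untested line numbers
```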

Build a plan. As with most analytics initiatives, getting lost in mountains of data is a risk. It’s important to start with one area that you know will result in a clear path to improvement, such as identifying friction points and bottlenecks. Be explicit about the scope of such a plan, as even the best approaches, no matter how comprehensive, will not be a silver bullet.

Remember that measuring productivity is contextual. The point is to look at an entire system and understand how it can work better by improving the development environment at the system, team, or individual level. 

No matter the specific approach, measuring productivity should ideally create transparency and insights into key improvement areas. Only then can organizations build specific initiatives to drive impact for both developer productivity and experience—impact that will benefit both those individuals and the company as a whole.

ABOUT THE AUTHOR(S)

Chandra Gnanasambandam and Martin Harrysson are senior partners in McKinsey’s Bay Area office, where Alharith Hussin and Shivam Srivastava are partners; and Jason Keovichit is an associate partner in the New York office. 

The authors wish to thank Pedro Garcia, Diana Rodriguez, and Jeremy Schneider for their contributions to this article.

Article link: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity

IBM Research introduces an analog AI chip that could make artificial intelligence (AI) more energy efficient.

Posted by timmreardon on 08/17/2023
Posted in: Uncategorized.

The chip’s components work in a way similar to synapses in human brains.

Sejal Sharma

Aug 14, 2023 11:22 AM EST

Tech corporation IBM has unveiled a new “prototype” of an analog AI chip that works like a human brain and performs complex computations in various deep neural networks (DNN) tasks.

The chip promises more. IBM says the state-of-the-art chip can make artificial intelligence remarkably efficient and less battery-draining for computers and smartphones.

Introducing the chip in a paper published by IBM Research, the company said: “The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units.”

Reinventing ways in which AI is computed

The new AI chip was developed at IBM’s Albany NanoTech Complex and comprises 64 analog in-memory compute cores. By borrowing key features of how neural networks run in biological brains, IBM explains that it has embedded the chip with compact, time-based analog-to-digital converters in each tile or core to transition between the analog and digital worlds.

Each tile (or core) is also integrated with lightweight digital processing units that perform simple nonlinear neuronal activation functions and scaling operations, explained IBM in a blog published on August 10.

A replacement for current digital chips?

In the future, IBM’s prototype chip could replace the current chips powering heavy AI applications in computers and phones. “A global digital processing unit is integrated into the middle of the chip that implements more complex operations that are critical for the execution of certain types of neural networks,” the blog added.

With more and more foundation models and generative AI tools entering the market, the performance and energy efficiency of the traditional computing methods these models run on are being pushed to their limits.

IBM wants to bridge that gap. The company says that many of the chips being developed today have a split in their memory and processing units, thus slowing down computation. “This means the AI models are typically stored in a discrete memory location, and computational tasks require constantly shuffling data between the memory and processing units.”
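
A rough illustration of the contrast, not IBM’s design: on an analog in-memory compute tile, the weight matrix stays where it is stored (as device conductances) and only the activations move, whereas the equivalent digital computation below has to fetch the full weight matrix from memory on every pass. The layer size and ReLU nonlinearity are arbitrary choices for the example.

```python
# Illustrative only (not IBM's chip): the core operation an AIMC tile
# performs in place is a matrix-vector multiply followed by a simple
# nonlinearity that lightweight digital units would apply.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))     # stays "in memory" on an AIMC tile
activations = rng.normal(size=512)        # the only data that needs to move

pre_activation = weights @ activations    # one layer's matrix-vector multiply
output = np.maximum(pre_activation, 0.0)  # ReLU as a stand-in activation function

print(output.shape)  # (512,)
```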

Speaking to the BBC, Thanos Vasilopoulos, a scientist based at IBM’s research lab in Switzerland, compared the human brain to traditional computers and said that the former “is able to achieve remarkable performance while consuming little power.”

He said that the superior energy efficiency (of the IBM chip) would mean “large and more complex workloads could be executed in low power or battery-constrained environments”, for example, cars, mobile phones, and cameras.

“Additionally, cloud providers will be able to use these chips to reduce energy costs and their carbon footprint,” he added.

Article link: https://m.facebook.com/story.php?story_fbid=pfbid02TVF5mzSVmh2jg65j4XDwAeYgKg5f9DkbrHRQStL9hPNbYArih2d67mbLH5yEhfxPl&id=100064843384938&mibextid=ncKXMA

Deputy Secretary of Defense Kathleen Hicks Statement on the Release of the Commission on Planning, Programming, Budgeting, and Execution Reform Interim Report

Posted by timmreardon on 08/17/2023
Posted in: Uncategorized.

Aug. 15, 2023

Attributable to Deputy Secretary of Defense Kathleen Hicks:

The Department of Defense must meet the urgency of today’s threats and tomorrow’s challenges with innovation in all portfolios — including how we build and execute our budget. This is critical not only to maintain the trust of the American taxpayer, but also to ensure that DoD can rapidly transition, integrate, and deliver cutting-edge capabilities to the warfighter at speed and scale. 

It’s no secret that DoD’s resource allocation process was born in the industrial age, and the Department has undertaken significant reforms to improve it. The recommendations made in the Commission on Planning, Programming, Budgeting, and Execution (PPBE) Reform’s Interim Report released today will further these efforts. I am directing the Department to adopt all actions that can be implemented now, as recommended by the Commission and within its purview. We look forward to working with Congress on all other proposed recommendations included in this interim report. 

The PPBE Reform Commission’s work is immensely important to assisting Congress and DoD in the nation’s efforts to stay ahead of the pacing challenge, ensuring our agility in fielding combat credible forces at speed and scale. I stand ready to provide continued DoD cooperation with the Commission and to receive its final report in March 2024.

Article link: https://www.defense.gov/News/Releases/Release/Article/3494248/deputy-secretary-of-defense-kathleen-hicks-statement-on-the-release-of-the-comm/

The AI Power Paradox – Foreign Affairs

Posted by timmreardon on 08/17/2023
Posted in: Uncategorized.
Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?
By Ian Bremmer and Mustafa Suleyman

September/October 2023 Published on August 16, 2023

It’s 2035, and artificial intelligence is everywhere. AI systems run hospitals, operate airlines, and battle each other in the courtroom. Productivity has spiked to unprecedented levels, and countless previously unimaginable businesses have scaled at blistering speed, generating immense advances in well-being. New products, cures, and innovations hit the market daily, as science and technology kick into overdrive. And yet the world is growing both more unpredictable and more fragile, as terrorists find new ways to menace societies with intelligent, evolving cyberweapons and white-collar workers lose their jobs en masse.

Just a year ago, that scenario would have seemed purely fictional; today, it seems nearly inevitable. Generative AI systems can already write more clearly and persuasively than most humans and can produce original images, art, and even computer code based on simple language prompts. And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states. For the past decade, big technology firms have effectively become independent, sovereign actors in the digital realms they have created. AI accelerates this trend and extends it far beyond the digital world. The technology’s complexity and the speed of its advancement will make it almost impossible for governments to make relevant rules at a reasonable pace. If governments do not catch up soon, it is possible they never will.

Thankfully, policymakers around the world have begun to wake up to the challenges posed by AI and wrestle with how to govern it. In May 2023, the G-7 launched the “Hiroshima AI process,” a forum devoted to harmonizing AI governance. In June, the European Parliament passed a draft of the EU’s AI Act, the first comprehensive attempt by the European Union to erect safeguards around the AI industry. And in July, UN Secretary-General Antonio Guterres called for the establishment of a global AI regulatory watchdog. Meanwhile, in the United States, politicians on both sides of the aisle are calling for regulatory action. But many agree with Ted Cruz, the Republican senator from Texas, who concluded in June that Congress “doesn’t know what the hell it’s doing.”

Unfortunately, too much of the debate about AI governance remains trapped in a dangerous false dilemma: leverage artificial intelligence to expand national power or stifle it to avoid its risks. Even those who accurately diagnose the problem are trying to solve it by shoehorning AI into existing or historical governance frameworks. Yet AI cannot be governed like any previous technology, and it is already shifting traditional notions of geopolitical power.

The challenge is clear: to design a new governance framework fit for this unique technology. If global governance of AI is to become possible, the international system must move past traditional conceptions of sovereignty and welcome technology companies to the table. These actors may not derive legitimacy from a social contract, democracy, or the provision of public goods, but without them, effective AI governance will not stand a chance. This is one example of how the international community will need to rethink basic assumptions about the geopolitical order. But it is not the only one.

A challenge as unusual and pressing as AI demands an original solution. Before policymakers can begin to hash out an appropriate regulatory structure, they will need to agree on basic principles for how to govern AI. For starters, any governance framework will need to be precautionary, agile, inclusive, impermeable, and targeted. Building on these principles, policymakers should create at least three overlapping governance regimes: one for establishing facts and advising governments on the risks posed by AI, one for preventing an all-out arms race between them, and one for managing the disruptive forces of a technology unlike anything the world has seen.

Like it or not, 2035 is coming. Whether it is defined by the positive advances enabled by AI or the negative disruptions caused by it depends on what policymakers do now.

FASTER, HIGHER, STRONGER

AI is different—different from other technologies and different in its effect on power. It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox.

The pace of progress is staggering. Take Moore’s Law, which has successfully predicted the doubling of computing power every two years. The new wave of AI makes that rate of progress seem quaint. When OpenAI launched its first large language model, known as GPT-1, in 2018, it had 117 million parameters—a measure of the system’s scale and complexity. Five years later, the company’s fourth-generation model, GPT-4, is thought to have over a trillion. The amount of computation used to train the most powerful AI models has increased by a factor of ten every year for the last ten years. Put another way, today’s most advanced AI models—also known as “frontier” models—use five billion times the computing power of cutting-edge models from a decade ago. Processing that once took weeks now happens in seconds. Models that can handle tens of trillions of parameters are coming in the next couple of years. “Brain scale” models with more than 100 trillion parameters—roughly the number of synapses in the human brain—will be viable within five years.
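
As a rough sanity check of that compound-growth arithmetic (my illustration, not the authors’), assume training compute grows tenfold each year for t years:

\[
\text{growth over } t \text{ years} = 10^{t},
\qquad 10^{10} = \text{ten billion},
\qquad 10^{9.7} \approx 5 \times 10^{9} \ (\text{five billion}).
\]

On this reading, a full decade of tenfold annual growth would give a factor of ten billion, and the article’s “five billion times” figure corresponds to slightly under ten years of such growth; both are order-of-magnitude estimates that support the same conclusion.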

With each new order of magnitude, unexpected capabilities emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems with self-improving capabilities—a critical juncture in the trajectory of this technology that should give everyone pause.

AI models are also doing more with less. Yesterday’s cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today. Just three years after OpenAI released GPT-3, open-source teams have created models capable of the same level of performance that are less than one-sixtieth of its size—that is, 60 times cheaper to run in production, entirely free, and available to everyone on the Internet. Future large language models will probably follow this efficiency trajectory, becoming available in open-source form just two or three years after leading AI labs spend hundreds of millions of dollars developing them.

As with any software or code, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. Proliferation risks are obvious. Meta’s powerful Llama-1 large language model, for instance, leaked to the Internet within days of debuting in March. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has become so accessible, so widely, so quickly.

AI also differs from older technologies in that almost all of it can be characterized as “dual use”—having both military and civilian applications. Many systems are inherently general, and indeed, generality is the primary goal of many AI companies. They want their applications to help as many people in as many ways as possible. But the same systems that drive cars can drive tanks. An AI application built to diagnose diseases might be able to create—and weaponize—a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred, which partly explains why the United States has restricted the export of the most advanced semiconductors to China. 

All this plays out on a global field: once released, AI models can and will be everywhere. And it will take just one malign or “breakout” model to wreak havoc. For that reason, regulating AI cannot be done in a patchwork manner. There is little use in regulating AI in some countries if it remains unregulated in others. Because AI can proliferate so easily, its governance can have no gaps.

What is more, the damage AI might do has no obvious cap, even as the incentives to build it (and the benefits of doing so) continue to grow. AI could be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; or to create powerful digital or physical weapons that threaten human lives. AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war.

Nor is the time frame clear for the biggest risks. Online misinformation is an obvious short-term threat, just as autonomous warfare seems plausible in the medium term. Farther out on the horizon lurks the promise of artificial general intelligence, the still uncertain point where AI exceeds human performance at any given task, and the (admittedly speculative) peril that AGI could become self-directed, self-replicating, and self-improving beyond human control. All these dangers need to be factored into governance architecture from the outset.

AI is not the first technology with some of these potent characteristics, but it is the first to combine them all. AI systems are not like cars or airplanes, which are built on hardware amenable to incremental improvements and whose most costly failures come in the form of individual accidents. They are not like chemical or nuclear weapons, which are difficult and expensive to develop and store, let alone secretly share or deploy. As their enormous benefits become self-evident, AI systems will only grow bigger, better, cheaper, and more ubiquitous. They will even become capable of quasi autonomy—able to achieve concrete goals with minimal human oversight—and, potentially, of self-improvement. Any one of these features would challenge traditional governance models; all of them together render these models hopelessly inadequate.

TOO POWERFUL TO PAUSE

As if that were not enough, by shifting the structure and balance of global power, AI complicates the very political context in which it is governed. AI is not just software development as usual; it is an entirely new means of projecting power. In some cases, it will upend existing authorities; in others, it will entrench them. Moreover, its advancement is being propelled by irresistible incentives: every nation, corporation, and individual will want some version of it.

Within countries, AI will empower those who wield it to surveil, deceive, and even control populations—supercharging the collection and commercial use of personal data in democracies and sharpening the tools of repression authoritarian governments use to subdue their societies. Across countries, AI will be the focus of intense geopolitical competition. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy will be a strategic objective of every government with the resources to compete. The least imaginative strategies will pump money into homegrown AI champions or attempt to build and control supercomputers and algorithms. More nuanced strategies will foster specific competitive advantages, as France seeks to do by directly supporting AI startups; the United Kingdom, by capitalizing on its world-class universities and venture capital ecosystem; and the EU, by shaping the global conversation on regulation and norms.

The vast majority of countries have neither the money nor the technological know-how to compete for AI leadership. Their access to frontier AI will instead be determined by their relationships with a handful of already rich and powerful corporations and states. This dependence threatens to aggravate current geopolitical power imbalances. The most powerful governments will vie to control the world’s most valuable resource while, once again, countries in the global South will be left behind. This is not to say that only the richest will benefit from the AI revolution. Like the Internet and smartphones, AI will proliferate without respect for borders, as will the productivity gains it unleashes. And like energy and green technology, AI will benefit many countries that do not control it, including those that contribute to producing AI inputs such as semiconductors.

At the other end of the geopolitical spectrum, however, the competition for AI supremacy will be fierce. At the end of the Cold War, powerful countries might have cooperated to allay one another’s fears and arrest a potentially destabilizing technological arms race. But today’s tense geopolitical environment makes such cooperation much harder. AI is not just another tool or weapon that can bring prestige, power, or wealth. It has the potential to enable a significant military and economic advantage over adversaries. Rightly or wrongly, the two players that matter most—China and the United States—both see AI development as a zero-sum game that will give the winner a decisive strategic edge in the decades to come.

From the vantage point of Washington and Beijing, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. For that reason, both the U.S. and Chinese governments are pouring immense resources into developing AI capabilities while working to deprive each other of the inputs needed for next-generation breakthroughs. (So far, the United States has been far more successful than China in doing the latter, especially with its export controls on advanced semiconductors.) This zero-sum dynamic—and the lack of trust on both sides—means that Beijing and Washington are focused on accelerating AI development, rather than slowing it down. In their view, a “pause” in development to assess risks, as some AI industry leaders have called for, would amount to foolish unilateral disarmament. 

But this perspective assumes that states can assert and maintain at least some control over AI. This may be the case in China, which has integrated its tech companies into the fabric of the state. Yet in the West and elsewhere, AI is more likely to undermine state power than to bolster it. Outside China, a handful of large, specialist AI companies currently control every aspect of this new technological wave: what AI models can do, who can access them, how they can be used, and where they can be deployed. And because these companies jealously guard their computing power and algorithms, they alone understand (most of) what they are creating and (most of) what those creations can do. These few firms may retain their advantage for the foreseeable future—or they may be eclipsed by a raft of smaller players as low barriers to entry, open-source development, and near-zero marginal costs lead to uncontrolled proliferation of AI. Either way, the AI revolution will take place outside government. 

To a limited degree, some of these challenges resemble those of earlier digital technologies. Internet platforms, social media, and even devices such as smartphones all operate, to some extent, within sandboxes controlled by their creators. When governments have summoned the political will, they have been able to implement regulatory regimes for these technologies, such as the EU’s General Data Protection Regulation, Digital Markets Act, and Digital Services Act. But such regulation took a decade or more to materialize in the EU, and it still has not fully materialized in the United States. AI moves far too quickly for policymakers to respond at their usual pace. Moreover, social media and other older digital technologies do not help create themselves, and the commercial and strategic interests driving them never dovetailed in quite the same way: Twitter and TikTok are powerful, but few think they could transform the global economy. 

This all means that at least for the next few years, AI’s trajectory will be largely determined by the decisions of a handful of private businesses, regardless of what policymakers in Brussels or Washington do. In other words, technologists, not policymakers or bureaucrats, will exercise authority over a force that could profoundly alter both the power of nation-states and how they relate to each other. That makes the challenge of governing AI unlike anything governments have faced before, a regulatory balancing act more delicate—and more high stakes—than any policymakers have attempted.

MOVING TARGET, EVOLVING WEAPON

Governments are already behind the curve. Most proposals for governing AI treat it as a conventional problem amenable to the state-centric solutions of the twentieth century: compromises over rules hashed out by political leaders sitting around a table. But that will not work for AI.

Regulatory efforts to date are in their infancy and still inadequate. The EU’s AI Act is the most ambitious attempt to govern AI in any jurisdiction, but it will apply in full only beginning in 2026, by which time AI models will have advanced beyond recognition. The United Kingdom has proposed an even looser, voluntary approach to regulating AI, but it lacks the teeth to be effective. Neither initiative attempts to govern AI development and deployment at the global level—something that will be necessary for AI governance to succeed. And while voluntary pledges to respect AI safety guidelines, such as those made in July by seven leading AI developers, including Inflection AI, led by one of us (Suleyman), are welcome, they are no substitute for legally binding national and international regulation.

Advocates for international-level agreements to tame AI tend to reach for the model of nuclear arms control. But AI systems are not only infinitely easier to develop, steal, and copy than nuclear weapons; they are controlled by private companies, not governments. As the new generation of AI models diffuses faster than ever, the nuclear comparison looks ever more out of date. Even if governments can successfully control access to the materials needed to build the most advanced models—as the Biden administration is attempting to do by preventing China from acquiring advanced chips—they can do little to stop the proliferation of those models once they are trained and therefore require far fewer chips to operate.

For global AI governance to work, it must be tailored to the specific nature of the technology, the challenges it poses, and the structure and balance of power in which it operates. But because the evolution, uses, risks, and rewards of AI are unpredictable, AI governance cannot be fully specified at the outset—or at any point in time, for that matter. It must be as innovative and evolutionary as the technology it seeks to govern, sharing some of the characteristics that make AI such a powerful force in the first place. That means starting from scratch, rethinking and rebuilding a new regulatory framework from the ground up.

The overarching goal of any global AI regulatory architecture should be to identify and mitigate risks to global stability without choking off AI innovation and the opportunities that flow from it. Call this approach “technoprudentialism,” a mandate rather like the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements, and the International Monetary Fund. Their objective is to identify and mitigate risks to global financial stability without jeopardizing economic growth.

A technoprudential mandate would work similarly, necessitating the creation of institutional mechanisms to address the various aspects of AI that could threaten geopolitical stability. These mechanisms, in turn, would be guided by common principles that are both tailored to AI’s unique features and reflect the new technological balance of power that has put tech companies in the driver’s seat. These principles would help policymakers draw up more granular regulatory frameworks to govern AI as it evolves and becomes a more pervasive force.

The first and perhaps most vital principle for AI governance is precaution. As the term implies, technoprudentialism is at its core guided by the precautionary credo: first, do no harm. Maximally constraining AI would mean forgoing its life-altering upsides, but maximally liberating it would mean risking all its potentially catastrophic downsides. In other words, the risk-reward profile for AI is asymmetric. Given the radical uncertainty about the scale and irreversibility of some of AI’s potential harms, AI governance must aim to prevent these risks before they materialize rather than mitigate them after the fact. This is especially important because AI could weaken democracy in some countries and make it harder for them to enact regulations. Moreover, the burden of proving an AI system is safe above some reasonable threshold should rest on the developer and owner; it should not be solely up to governments to deal with problems once they arise.

AI governance must also be agile so that it can adapt and correct course as AI evolves and improves itself. Public institutions often calcify to the point of being unable to adapt to change. And in the case of AI, the sheer velocity of technological progress will quickly overwhelm the ability of existing governance structures to catch up and keep up. This does not mean that AI governance should adopt the “move fast and break things” ethos of Silicon Valley, but it should more closely mirror the nature of the technology it seeks to contain. 

In addition to being precautionary and agile, AI governance must be inclusive, inviting the participation of all actors needed to regulate AI in practice. That means AI governance cannot be exclusively state centered, since governments neither understand nor control AI. Private technology companies may lack sovereignty in the traditional sense, but they wield real—even sovereign—power and agency in the digital spaces they have created and effectively govern. These nonstate actors should not be granted the same rights and privileges as states, which are internationally recognized as acting on behalf of their citizens. But they should be parties to international summits and signatories to any agreements on AI. 

Such a broadening of governance is necessary because any regulatory structure that excludes the real agents of AI power is doomed to fail. In previous waves of tech regulation, companies were often afforded so much leeway that they overstepped, leading policymakers and regulators to react harshly to their excesses. But this dynamic benefited neither tech companies nor the public. Inviting AI developers to participate in the rule-making process from the outset would help establish a more collaborative culture of AI governance, reducing the need to rein in these companies after the fact with costly and adversarial regulation.

Tech companies should not always have a say; some aspects of AI governance are best left to governments, and it goes without saying that states should always retain final veto power over policy decisions. Governments must also guard against regulatory capture to ensure that tech companies do not use their influence within political systems to advance their interests at the expense of the public good. But an inclusive, multistakeholder governance model would ensure that the actors who will determine the fate of AI are involved in—and bound by—the rule-making processes. In addition to governments (especially but not limited to China and the United States) and tech companies (especially but not limited to the Big Tech players), scientists, ethicists, trade unions, civil society organizations, and other voices with knowledge of, power over, or a stake in AI outcomes should have a seat at the table. The Partnership on AI—a nonprofit group that convenes a range of large tech companies, research institutions, charities, and civil society organizations to promote responsible AI use—is a good example of the kind of mixed, inclusive forum that is needed.

AI governance must also be as impermeable as possible. Unlike climate change mitigation, where success will be determined by the sum of all individual efforts, AI safety is determined by the lowest common denominator: a single breakout algorithm could cause untold damage. Because global AI governance is only as good as the worst-governed country, company, or technology, it must be watertight everywhere—with entry easy enough to compel participation and exit costly enough to deter noncompliance. A single loophole, weak link, or rogue defector will open the door to widespread leakage, bad actors, or a regulatory race to the bottom. 

In addition to covering the entire globe, AI governance must cover the entire supply chain—from manufacturing to hardware, software to services, and providers to users. This means technoprudential regulation and oversight along every node of the AI value chain, from AI chip production to data collection, model training to end use, and across the entire stack of technologies used in a given application. Such impermeability will ensure there are no regulatory gray areas to exploit. 

Finally, AI governance will need to be targeted, rather than one-size-fits-all. Because AI is a general-purpose technology, it poses multidimensional threats. A single governance tool is not sufficient to address the various sources of AI risk. In practice, determining which tools are appropriate to target which risks will require developing a living and breathing taxonomy of all the possible effects AI could have—and how each can best be governed. For example, AI will be evolutionary in some applications, exacerbating current problems such as privacy violations, and revolutionary in others, creating entirely new harms. Sometimes, the best place to intervene will be where data is being collected. Other times, it will be the point at which advanced chips are sold—ensuring they do not fall into the wrong hands. Dealing with disinformation and misinformation will require different tools than dealing with the risks of AGI and other uncertain technologies with potentially existential ramifications. A light regulatory touch and voluntary guidance will work in some cases; in others, governments will need to strictly enforce compliance.

All of this requires deep understanding and up-to-date knowledge of the technologies in question. Regulators and other authorities will need oversight of and access to key AI models. In effect, they will need an audit system that can not only track capabilities at a distance but also directly access core technologies, which in turn will require the right talent. Only such measures can ensure that new AI applications are proactively assessed, both for obvious risks and for potentially disruptive second- and third-order consequences. Targeted governance, in other words, must be well-informed governance.

THE TECHNOPRUDENTIAL IMPERATIVE

Built atop these principles should be a minimum of three AI governance regimes, each with different mandates, levers, and participants. All will have to be novel in design, but each could look for inspiration to existing arrangements for addressing other global challenges—namely, climate change, arms proliferation, and financial stability.

The first regime would focus on fact-finding and would take the form of a global scientific body to objectively advise governments and international bodies on questions as basic as what AI is and what kinds of policy challenges it poses. If no one can agree on the definition of AI or the possible scope of its harms, effective policymaking will be impossible. Here, climate change is instructive. To create a baseline of shared knowledge for climate negotiations, the United Nations established the Intergovernmental Panel on Climate Change and gave it a simple mandate: provide policymakers with “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.” AI needs a similar body to regularly evaluate the state of AI, impartially assess its risks and potential impacts, forecast scenarios, and consider technical policy solutions to protect the global public interest. Like the IPCC, this body would have a global imprimatur and scientific (and geopolitical) independence. And its reports could inform multilateral and multistakeholder negotiations on AI, just as the IPCC’s reports inform UN climate negotiations.

The world also needs a way to manage tensions between the major AI powers and prevent the proliferation of dangerous advanced AI systems. The most important international relationship in AI is the one between the United States and China. Cooperation between the two rivals is difficult to achieve under the best of circumstances. But in the context of heightened geopolitical competition, an uncontrolled AI race could doom all hope of forging an international consensus on AI governance. One area where Washington and Beijing may find it advantageous to work together is in slowing the proliferation of powerful systems that could imperil the authority of nation-states. At the extreme, the threat of uncontrolled, self-replicating AGIs—should they be invented in the years to come—would provide strong incentives to coordinate on safety and containment.

On all these fronts, Washington and Beijing should aim to create areas of commonality and even guardrails proposed and policed by a third party. Here, the monitoring and verification approaches often found in arms control regimes might be applied to the most important AI inputs, specifically those related to computing hardware, including advanced semiconductors and data centers. Regulating key chokepoints helped contain a dangerous arms race during the Cold War, and it could help contain a potentially even more dangerous AI race now.

But since much of AI is already decentralized, it is a problem of the global commons rather than the preserve of two superpowers. The devolved nature of AI development and core characteristics of the technology, such as open-source proliferation, increase the likelihood that it will be weaponized by cybercriminals, state-sponsored actors, and lone wolves. That is why the world needs a third AI governance regime that can react when dangerous disruptions occur. For models, policymakers might look to the approach financial authorities have used to maintain global financial stability. The Financial Stability Board, composed of central bankers, ministries of finance, and supervisory and regulatory authorities from around the world, works to prevent global financial instability by assessing systemic vulnerabilities and coordinating the necessary actions to address them among national and international authorities. A similarly technocratic body for AI risk—call it the Geotechnology Stability Board—could work to maintain geopolitical stability amid rapid AI-driven change. Supported by national regulatory authorities and international standard-setting bodies, it would pool expertise and resources to preempt or respond to AI-related crises, reducing the risk of contagion. But it would also engage directly with the private sector, recognizing that key multinational technology actors play a critical role in maintaining geopolitical stability, just as systemically important banks do in maintaining financial stability.

Such a body, with authority rooted in government support, would be well positioned to prevent global tech players from engaging in regulatory arbitrage or hiding behind corporate domiciles. Recognizing that some technology companies are systemically important does not mean stifling start-ups or emerging innovators. On the contrary, creating a single, direct line from a global governance body to these tech behemoths would enhance the effectiveness of regulatory enforcement and crisis management—both of which benefit the whole ecosystem.

A regime designed to maintain geotechnological stability would also fill a dangerous void in the current regulatory landscape: responsibility for governing open-source AI. Some level of online censorship will be necessary. If someone uploads an extremely dangerous model, this body must have the clear authority—and ability—to take it down or direct national authorities to do so. This is another area for potential bilateral cooperation. China and the United States should want to work together to embed safety constraints in open-source software—for example, by limiting the extent to which models can instruct users on how to develop chemical or biological weapons or create pandemic pathogens. In addition, there may be room for Beijing and Washington to cooperate on global antiproliferation efforts, including through the use of interventionist cybertools.

Each of these regimes would have to operate universally, enjoying the buy-in of all major AI players. The regimes would need to be specialized enough to cope with real AI systems and dynamic enough to keep updating their knowledge of AI as it evolves. Working together, these institutions could take a decisive step toward technoprudential management of the emerging AI world. But they are by no means the only institutions that will be needed. Other regulatory mechanisms, such as “know your customer” transparency standards, licensing requirements, safety testing protocols, and product registration and approval processes, will need to be applied to AI in the next few years. The key across all these ideas will be to create flexible, multifaceted governance institutions that are not constrained by tradition or lack of imagination—after all, technologists will not be constrained by those things.

PROMOTE THE BEST, PREVENT THE WORST

None of these solutions will be easy to implement. Despite all the buzz and chatter coming from world leaders about the need to regulate AI, there is still a lack of political will to do so. Right now, few powerful constituencies favor containing AI—and all incentives point toward continued inaction. But designed well, an AI governance regime of the kind described here could suit all interested parties, enshrining principles and structures that promote the best in AI while preventing the worst. The alternative—uncontained AI—would not just pose unacceptable risks to global stability; it would also be bad for business and run counter to every country’s national interest. 

A strong AI governance regime would both mitigate the societal risks posed by AI and ease tensions between China and the United States by reducing the extent to which AI is an arena—and a tool—of geopolitical competition. And such a regime would achieve something even more profound and long-lasting: it would establish a model for how to address other disruptive, emerging technologies. AI may be a unique catalyst for change, but it is by no means the last disruptive technology humanity will face. Quantum computing, biotechnology, nanotechnology, and robotics also have the potential to fundamentally reshape the world. Successfully governing AI will help the world successfully govern those technologies as well. 

The twenty-first century will throw up few challenges as daunting or opportunities as promising as those presented by AI. In the last century, policymakers began to build a global governance architecture that, they hoped, would be equal to the tasks of the age. Now, they must build a new governance architecture to contain and harness the most formidable, and potentially defining, force of this era. The year 2035 is just around the corner. There is no time to waste.

Article link: https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox
