healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

How a Cloud Flaw Gave Chinese Spies a Key to Microsoft’s Kingdom – Wired

Posted by timmreardon on 07/14/2023
Posted in: Uncategorized.

Microsoft says hackers somehow stole a cryptographic key, perhaps not from its own network, that let them forge user identities and slip past cloud defenses.

FOR MOST IT professionals, the move to the cloud has been a godsend. Instead of protecting your data yourself, let the security experts at Google or Microsoft protect it instead. But when a single stolen key can let hackers access cloud data from dozens of organizations, that trade-off starts to sound far more risky.

Late Tuesday evening, Microsoft revealed that a China-based hacker group, dubbed Storm-0558, had done exactly that. The group, which is focused on espionage against Western European governments, had accessed the cloud-based Outlook email systems of 25 organizations, including multiple government agencies.

Those targets encompass US government agencies including the State Department, according to CNN, though US officials are still working to determine the full scope and fallout of the breaches. An advisory from the US Cybersecurity and Infrastructure Security Agency says the breach, which was detected in mid-June by a US government agency, stole unclassified email data “from a small number of accounts.”

China has been relentlessly hacking Western networks for decades. But this latest attack uses a unique trick: Microsoft says hackers stole a cryptographic key that let them generate their own authentication “tokens”—strings of information meant to prove a user’s identity—giving them free rein across dozens of Microsoft customer accounts.

“We put trust in passports, and someone stole a passport-printing machine,” says Jake Williams, a former NSA hacker who now teaches at the Institute for Applied Network Security in Boston. “For a shop as large as Microsoft, with that many customers impacted—or who could have been impacted by this—it’s unprecedented.”

In web-based cloud systems, users’ browsers connect to a remote server and, when they enter credentials like a username and password, they’re given a bit of data, known as a token, from that server. The token serves as a kind of temporary identity card that lets users come and go as they please within a cloud environment while only occasionally reentering their credentials. To ensure that the token can’t be spoofed, it’s cryptographically signed with a unique string of data known as a certificate or key that the cloud service possesses, a kind of unforgeable stamp of authenticity.
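The signing-and-verification flow described above can be sketched in a few lines. This is a deliberately simplified illustration using a symmetric HMAC key (real cloud identity tokens are asymmetrically signed JWTs, and every name here is invented for the example):

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: a real service would use an asymmetric key pair,
# signing with a closely guarded private key.
SIGNING_KEY = b"example-signing-key"

def issue_token(claims: dict) -> str:
    """Encode the user's claims and stamp them with a signature."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str):
    """Recompute the signature; reject the token if it does not match."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"user": "alice"})
assert verify_token(token) == {"user": "alice"}
assert verify_token(token[:-1] + "x") is None  # altered signature is rejected
```

The sketch also shows why the theft matters: anyone holding `SIGNING_KEY` can call `issue_token` with any claims they like, and the verifier will accept the result as genuine.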

Microsoft, in its blog post revealing the Chinese Outlook breaches, has described a kind of two-stage breakdown of that authentication system. First, hackers were somehow able to steal a key that Microsoft uses to sign tokens for consumer-grade users of its cloud services. Second, the hackers exploited a bug in Microsoft’s token validation system, which allowed them to sign consumer-grade tokens with the stolen key and then use them to instead access enterprise-grade systems. All of this occurred despite Microsoft’s attempt to check for signatures from different keys for those different grades of token.
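In rough terms, the validation error described above resembles a verifier that accepts a signature from any trusted key, rather than only the key scoped to the tier of system being accessed. The following is a speculative sketch of that class of bug, again using symmetric keys for brevity, with all names invented for illustration:

```python
import hashlib
import hmac

# Illustrative keys; the real system uses separate signing keys per tier.
TRUSTED_KEYS = {
    "consumer": b"consumer-tier-key",
    "enterprise": b"enterprise-tier-key",
}

def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def buggy_validate(payload: bytes, sig: str) -> bool:
    """The flaw: accept a signature from ANY trusted key, never checking
    that the key's tier matches the system being accessed."""
    return any(hmac.compare_digest(sig, sign(k, payload))
               for k in TRUSTED_KEYS.values())

def fixed_validate(payload: bytes, sig: str, required_tier: str) -> bool:
    """The fix: validate only against the key for the required tier."""
    return hmac.compare_digest(sig, sign(TRUSTED_KEYS[required_tier], payload))

# An attacker holding only the consumer key forges an enterprise request.
forged = sign(TRUSTED_KEYS["consumer"], b"enterprise-request")
assert buggy_validate(b"enterprise-request", forged) is True    # wrongly accepted
assert fixed_validate(b"enterprise-request", forged, "enterprise") is False
```

Under this reading, the stolen consumer-grade key plus the tier-blind check is what let consumer-signed tokens open enterprise mailboxes.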

Microsoft says it has now blocked all tokens that were signed with the stolen key and replaced the key with a new one, preventing the hackers from accessing victims’ systems. The company adds that it has also worked to improve the security of its “key management systems” since the theft occurred.

But exactly how such a sensitive key, allowing such broad access, could be stolen in the first place remains unknown. WIRED contacted Microsoft, but the company declined to comment further.

In the absence of more details from Microsoft, one theory of how the theft occurred is that the token-signing key wasn’t in fact stolen from Microsoft at all, according to Tal Skverer, who leads research at the security firm Astrix, which earlier this year uncovered a token security issue in Google’s cloud. In older setups of Outlook, the service is hosted and managed on a server owned by the customer rather than in Microsoft’s cloud. That might have allowed the hackers to steal the key from one of these “on-premises” setups on a customer’s network.

Then, Skverer suggests, hackers might have been able to exploit the bug that allowed the key to sign enterprise tokens to gain access to an Outlook cloud instance shared by all 25 organizations hit by the attack. “My best guess is that they started from a single server that belonged to one of these organizations,” says Skverer, “and made the jump to the cloud by abusing this validation error, and then they got access to more organizations that are sharing the same cloud Outlook instance.”

But that theory doesn’t explain why an on-premises server for a Microsoft service inside an enterprise network would be using a key that Microsoft describes as intended for signing consumer account tokens. It also doesn’t explain why so many organizations, including US government agencies, would all be sharing one Outlook cloud instance.

Another theory, and a far more troubling one, is that the token-signing key used by the hackers was stolen from Microsoft’s own network, obtained by tricking the company into issuing a new key to the hackers, or even somehow reproduced by exploiting mistakes in the cryptographic process that created it. In combination with the token validation bug Microsoft describes, that may mean it could have been used to sign tokens for any Outlook cloud account, consumer or enterprise—a skeleton key for a large swath, or even all, of Microsoft’s cloud.

The well-known web security researcher Robert “RSnake” Hansen says he read the line in Microsoft’s post about improving the security of “key management systems” to suggest that Microsoft’s “certificate authority”—its own system for generating the keys for cryptographically signing tokens—was somehow hacked by the Chinese spies. “It’s very likely there was either a flaw in the infrastructure or configuration of Microsoft’s certificate authority that led an existing certificate to be compromised or a new certificate to be created,” Hansen says.

If the hackers did in fact steal a signing key that could be used to forge tokens broadly across consumer accounts—and, thanks to Microsoft’s token validation issue, on enterprise accounts, too—the number of victims could be far greater than the 25 organizations Microsoft has publicly accounted for, warns Williams.

To identify enterprise victims, Microsoft could look for which of their tokens had been signed with a consumer-grade key. But that key could have been used to generate consumer-grade tokens, too, which might be far harder to spot given that the tokens might have been signed with the expected key. “On the consumer side, how would you know?” Williams asks. “Microsoft hasn’t discussed that, and I think there’s a lot more transparency that we should expect.”

Microsoft’s latest Chinese spying revelation isn’t the first time state-sponsored hackers have exploited tokens to breach targets or spread their access. The Russian hackers who carried out the notorious SolarWinds supply chain attack also stole Microsoft Outlook tokens from victims’ machines that could be used elsewhere on the network to maintain and expand their reach into sensitive systems.

For IT administrators, those incidents—and particularly this latest one—suggest some of the real-world trade-offs of migrating to the cloud. Microsoft, and most of the cybersecurity industry, has for years recommended the move to cloud-based systems to put security in the hands of tech giants rather than smaller companies. But centralized systems can have their own vulnerabilities—with potentially massive consequences.

“You’re handing over the keys to the kingdom to Microsoft,” says Williams. “If your organization is not comfortable with that now, you don’t have good options.”

Andy Greenberg is a senior writer for WIRED, covering hacking, cybersecurity and surveillance. He’s the author of the new book Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency. His last book was Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most…

Article link: https://www.wired.com/story/microsoft-cloud-attack-china-hackers/

FACT SHEET: Biden-⁠Harris Administration Publishes the National Cybersecurity Strategy Implementation Plan

Posted by timmreardon on 07/14/2023
Posted in: Uncategorized.

Read the full Implementation Plan here

President Biden has made clear that all Americans deserve the full benefits and potential of our digital future. The Biden-Harris Administration’s recently released National Cybersecurity Strategy calls for two fundamental shifts in how the United States allocates roles, responsibilities, and resources in cyberspace:

  1. Ensuring that the biggest, most capable, and best-positioned entities – in the public and private sectors – assume a greater share of the burden for mitigating cyber risk
  2. Increasing incentives to favor long-term investments into cybersecurity

Today, the Administration is announcing a roadmap to realize this bold, affirmative vision. It is taking the novel step of publishing the National Cybersecurity Strategy Implementation Plan (NCSIP) to ensure transparency and a continued path for coordination. This plan details more than 65 high-impact Federal initiatives, from protecting American jobs by combatting cybercrimes to building a skilled cyber workforce equipped to excel in our increasingly digital economy. The NCSIP, along with the Bipartisan Infrastructure Law, CHIPS and Science Act, Inflation Reduction Act, and other major Administration initiatives, will protect our investments in rebuilding America’s infrastructure, developing our clean energy sector, and re-shoring America’s technology and manufacturing base.

Each NCSIP initiative is assigned to a responsible agency and has a timeline for completion. Some initiatives, such as the issuance of the Administration’s Cybersecurity Priorities for the Fiscal Year 2025 Budget, have been completed ahead of schedule. Other completed activities, such as the transmittal of the May 26th Department of Defense 2023 Cyber Strategy to Congress, and the June 20th creation of a new National Security Cyber Section by the Justice Department, are key milestones in completing initiatives. This is the first iteration of the plan, which is a living document that will be updated annually.

Eighteen agencies are leading initiatives in this whole-of-government plan demonstrating the Administration’s deep commitment to a more resilient, equitable, and defensible cyberspace. The Office of the National Cyber Director (ONCD) will coordinate activities under the plan, including an annual report to the President and Congress on the status of implementation, and partner with the Office of Management and Budget (OMB) to ensure funding proposals in the President’s Budget Request are aligned with NCSIP initiatives. The Administration looks forward to implementing this plan in continued collaboration with the private sector, civil society, international partners, Congress, and state, local, Tribal, and territorial governments. As an example of the Administration’s commitment to public-private collaboration, ONCD is also working on a request for information regarding cybersecurity regulatory harmonization that will be published in the near future. 

The NCSIP is not intended to capture all Federal agency activities in support of the NCS. The following are sample initiatives from the plan, which is organized by the NCS pillars and strategic objectives.

Pillar One | Defending Critical Infrastructure

  • Update the National Cyber Incident Response Plan (1.4.1): During a cyber incident, it is critical that the government acts in a coordinated manner and that private sector and SLTT partners know how to get help. The Cybersecurity and Infrastructure Security Agency (CISA) will lead a process to update the National Cyber Incident Response Plan to more fully realize the policy that “a call to one is a call to all.” The update will also include clear guidance to external partners on the roles and capabilities of Federal agencies in incident response and recovery.

Pillar Two | Disrupting and Dismantling Threat Actors

  • Combat Ransomware (2.5.2 and 2.5.4): Through the Joint Ransomware Task Force, which is co-chaired by CISA and the FBI, the Administration will continue its campaign to combat the scourge of ransomware and other cybercrime. The FBI will work with Federal, international, and private sector partners to carry out disruption operations against the ransomware ecosystem, including virtual asset providers that enable laundering of ransomware proceeds and web fora offering initial access credentials or other material support for ransomware activities. A complementary initiative, led by CISA, will include offering resources such as training, cybersecurity services, technical assessments, pre-attack planning, and incident response to high-risk targets of ransomware, like hospitals and schools, to make them less likely to be affected and to reduce the scale and duration of impacts if they are attacked.

Pillar Three | Shaping Market Forces and Driving Security and Resilience

  • Software Bill of Materials (3.3.2): Increasing software transparency allows market actors to better understand their supply chain risk and to hold their vendors accountable for secure development practices. CISA continues to lead work with key stakeholders to identify and reduce gaps in software bill of materials (SBOM) scale and implementation. CISA will also explore requirements for a globally-accessible database for end of life/end of support software and convene an international staff-level working group on SBOM.

Pillar Four | Investing in a Resilient Future

  • Drive Key Cybersecurity Standards (4.1.3, 4.3.3): Technical standards are foundational to the Internet, and U.S. leadership in this area is essential to the vibrancy and security of cyberspace. Consistent with the National Standards Strategy, the National Institute of Standards and Technology (NIST) will convene the Interagency International Cybersecurity Standardization Working Group to coordinate major issues in international cybersecurity standardization and enhance U.S. federal agency participation in the process. NIST will also finish standardization of one or more quantum-resistant public-key cryptographic algorithms.

Pillar Five | Forging International Partnerships to Pursue Shared Goals

  • International Cyberspace and Digital Policy Strategy (5.1.1 and 5.1.2): Cyberspace is inherently global, and policy solutions must reflect close collaboration with our partners and allies. The Department of State will publish an International Cyberspace and Digital Policy Strategy that incorporates bilateral and multilateral activities. State will also work to catalyze the development of staff knowledge and skills related to cyberspace and digital policy that can be used to establish and strengthen country and regional interagency cyber teams to facilitate coordination with partner nations.

Article link: https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/13/fact-sheet-biden-harrisadministration-publishes-thenational-cybersecurity-strategyimplementation-plan/

National Cyber Director Lays out 69 Steps to Implement National Plan

Posted by timmreardon on 07/14/2023
Posted in: Uncategorized.

The White House’s Office of the National Cyber Director (NCD) today released its much-awaited marching orders to implement the National Cybersecurity Strategy (NCS) that it published in March.

On the Federal agency front, the implementation plan enlists the additional efforts of several agencies that already are doing some of the heavy lifting on many aspects of cybersecurity work and policy.

Similarly, key issues that will get more attention include some that are already well-known in policy circles – including creation of software bills of material, fighting ransomware and other cybercrime, improving incident response work, and pushing harder for international cybersecurity harmonization.

And high atop the initiatives featured in the plan released today, the NCD said it is preparing a request for information on “cybersecurity regulatory harmonization” for critical infrastructure that it plans to publish “in the near future.”

When it rolled out the NCS earlier this year, the National Cyber Director keyed on multiple focus points – including continuing efforts to improve security in already-regulated critical infrastructure sectors, a high-level goal of shifting more security responsibility onto providers of tech products and services, and a robust focus on using “all tools of national power” to go after attackers.

In the implementation plan released today, the NCD shaped its focus around two goals: “Ensuring that the biggest, most capable, and best-positioned entities – in the public and private sectors – assume a greater share of the burden for mitigating cyber risk,” and “increasing incentives to favor long-term investments into cybersecurity.”

Orders to Agencies

The implementation plan features “more than 65 high-impact initiatives requiring executive visibility and interagency coordination that the Federal government will carry out to achieve the strategy’s objectives,” NCD said.

Each of the 69 initiatives has been tasked to a Federal agency, with a timeline for completion, NCD said.

Taking on many of those initiatives will be 18 Federal agencies, NCD said. Agencies making prominent showings on NCD’s initiatives list include the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, Justice Department, State Department, and the Commerce Department’s National Institute of Standards and Technology (NIST).

NCD added that while the implementation plan represents a “whole of government” approach, the entire effort also will encompass Congress, state, local, Tribal, and territorial governments, the private sector, civil society, and international partners.

NCD said it will coordinate activities under the plan, and work with the Office of Management and Budget (OMB) “to ensure funding proposals in the President’s Budget Request are aligned” with the implementation plan.

NCD also pledged today that the implementation plan will be a “living document,” and will be updated annually.

Steps Already Taken 

Some of the 69 tasks on the list have already been completed – including the White House’s release in June of its fiscal year 2025 cybersecurity priorities, the Defense Department’s delivery to Congress in May of its 2023 cyber strategy, and the creation in June of the Justice Department’s National Security Cyber Division.

Big To-Do List 

Among the 65 initiatives listed in the plan, NCD highlighted some of the major steps in a fact sheet released by the White House today:

  • CISA will lead an effort to update the National Cyber Incident Response Plan “to more fully realize the policy that ‘a call to one is a call to all.’” NCD said, and to offer “clear guidance to external partners on the roles and capabilities of Federal agencies in incident response and recovery.”
  • CISA and the FBI through the existing Joint Ransomware Task Force will lead efforts to combat ransomware and other cybercrime. The FBI will work with Federal, international, and private sector partners to “carry out disruption operations against the ransomware ecosystem, including virtual asset providers that enable laundering of ransomware proceeds and web fora offering initial access credentials or other material support for ransomware activities.” At the same time, CISA will lead efforts to offer resources “such as training, cybersecurity services, technical assessments, pre-attack planning, and incident response to high-risk targets of ransomware, like hospitals and schools, to make them less likely to be affected and to reduce the scale and duration of impacts if they are attacked,” NCD said.
  • CISA will continue to lead work with stakeholders on identifying and reducing “gaps in software bill of materials (SBOM) scale and implementation,” and also will “explore requirements for a globally-accessible database for end of life/end of support software and convene an international staff-level working group on SBOM.”
  • NIST will “convene the Interagency International Cybersecurity Standardization Working Group to coordinate major issues in international cybersecurity standardization and enhance U.S. federal agency participation in the process,” and finish standardization of one or more quantum-resistant public key cryptographic algorithms, NCD said. “Technical standards are foundational to the Internet, and U.S. leadership in this area is essential to the vibrancy and security of cyberspace,” it said.
  • The State Department will create an International Cyberspace and Digital Policy Strategy “that incorporates bilateral and multilateral activities,” and will also work to “catalyze the development of staff knowledge and skills related to cyberspace and digital policy that can be used to establish and strengthen country and regional interagency cyber teams to facilitate coordination with partner nations.” Cyberspace, said NCD, “is inherently global, and policy solutions must reflect close collaboration with our partners and allies.”

The full scope of directives to agencies are listed in the plan released today.

Article link: https://www.meritalk.com/articles/national-cyber-director-lays-out-69-steps-to-implement-national-plan/

VA secretary: Automation will help agency ‘make better, faster, more informed decisions’ – Nextgov

Posted by timmreardon on 07/13/2023
Posted in: Uncategorized.

By EDWARD GRAHAM June 29, 2023

VA Secretary Denis McDonough said the department is poised “to deliver a completely new, fully integrated experience” for veterans as officials expect a further increase in PACT Act claims.

The Department of Veterans Affairs has launched numerous modernization efforts in recent years to enhance its veteran-facing services, but more must be done to meet the coming deluge of PACT Act-related benefits claims — including using more artificial intelligence tools to bolster existing processes — VA Secretary Denis McDonough said at the DigitalVA Expo Thursday. 

The PACT Act, which was signed into law by President Joe Biden last August, increased the number of veterans and their family members eligible to receive benefits as a result of exposure to burn pits and other toxic substances. A VA official said during a media roundtable last month that over 560,000 disability benefits claims had already been filed since the law’s enactment.

While McDonough noted the department’s progress in modernizing and enhancing many of its digital and telehealth services, he said conducting a simple task — such as having a veteran update their personal information — “in too many cases still requires visits to multiple outdated websites.”

As the VA experienced a surge in telehealth visits as a result of the coronavirus pandemic, the department worked to streamline many of its disparate claims, benefits and health services into a centralized mobile app that launched in July 2021.

With VA’s backlog of benefits claims expected to peak later this year as a result of the PACT Act, McDonough said the department must continue “building on the innovation that our providers have led” over the past several years, since “we don’t have a choice.”

He said this includes prioritizing technologies that will allow the department to speed up its claims processing timeframe while also reducing system outages and downtimes — efforts that have been bolstered by VA partnering with private sector companies “to leverage best practices in AI, natural language processing, machine learning [and] robotic process automation technology.

“That means using automation to help us make better, faster, more informed decisions, improving veteran health outcomes and benefits decisions, while eliminating redundant administrative tasks and workflows and promoting jointness across VA — including with our federal and community partners,” McDonough added. 

A report released by the VA Office of Inspector General earlier this month noted that the Veterans Benefits Administration, which oversees veterans’ benefits programs within the department, plans to use IT system modernization tools and automation to help meet the influx of PACT Act claims. 

The report’s release came after lawmakers on two House Veterans’ Affairs subcommittees also called earlier this month for the department to speed up its deployment of advanced automation tools to help reduce menial tasks that could prolong the processing of veterans’ claims. 

McDonough said VA is “early in this journey” when it comes to using more advanced technologies and automation, but that the department is poised “to deliver a completely new, fully integrated experience in which the support vets need is just a few clicks away.

“These emerging technologies promise to serve more veterans more efficiently, more effectively, more equitably and more accurately,” he added.

Article link: https://www.nextgov.com/modernization/2023/06/va-secretary-automation-will-help-agency-make-better-faster-more-informed-decisions/388117/

From McKinsey: Lou Gerstner on corporate reinvention and values

Posted by timmreardon on 07/12/2023
Posted in: Uncategorized.

Lou Gerstner on corporate reinvention and values

The former IBM CEO offers his thoughts on the principles and strategies that sustain a company in the long run.

September 1, 2014 | Interview

The former IBM CEO offers his thoughts on the principles and strategies that sustain a company in the long run.

DOWNLOADS

Article (PDF-2 MB)

Lou Gerstner will always be known as the man who saved IBM after resuscitating, then reinvigorating, the near-bankrupt company when he took over as chairman and CEO in 1993. Gerstner’s career, though, spanned 43 years, which also included more than a decade at McKinsey, senior positions at American Express, and four years as chairman and CEO of RJR Nabisco. Since stepping down from IBM in 2002 he has continued to lead an active “portfolio” life in education, healthcare, and private equity. In this conversation with former McKinsey managing director Ian Davis, he reflects on the DNA of companies that keep on creating value.

Sidebar

Lou Gerstner biography

Lou Gerstner

Vital statistics

Born March 1, 1942, in Mineola, New York

Education

Graduated with a bachelor’s degree in engineering from Dartmouth College in 1963 and an MBA from Harvard Business School in 1965

Career highlights

The Carlyle Group (2003–present)

  • Senior adviser (2008–present)
  • Chairman (2003–08)

IBM (1993–2002)

  • Chairman and CEO

RJR Nabisco (1989–93)

  • Chairman and CEO

American Express (1978–89)

  • Various executive positions

McKinsey & Company (1965–78)

  • Director

Fast facts

Awarded the designation of honorary Knight of the British Empire from Queen Elizabeth II, in June 2001, for his work in education and business

Chairman of the board of directors at the Broad Institute of MIT and Harvard 

Vice chairman of Memorial Sloan Kettering Cancer Center’s boards of overseers and managers

Received the American Museum of Natural History’s Distinguished Service to Science and Education Award 

The Quarterly: How do you think about corporate longevity? Does it help executives if their companies explicitly aim to be around a long time, by which I mean a generation or more?

Lou Gerstner: I don’t think so. It seems to me that companies should focus on trying to be successful five years from now, perhaps ten. If your business has already been around, say, 20 years, I don’t see how it can help the management team if one of the primary objectives is getting to 100. It’s not something they can execute on. I’m not sure what you can do to guarantee success in that time frame, or even on a 20- to 30-year view.

The Quarterly: So why do some businesses last much longer than others?

Lou Gerstner: A lot of it has to do with the industry. Many companies that have made it over many years have been in slow-changing industries that haven’t been much affected by the external environment, that are characterized by significant scale economies, or that are heavily regulated.

Take food production. The big global players in this sector are not, typically, huge profit generators, and their turnover only increases modestly—say, by 1 to 2 percent a year, in line with demographic trends. But those businesses are in a nice place: there’s not much new competition, and the changes they’re up against, whether technological or otherwise, tend to be relatively small. In the automobile industry, it’s long cycle times and scale economies that deter others. And in banking, it’s been regulation. You see a lot of small bank start-ups in the US, but the reason that so many of the large players have been around a long time is that state and federal laws make it difficult to start a national bank.

The Quarterly: Conversely, the entry and exit barriers are much lower in, say, software or technology, where capital requirements for new entrants can be relatively light.

Lou Gerstner: That’s true, and it’s in those sectors that companies are most often subject to strong competition, technological innovation, and regulatory change. The question, at the end of the day, is whether leaders in these and other industries can adjust. I would argue that more often than not they can’t. Think about all those companies in the computer or consumer-electronics industries, like Control Data or RCA. Corporate longevity is either driven by the leadership team that is there or by a new one that comes in from the outside and is able to manage the transition to a significantly different competitive environment. There was nothing that said American Express or IBM couldn’t go out of business, and IBM very nearly did. For a long time, American Express wouldn’t go into credit cards, because it thought that would cannibalize its Travelers Cheques business. When I arrived at IBM in 1993, there was no inheritable or even extendable platform. The company was dying.

The Quarterly: Is there something in the DNA of those firms that have endured—perhaps a willingness to respond to a change of direction—that enables them to survive?

Lou Gerstner: In anything other than a protected industry, longevity is the capacity to change, not to stay with what you’ve got. Too many companies build up an internal commitment to their existing businesses, and there’s the problem: it’s very, very difficult to “eat your seed corn,” go into other activities, or radically change something fundamental about what you’ve been doing, like the pricing structure or distribution system. Rather than changing, they find it easier to just keep doing the same things that brought them success. They codify why they’re successful. They write guidebooks. They create teaching manuals. They create whole cultures around sustaining the model. That’s great until the model gets threatened by external change; then, all too often, the adjustment is discontinuous. It requires a wrench, often from an outside force. Andy Grove put it well when he said, “only the paranoid survive.”

Remember that the enduring companies we see are not really companies that have lasted for 100 years. They’ve changed 25 times or 5 times or 4 times over that 100 years, and they aren’t the same companies as they were. If they hadn’t changed, they wouldn’t have survived. If you could take a snapshot of the values and processes of most companies 50 years ago—and did the same with a surviving company in 2014—you would say it’s a different company other than, perhaps, its name and maybe its purpose and maybe its industry. The leadership that really counts is the leadership that keeps a company changing in an incremental, continuous fashion. It’s constantly focusing on the outside, on what’s going on in the marketplace, what’s changing there, noticing what competitors are doing.

The Quarterly: How important are values in sustaining companies, even those that change? And can values be an enemy of change?

Lou Gerstner: I think values are really, really important, but I also think that too many values are just words. When I teach at the IBM School, I use the annual reports of about ten major companies that invariably announce, on the back page or inside back page, “These are our values.” What’s striking to me is that almost all the values are the same. “We focus on our customers; we value teamwork; we respect the dignity of our workforce.”

But when you go inside those companies, you often see that the words don’t translate into practices. When I arrived at IBM, one of my first questions was, “Do we have teamwork?,” because the new strategy crucially depended on our ability to provide an integrated approach to our customers. “Oh, yes, Lou, we have teamwork,” I was told. “Look at those banners up there. Mr. Watson put them up in 1938; they’re still there. Teamwork!” “Oh, good,” I responded. “How do we pay people?” “Oh, we pay on individual performance.” The rewards system is a powerful driver of behavior and therefore culture. Teamwork is hard to cultivate in a world where employees are paid solely on their individual performance.

I found a similar problem at American Express, where our stated distinguishing capability was the quality of the service we delivered versus that of our competitors Visa and MasterCard, which were owned by a diverse group of bank holding companies. It turned out that on a quarterly basis, we only measured financial performance and that the assessment of our service quality, on crucial customer-satisfaction matters such as statement clarity or phone-call wait times, was only done once a year. People do what you inspect—not what you expect.

If the practices and processes inside a company don’t drive the execution of values, then people don’t get it. The question is, do you create a culture of behavior and action that really demonstrates those values and a reward system for those who adhere to them? At American Express, we had an annual award for people, all over the world, who delivered great service. One winner I’ll never forget was a young chauffeur whose car windscreen had smashed and hit him in the head while he was driving an American Express client to the airport. Bleeding profusely, he continued the journey and got the client to the plane on time. By explicitly recognizing through worldwide communications the incredible commitment of people like this (and the rewards they receive), you can get people to behave in a certain way. Simply talking about it as part of your values isn’t enough.

The Quarterly: Some companies with reputedly strong values still find it hard to change. Do values ever get in the way of the adjustment you are talking about?

Lou Gerstner: I find it hard to think about bad values per se. The problem, as I say, comes when values are simply ignored and not reinforced every day by the internal processes of the company. The fault lies in not demanding adherence to the important values: sensitivity to the marketplace, awareness of competitors, and a willingness to deliver to the customer whatever he or she wants, regardless of what your internal historical assets have been.

In that sort of situation, it’s very hard to change. IBM was enamored with mainframes because mainframes made all the money. But if we were going to change, we had to find a way to take the money away from mainframes and allocate it to something else. So it isn’t what companies say; it’s what they do. Do you think Eastman Kodak didn’t see the move from analog to digital photography? Of course they did. They invented it. But if they had a value—I’m sure they did—of being market sensitive and following the customer, they didn’t follow it. They didn’t make the shift.

The Quarterly: Are there any relevant lessons from your post-IBM experience in the private-equity industry?

Lou Gerstner: I think that private-equity activity tends to come at the end of the corporate cycle, when a company is already in trouble, has been mismanaged, or is an orphan in need of new leadership. So private equity is another outside agent that comes in when management has failed to do what it needs to do.

The Quarterly: Is the management of generational change within a company an important component of adaptability and staying sensitive to the market? Does involving younger people meaningfully in routine decisions help create the right conditions for change?

Lou Gerstner: The problem with all of these things is that there’s a ditch on both sides of the road. I’ve known times in my career when older and wiser heads restrained younger people carried away by short-term dollar signs. So it’s hard to generalize, but certainly you have to listen to all the executive team. Organizations, in my experience, tend to be healthiest where there is a supremacy of ideas, where people are willing to listen to the youngest person in the room—provided, of course, that he or she has the facts.

My successor at IBM has embraced what we called an IBM jam. It goes on for several days, and every IBMer can dial in and discuss important topics like cloud computing or mobile computing. That represents a real effort at IBM to tap the ideas of the younger, newer employees, not just the senior executives. Always listening to the younger folks won’t guarantee you the best strategy. But if you don’t listen to them at all, you won’t get it either.

The Quarterly: Do you think ownership structure makes a difference? We’ve noticed that a large proportion of enduring companies have been privately owned.

Lou Gerstner: There are obviously many more private companies than public companies, certainly in the United States, so you would probably expect this outcome. One thing I would say, though, is that the preoccupation with short-term earnings in the public-company environment—not something private companies are so concerned with—is quite destructive of longevity.

And that’s a bad thing. Who says the analysts are right when they mark down a company’s stock just because it makes 89 cents in the first quarter rather than the 93 cents forecast by the market? Are they thinking about the long-term competitiveness of the company? Are they thinking this would have been a good time to reinvest, or are they just churning out numbers and saying they want earnings per share to go up every quarter? This kind of short-term pressure on current earnings can lead to underinvestment in the long-term competitiveness of a business.

It’s very interesting to me that a company like Amazon has been able to convince the world that it doesn’t have to make meaningful earnings, because it’s investing for the future—building warehouses and building distribution and building hardware and software applications. It’s so rare to see that happening. It’s as if they’re acting like a private company. It could be that private companies can operate without the pressure of trading off short-term performance against long-term investment.

ABOUT THE AUTHOR(S)

This interview was conducted by Ian Davis, former managing director of McKinsey, and McKinsey Publishing’s Tim Dickson.

Article link: https://www.mckinsey.com/featured-insights/leadership/lou-gerstner-on-corporate-reinvention-and-values?cid=app

Inside NIST’s effort to lay the groundwork for a functional quantum computer – Nextgov

Posted by timmreardon on 07/10/2023
Posted in: Uncategorized.

By Alexandra Kelley, July 7, 2023

The latest quantum computer-adjacent device is part of NIST’s broader plan to help agencies move forward as they develop more emerging technology infrastructure.

Researchers affiliated with the National Institute of Standards and Technology unveiled a new device last week that aims to cut down the “noise” generated by quantum computing systems’ information processing, part of a larger agency agenda to meet private sector innovation in the quantum information science and technology field. 

The device, a programmable switch that can improve connectivity between two qubits, or quantum units of information, was initially introduced in a research paper. One of the authors, NIST physicist Ray Simmonds, told Nextgov/FCW that with these devices, NIST aims to enable future advances in quantum computing infrastructure. 

“NIST is trying to…say, ‘what do we need to do to make that technology happen?’” he said. “Right now, Google and IBM have been working on scaling something up, and they’re using a technology that can work on a larger scale…but it’s not going to have all the functionality you need necessarily to make a robust machine at this point. There’s other things that have to get sorted.”

A robust switch between qubits, one that helps them perform calculations and other operations more accurately, is one of the external components key to emerging QIST architectures.

“What we’ve been doing is to try to push the envelope and say, ‘Can we make a system where all these individual elements can be turned on and off?’” he said.

As quantum computing is still an emerging technology field — where universally operable quantum computers are an estimated decade away — experts are uncertain as to the exact system architecture it will require. 

Devices like NIST’s switch intend to promote a level of flexibility in operating a future quantum computer, especially as the field continues to evolve. 

“We feel like this is the kind of thing that’s going to need to happen if you want to make a machine that’s more flexible, and to push to the next level to make it have more connections and more ways of operating the device,” Simmonds said. “We don’t know exactly what is the best architecture to make a quantum machine yet.”

The toggle tackles a fundamental element required for precise quantum computing — noise reduction between qubits — but also can turn on the measurement of both qubits simultaneously, a feature that can help reduce quantum computational errors. 

“One of NIST’s missions is to not only figure out how to measure things better but more efficiently and with higher fidelity; and making those measurements better but figuring out what are the next measurements that need to happen to enable technologies,” Simmonds said. 

He added that as technologies continue to advance, specific measurements and standards applicable to them will change. In this vein, NIST is “really trying to push the whole U.S. forward and help bring those new ideas to bear,” he said.

Article link: https://www.nextgov.com/emerging-tech/2023/07/inside-nists-effort-lay-groundwork-functional-quantum-computer/388316/

Exposed Interfaces in US Federal Networks: A Breach Waiting to Happen – HACKRead

Posted by timmreardon on 07/07/2023
Posted in: Uncategorized.

The research mainly aimed at examining VPNs, firewalls, access points, routers, and other remote server management appliances used by top government agencies in the United States.

By Waqas, June 28, 2023

Cybersecurity researchers at Censys referred to publicly-accessible exposed interfaces as “low-hanging fruit” for cybercriminals, as they can easily gain unauthorized access to crucial assets.

Researchers at Censys, an attack surface management company, have discovered hundreds of devices linked to federal networks that have remotely accessible management interfaces. These interfaces can allow for the controlling and configuring of federal agency networks through the public internet.

Shocking Details Emerge about Federal Network Devices

According to a blog post from the Censys Research Team, published on June 26, an examination of the attack surfaces of approximately fifty sub-organizations within the federal civilian executive branch (FCEB) revealed 13,000 different hosts spread across 100 autonomous systems. 

Further probing of the services running on a subset of 1,300 FCEB hosts accessible through IPv4 addresses uncovered hundreds of devices with publicly accessible management interfaces. This revelation falls within the scope of CISA’s BOD 23-02 (Binding Operational Directive).

What is BOD 23-02?

CISA’s BOD 23-02 helps federal agencies eliminate the risks associated with remotely accessible management interfaces. It requires federal civilian agencies to remove certain networked management interfaces from the internet, or to protect them with Zero Trust Architecture capabilities that enforce access control, within fourteen days of discovery.
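As a rough illustration of the directive’s first requirement, a defender can spot-check whether a host’s management ports answer from the public internet with a plain TCP connect probe. The sketch below is a minimal, hypothetical example — the port list and the commented-out host are placeholders, and it is no substitute for an attack surface management platform. Only probe hosts you are authorized to scan.

```python
import socket

# Management ports commonly flagged in the research: SSH (22), Telnet (23),
# and HTTPS admin consoles (443). SNMP runs over UDP and needs a different probe.
MGMT_PORTS = (22, 23, 443)

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_interfaces(hosts, ports=MGMT_PORTS):
    """Yield (host, port) pairs whose management ports answer from this vantage point."""
    for host in hosts:
        for port in ports:
            if is_port_open(host, port):
                yield host, port

# Hypothetical usage -- substitute hosts you are authorized to scan:
# for host, port in exposed_interfaces(["198.51.100.7"]):
#     print(f"ALERT: management port {port} on {host} is publicly reachable")
```

A probe like this only shows reachability from one vantage point; BOD 23-02 compliance additionally depends on what the interface is and how access to it is controlled.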

What are the Dangers of Internet-Exposed Interfaces?

CISA notes that threat actors are taking a keen interest in targeting certain classes of devices, especially those supporting network infrastructures, as it helps them evade detection. 

After compromising these devices, attackers can obtain full access to the network. Misconfigurations, insufficient or outdated security measures, and unpatched software make devices vulnerable to exploitation. If the device management interface is directly connected to or accessible from a public-facing internet, it will be far more damaging for the organization.

Which Devices Are Impacted?

Researchers mainly examined VPNs, firewalls, access points, routers, and other remote server management appliances. They found around 250 web interfaces for hosts exposing network appliances, many of them running remote protocols such as SSH and Telnet. 

Most of the impacted devices were Cisco network appliances with a publicly accessible Adaptive Security Device Manager interface, whereas they also discovered enterprise Cradlepoint router interfaces revealing wireless network details. Other impacted products include Fortinet FortiGuard, SonicWall, and other popular firewalls.

In addition, researchers observed exposed remote access protocols, including NetBIOS, FTP, SNMP, and SMB; out-of-band remote server management devices like the Lantronix SLC console server; physical Barracuda Email Security Gateway appliances; Nessus vulnerability scanning servers; HTTP services that exposed directory listings; managed file transfer services such as GoAnywhere, MOVEit, and SolarWinds Serv-U; and over 150 instances of end-of-life software. 

Fifteen of these remote access protocols, each with multiple known vulnerabilities exploitable by threat actors, were running on exposed FCEB hosts. The report highlights the need for federal agencies to be more proactive in safeguarding their digital assets and to improve security mechanisms across all systems to bring devices into compliance with CISA’s BOD 23-02.

Article link: https://www.hackread.com/exposed-interfaces-us-federal-networks-breach/

The AI Apocalypse: A Scorecard > How worried are top AI experts about the threat posed by large language models like GPT-4? – IEEE Spectrum

Posted by timmreardon on 07/06/2023
Posted in: Uncategorized.

What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?

If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.

On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.

In between are researchers who worry about the abilities of GPT-4 and future instances of generative AI to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deep fakery on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.

Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”

Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.

And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.


SAM ALTMAN

CEO, OpenAI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

LLMs are an important step toward artificial general intelligence, but it will be a “slow takeoff” in which we’ll have time to address the real risks that the technology brings.

Quote

“Future versions of AI will solve some of our most pressing problems, really increase the standard of life, and also figure out much better uses for human will and creativity.”

Photo: OpenAI

JACOB ANDREAS

Professor, MIT

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Large language models such as GPT-4 offer “a massive space of possibilities,” but even the best models today are not reliable or trustworthy, and a lot of work will be required to fix these problems.

Quote

“The thing that I’m most scared about has to do with… truthfulness and coherence issues.”

Photo: Gretchen Ertl/MIT

EMILY M. BENDER

Professor, University of Washington

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Powerful corporations’ hype about LLMs and their progress toward AGI create a narrative that distracts from the need for regulations that protect people from these corporations.

Quote

“We can imagine other futures, but to do so, we have to maintain independence from the narrative being pushed by those who believe that AGI is desirable and that LLMs are the path to it.”

Photo: Emily M. Bender

YOSHUA BENGIO

Professor, University of Montreal

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Maybe

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

We are not prepared for the proliferation of powerful AI tools. The danger will not come from these tools becoming autonomous but rather from military applications and from misuse of the tools that sows disinformation and discrimination.

Quote

“We have passed a critical threshold: Machines can now converse with us and pretend to be human beings. This power can be misused for political purposes at the expense of democracy.”

Photo: Camille Gladu-Drouin

NICK BOSTROM

Professor, University of Oxford

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

Sentience is a matter of degrees, and today’s LLMs could be considered to have some small degree of sentience. Future versions could have more.

Quote

“Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.”

Photo: University of Oxford

RODNEY BROOKS

Professor emeritus, MIT, and cofounder, RobustAI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

The rise of large language models and the accompanying furor is not much different from many previous such upheavals in technology in general and AI in particular.

Quote

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

Photo: Christopher Michel

SÉBASTIEN BUBECK

Research manager, Microsoft Research

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

GPT-4 has a type of generalized intelligence and exhibits a flexible understanding of concepts. But it lacks certain fundamental building blocks of human intelligence, such as memory and the ability to learn.

Quote

“All of the things I thought [GPT-4] wouldn’t be able to do? It was certainly able to do many of them—if not most of them.”

Photo: Microsoft Research

JOY BUOLAMWINI

Founder, Algorithmic Justice League

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

While LLMs are not likely to lead to a civilization-crushing AGI, people’s fear of that hypothetical can be used to slow down the progress of AI and address pressing near-term concerns.

Quote

“Honest question: If you believe you are unleashing the end of the world, why continue your current path?”

Photo: Getty Images

TIMNIT GEBRU

Founder, Distributed AI Research Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Discussion of AGI is a deliberate distraction from the real and pressing risks of today’s LLMs. The companies responsible need regulations to increase transparency and accountability and to end exploitative labor practices.

Quote

“Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of ‘powerful digital minds.’”

Photo: Timnit Gebru

ALISON GOPNIK

Professor, UC Berkeley

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Human intelligence evolved from our interactions with the natural world. It’s unlikely that LLMs will become intelligent through text alone.

Quote

“What [LLMs] let us do is take all the words, all the text that people have written over all time, and summarize those in a way that is effective and lets us interact. I think what it isn’t is a new kind of intelligence.”

Photo: Gary Doak/Alamy

DAN HENDRYCKS

Director and cofounder, Center for AI Safety

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

Economic and strategic pressures are pushing AI companies to develop powerful AI systems that can’t be reliably controlled. Unless AI safety research becomes a worldwide priority, humanity is in danger of extinction.

Quote

“Whereas AI researchers once spoke of ‘designing’ AIs, they now speak of ‘steering’ them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand.”

Photo: Dan Hendrycks

GEOFFREY HINTON

Professor, University of Toronto

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

LLM technology is advancing too rapidly; it shouldn’t advance any further until scientists are confident that they can control it.

Quote

“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI. And now I think it may be 20 years or less.”

Photo: University of Toronto

CHRISTOF KOCH

Chief scientist, MindScope Program, Allen Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

GPT-4 and other large language models will exceed human capabilities in some endeavors, and approach AGI capabilities, without human-type understanding or consciousness.

Quote

“What it shows, very clearly, is that there are different routes to intelligence.”

Photo: Allen Institute

JARON LANIER

Computer scientist, entrepreneur, author, artist

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Generative AI and chatbots are an important advance because they offer users choice. By offering a range of responses, these models will erode the illusion of the “monolithic truth” of the Internet and AI.

Quote

“This idea of surpassing human ability is silly because it’s made of human abilities. It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

Photo: Jaron Lanier

YANN LECUN

Chief AI scientist, Meta

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

GPT-4 itself is dazzling in some ways, but it is not particularly innovative. And GPT-style large language models will never lead to an AI with common sense and real, human-type understanding.

Quote

“The amplification of human intelligence by machine will enable a new renaissance or a new age of enlightenment, propelled by an acceleration of scientific, technical, medical, and social progress thanks to AI.”

Photo: Meta

GARY MARCUS

Professor, NYU

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

Making large language models more powerful is necessary but not sufficient to create an AGI.

Quote

“There is still an immense amount of work to be done in making machines that truly can comprehend and reason about the world around them.”

Photo: NYU

MARGARET MITCHELL

Chief ethics scientist, Hugging Face

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Narratives that present AGI as ushering in either a utopia or extinction both contribute to hype and disguise the concentration of power in a handful of corporations.

Quote

“Ignoring active harms right now is a privilege that some of us don’t have.”

Photo: Hugging Face

MELANIE MITCHELL

Professor, Santa Fe Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Both the abilities and the dangers of LLMs are overhyped. We should be thinking more about LLMs’ immediate risks, such as disinformation and displays of harmful bias.

Quote

“We humans are continually at risk of over-anthropomorphizing and over-trusting these systems, attributing agency to them when none is there.”

Photo: Melanie Mitchell

ANDREW NG

Founder and CEO, Landing AI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

While the latest LLMs are far from being AGIs, they have some superhuman capabilities that can be harnessed for human advancement.

Quote

“In the past year, I think we’ve made one year of wildly exciting progress in what might be a 50- or 100-year journey.”

Photo: Andrew Ng

MAX TEGMARK

Professor, MIT and president, Future of Life Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

More powerful LLMs and AI systems should be developed only once researchers are confident that their effects will be positive and their risks will be manageable.

Quote

“Our letter mainstreamed pausing; [the statement from the Center for AI Safety] mainstreams extinction. Now a constructive open conversation can finally start.”

Photo: Max Tegmark

MEREDITH WHITTAKER

President, Signal Foundation

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

LLMs are already doing serious damage to society by reflecting all the biases inherent in their training data.

Quote

“Why do we need to create these? What are the collateral consequences of deploying these models in contexts where they’re going to be informing people’s decisions?”

Photo: Signal Foundation

ELIEZER YUDKOWSKY

Cofounder, Machine Intelligence Research Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

LLMs could lead to an AGI, and the first superintelligent AGI will, by default, kill literally everyone on the planet. And we are not on track to solve this problem on the first try.

Quote

“Unaligned operation at a dangerous level of intelligence kills everybody on Earth, and then we don’t get to try again.”

Article link: https://spectrum.ieee.org/artificial-general-intelligence

Unique Approaches to Lovell FHCC Federal EHR Deployment – FEHRM

Posted by timmreardon on 07/05/2023
Posted in: Uncategorized.

July 5, 2023

Captain James A. Lovell Federal Health Care Center (FHCC) will receive the same single, common federal electronic health record (EHR) as other Department of Defense (DOD) and Department of Veterans Affairs (VA) sites. The federal EHR provides flexibility to configure the EHR to meet specific facilities’ needs while still maintaining interoperability between the Departments. Through established governance and change control processes, DOD, VA and Department of Homeland Security’s U.S. Coast Guard (USCG) sites can each request configuration changes, as long as these changes do not interfere with or prohibit interoperability between the Departments. As a result, any Department using the federal EHR can access these changes and benefit from additional capabilities its facilities may need.

As a fully integrated joint sharing site, Lovell FHCC does require unique approaches to some configurations and deployment activities. These include the following:

• Patient Care Location (PCL) Hierarchies: PCL hierarchies correspond to physical locations of patients receiving health care services, with facilities at the top level of the hierarchy, followed by buildings, nursing units, rooms and beds. Unlike other sites that use either a DOD or VA PCL hierarchy, Lovell FHCC will use two PCL hierarchies—one for each Department, in their respective facilities. Using both DOD and VA PCL hierarchies enables each Department to satisfy its respective statutory requirements.
• Provider User Role Assignments: User roles enable Department-specific workflows, and assignments determine the training staff receive on the federal EHR. Lovell FHCC staff will be assigned a DOD or VA user role based on the PCL alignment of the clinic or location where they provide care, not their Department alignment. Some areas, such as pharmacy, behavioral health and occupational health, will require staff assignment of both DOD and VA roles.
• Deployment Responsibilities: The FEHRM, the DOD Healthcare Management System Modernization Program Management Office and VA’s Electronic Health Record Modernization Integration Office each have roles and responsibilities in deploying the federal EHR at Lovell FHCC. These responsibilities vary for each of the high-level deployment events. For the adoption and site activation deployment events, DOD will lead, working closely with VA to supplement. Conversely, each Department will provide its respective training, infrastructure and devices to Lovell FHCC, based on PCL alignment. Regardless of role alignment, the Departments will execute all deployment events in a coordinated manner.
• Patient Portals: Lovell FHCC will use both DOD and VA patient portals. DOD beneficiaries will use the DOD patient portal, and VA beneficiaries will use the VA patient portal. Dual-eligible patients can use either portal.

Article link: https://www.linkedin.com/pulse/unique-approaches-lovell-fhcc-federal-ehr-deployment-fehrm

Eric Schmidt: This is how AI will transform how science gets done – MIT Technology Review

Posted by timmreardon on 07/05/2023
Posted in: Uncategorized.

Science is about to become much more exciting — and that will affect us all, argues Google’s former CEO.

By Eric Schmidt July 5, 2023

It’s yet another summer of extreme weather, with unprecedented heat waves, wildfires, and floods battering countries around the world. In response to the challenge of accurately predicting such extremes, semiconductor giant Nvidia is building an AI-powered “digital twin” for the entire planet. 

This digital twin, called Earth-2, will use predictions from FourCastNet, an AI model that uses tens of terabytes of Earth system data and can predict the next two weeks’ weather tens of thousands of times faster and more accurately than current weather forecasting methods.

Conventional weather prediction systems can generate around 50 predictions for the week ahead. FourCastNet can instead predict thousands of possibilities, accurately capturing the risk of rare but deadly disasters and thereby giving vulnerable populations valuable time to prepare and evacuate. 
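
Why ensemble size matters for rare events can be shown with a toy Monte Carlo sketch. Everything here is an invented stand-in, not FourCastNet: a scalar "anomaly" model plays the role of the learned surrogate, and the numbers are illustrative only.

```python
import random

def toy_forecast(state, steps=14):
    """Invented stand-in for a learned weather surrogate: evolves a
    scalar 'anomaly' two weeks ahead with random daily shocks."""
    x = state
    for _ in range(steps):
        x = 0.9 * x + random.gauss(0, 0.5)
    return x

def extreme_risk(initial, members, threshold):
    """Run an ensemble of perturbed forecasts and estimate the
    probability that the outcome exceeds an extreme threshold."""
    hits = sum(
        toy_forecast(initial + random.gauss(0, 0.1)) > threshold
        for _ in range(members)
    )
    return hits / members

random.seed(0)
# A ~50-member ensemble resolves a roughly 1% tail event crudely...
print(extreme_risk(2.0, members=50, threshold=3.0))
# ...while thousands of cheap surrogate runs pin it down far better.
print(extreme_risk(2.0, members=5000, threshold=3.0))
```

The point of the sketch: when each forecast is cheap enough to run thousands of times, the tail of the distribution — the rare but deadly disaster — becomes measurable instead of invisible.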

The hoped-for revolution in climate modeling is just the beginning. With the advent of AI, science is about to become much more exciting — and in some ways unrecognizable. The reverberations of this shift will be felt far outside the lab and will affect us all. 

If we play our cards right with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, AI can rewrite the scientific process. We can build a future where AI-powered tools will both save us from mindless and time-consuming labor and also propose creative inventions and discoveries, encouraging breakthroughs that would otherwise take decades.

Although AI has in recent months become almost synonymous with large language models, or LLMs, science draws on a multitude of model architectures that may drive even bigger impacts. In the past decade, most progress in science has come from smaller, “classical” models focused on specific questions, and these models have already brought about profound advances. More recently, larger deep-learning models that incorporate cross-domain knowledge and generative AI have expanded what is possible.

Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic for a drug-resistant pathogen that the World Health Organization labeled as one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean energy revolution. Within healthcare, the FDA has already cleared 523 devices that use AI — 75 percent for use in radiology.

Reimagining science

At its core, the scientific process we all learned in elementary school will remain the same: conduct background research, identify a hypothesis, test it through experimentation, analyze the collected data, and reach a conclusion. But AI has the potential to revolutionize how each of these components looks in the future. 

Artificial intelligence is already transforming how some scientists conduct literature reviews. Tools like PaperQA and Elicit harness LLMs to scan databases of articles and produce succinct and accurate summaries of the existing literature — citations included.

Once the literature review is complete, scientists form a hypothesis to be tested. LLMs at their core work by predicting the next word in a sentence, thereby building up to entire sentences and paragraphs. This technique makes LLMs uniquely suited to scaled problems intrinsic to science’s hierarchical structure and can enable such models to predict the next big discovery in physics or biology.
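
That core mechanic — score every candidate next word, pick one, repeat — can be shown in miniature. The hand-built bigram table below is an invented stand-in for the learned probabilities a real LLM computes with a neural network:

```python
# Toy next-word prediction. A real LLM scores every token in its
# vocabulary with a neural network; this fixed bigram table is a
# hypothetical stand-in for those learned probabilities.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "sat": {"down": 0.6, "<end>": 0.4},
    "dog": {"ran": 0.8, "<end>": 0.2},
    "ran": {"<end>": 1.0},
    "down": {"<end>": 1.0},
}

def generate(start, max_words=10):
    """Greedy decoding: repeatedly append the most probable next word."""
    words = [start]
    while len(words) < max_words:
        nxt = max(bigram_probs[words[-1]].items(), key=lambda kv: kv[1])[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

One word at a time, the model builds a sentence; scale the table up to a network trained on most of the written record, and the same loop produces paragraphs.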

AI can also spread the search net for hypotheses wider and narrow the net more quickly. As a result, AI tools can help formulate stronger hypotheses, such as models that spit out more promising candidates for new drugs. We’re already seeing simulations running multiple orders of magnitude faster than just a few years ago, allowing scientists to try more design options in simulation before carrying out real-world experiments. 

Scientists at Caltech, for example, used an AI fluid simulation model to automatically design a better catheter that prevents bacteria from swimming upstream and causing infections. This will fundamentally shift the incremental process of scientific discovery, allowing researchers to design for the optimal solution from the outset rather than progressing through a long line of progressively better designs, as we saw in years of innovation on lightbulb filaments.

Moving onto the experimentation step, AI will be able to conduct experiments faster, cheaper, and at greater scale. For example, we can build AI-powered machines with hundreds of micropipettes running day and night to create samples at a rate no human could match. Instead of limiting themselves to just six experiments, scientists can use AI tools to run one thousand.

Scientists who are worried about their next grant, publication, or tenure process will no longer be bound to safe experiments with the highest odds of success, instead free to pursue bolder and more interdisciplinary hypotheses. When evaluating new molecules, for example, researchers tend to stick to candidates similar in structure to those we already know, but AI models do not have to have the same biases and constraints. 

Eventually, much of science will be conducted at “self-driving labs” — automated robotic platforms combined with artificial intelligence. Here, we can bring AI prowess from the digital realm into the physical world. Such self-driving labs are already emerging at companies like Emerald Cloud Lab and Artificial, and even at Argonne National Laboratory. 

Finally, at the stage of analysis and conclusion, self-driving labs will move beyond automation and, informed by experimental results they produced, use LLMs to interpret the results and recommend the next experiment to run. Then, as partners in the research process, the AI lab assistant could order supplies to replace those used in earlier experiments and set up and run the next recommended experiments overnight with results ready to deliver in the morning — all while the experimenter is home sleeping.
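
That closed loop — run an experiment, interpret the result, recommend the next one — can be sketched in a few lines. Everything here is an invented placeholder: the yield curve stands in for a real reaction, and a simple hill-climbing heuristic stands in for the AI assistant's interpretation step.

```python
import random

def run_experiment(temperature):
    """Invented stand-in for a robotic experiment: a reaction yield
    that peaks near 65 degrees C, plus measurement noise."""
    return -((temperature - 65.0) ** 2) / 100.0 + random.gauss(0, 0.05)

def recommend_next(history):
    """Invented stand-in for the assistant's 'recommend the next
    experiment' step: perturb the best condition found so far."""
    best_t, _ = max(history, key=lambda h: h[1])
    return best_t + random.choice([-5.0, 5.0])

random.seed(1)
history = [(25.0, run_experiment(25.0))]   # the experimenter's first run
for _ in range(20):                        # the overnight closed loop
    t = recommend_next(history)
    history.append((t, run_experiment(t)))

best_t, best_yield = max(history, key=lambda h: h[1])
print(f"best condition found overnight: {best_t:.0f} C")
```

In a real self-driving lab the recommender would be a far richer model and the experiment a physical robot, but the loop structure — propose, execute, observe, repeat without a human in the inner cycle — is the same.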

Possibilities and limitations

Young researchers might be shifting nervously in their seats at the prospect. Luckily, the new jobs that emerge from this revolution are likely to be more creative and less mindless than most current lab work. 

AI tools can lower the barrier to entry for new scientists and open up opportunities to those traditionally excluded from the field. With LLMs able to assist in building code, STEM students will no longer have to master obscure coding languages, opening the doors of the ivory tower to new, nontraditional talent and making it easier for scientists to engage with fields beyond their own. Soon, specifically trained LLMs might move beyond offering first drafts of written work like grant proposals and might be developed to offer “peer” reviews of new papers alongside human reviewers.

AI tools have incredible potential, but we must recognize where the human touch is still important and avoid running before we can walk. For example, successfully melding AI and robotics through self-driving labs will not be easy. There is a lot of tacit knowledge that scientists learn in labs that is difficult to pass to AI-powered robotics. Similarly, we should be cognizant of the limitations — and even hallucinations — of current LLMs before we offload much of our paperwork, research, and analysis to them. 

Companies like OpenAI and DeepMind are still leading the way in new breakthroughs, models, and research papers, but the current dominance of industry won’t last forever. DeepMind has so far excelled by focusing on well-defined problems with clear objectives and metrics. One of its most famous successes came at the Critical Assessment of Structure Prediction (CASP), a biennial competition where research teams predict a protein’s exact shape based on the order of its amino acids. 

From 2006 to 2016, the average score in the hardest category ranged from around 30 to 40 on CASP’s scale of 1 to 100. Suddenly, in 2018, DeepMind’s AlphaFold model scored a whopping 58. An updated version called AlphaFold2 scored 87 two years later, leaving its human competitors even further in the dust.

Thanks to open-source resources, we’re beginning to see a pattern where industry hits certain benchmarks and then academia steps in to refine the model. After DeepMind’s release of AlphaFold, Minkyung Baek and David Baker at the University of Washington released RoseTTAFold, which uses DeepMind’s framework to predict the structures of protein complexes instead of only the single protein structures that AlphaFold could originally handle. More importantly, academics are more shielded from the competitive pressures of the market and so can venture beyond the well-defined problems and measurable successes that attract DeepMind. 

In addition to reaching new heights, AI can help verify what we already know by addressing science’s replicability crisis. Around 70 percent of scientists report having been unable to reproduce another scientist’s experiment — a disheartening percentage. As AI lowers the cost and effort of running experiments, it will in some cases be easier to replicate — or fail to replicate — results, contributing to a greater trust in science.

The key to replicability and trust is transparency. In an ideal world, everything in science would be open access, from articles without paywalls to open-source data, code, and models. Sadly, given the dangers such models can unleash, it isn’t always realistic to make every model open source. In many cases, the risks of complete transparency outweigh the benefits of trust and equity. Nevertheless, to the extent that we can be transparent with models — especially classical AI models with more limited uses — we should be. 

The importance of regulation

In all these areas, it’s essential to remember the inherent limitations and risks of artificial intelligence. AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands. University of Rochester professor Andrew White was contracted by OpenAI to participate in a “red team” that could expose GPT-4’s risks before it was released. Using the language model and giving it access to tools, White found it could propose dangerous compounds and even order them from a chemical supplier. To test the process, he had a (safe) test compound shipped to his house the next week. OpenAI says it used his findings to tweak GPT-4 before it was released.

Even humans with entirely good intentions can still prompt AIs to produce bad outcomes. We should worry less about creating the Terminator and, as computer scientist Stuart Russell put it, more about becoming King Midas, who wished for everything he touched to turn to gold and thereby accidentally killed his daughter with a hug.

We have no mechanism to prompt an AI to change its goal, even when it reacts to its goal in a way we don’t anticipate. One oft-cited hypothetical asks you to imagine telling an AI to produce as many paperclips as possible. Determined to accomplish its goal, the model hijacks the electrical grid and kills any human who tries to stop it as the paperclips keep piling up. The world is left in shambles. The AI pats itself on the back; it has done its job. (In a wink to this famous thought experiment, many OpenAI employees carry around branded paperclips.)

OpenAI has managed to implement an impressive array of safeguards, but these will only remain in place as long as GPT-4 is housed on OpenAI’s servers. The day will likely soon come when someone manages to copy the model and house it on their own servers. Such frontier models need to be protected to prevent thieves from removing the AI safety guardrails so carefully added by their original developers.

To address both intentional and unintentional bad uses of AI, we need smart, well-informed regulation — of both tech giants and open-source models — that doesn’t keep us from using AI in ways that benefit science. Although tech companies have made strides in AI safety, government regulators are currently woefully underprepared to enact proper laws and should take greater steps to educate themselves on the latest developments.

Beyond regulation, governments — along with philanthropy — can support scientific projects with a high social return but little financial return or academic incentive. There are several areas where time to solution is especially urgent, including climate change, biosecurity, and pandemic preparedness. It is in these areas that we most urgently need the speed and scale that AI simulations and self-driving labs offer. 

Government can also help develop large, high-quality datasets such as those on which AlphaFold relied — insofar as safety concerns allow. Open datasets are public goods: They benefit many researchers, but researchers have little incentive to create them themselves. Government and philanthropic organizations can work with universities and companies to pinpoint seminal challenges in science that would benefit from access to powerful databases. 

Chemistry, for example, has one language that unites the field, which would seem to lend itself to easy analysis by AI models. But no one has properly aggregated data on molecular properties stored across dozens of databases, which keeps us from accessing insights into the field that would be within reach of AI models if we had a single source. Biology, meanwhile, lacks the known and calculable data that underlies physics or chemistry, with subfields like intrinsically disordered proteins still a mystery to us. It will therefore require a more concerted effort to understand — and even record — the data for an aggregated database.
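
The aggregation gap is easy to picture: the same molecule appears in different databases under the same SMILES key, but its properties are never joined. Below is a minimal sketch of the missing merge step; the database contents and property values are illustrative, not drawn from any real source.

```python
# Two hypothetical property databases, both keyed by SMILES strings —
# chemistry's shared molecular language.
db_a = {"CCO": {"boiling_point_c": 78.4}}                    # ethanol
db_b = {"CCO": {"logp": -0.31}, "c1ccccc1": {"logp": 2.13}}  # + benzene

def aggregate(*sources):
    """Merge per-molecule property records into one unified table."""
    merged = {}
    for source in sources:
        for smiles, props in source.items():
            merged.setdefault(smiles, {}).update(props)
    return merged

unified = aggregate(db_a, db_b)
print(unified["CCO"])  # ethanol's record now carries both properties
```

The merge itself is trivial; the hard, unglamorous work the essay calls for is curating dozens of real databases into consistent units and identifiers so that a join like this becomes possible at field scale.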

The road ahead to broad AI adoption in the sciences is long, and there is a lot we must get right: building the right databases, implementing the right regulations, mitigating biases in AI algorithms, and ensuring equal access across borders to resources like compute and GPUs. 

Nevertheless, this is a profoundly optimistic moment. Previous paradigm shifts in science, like the emergence of the scientific process or big data, have been inwardly focused — making science more precise, accurate, and methodical. AI, meanwhile, is expansive, allowing us to combine information in novel ways and to bring creativity and progress in the sciences to new heights.

BIO: Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/amp/
