healthcarereimagined

Envisioning healthcare for the 21st century


How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power – Nextgov

Posted by timmreardon on 06/01/2023
Posted in: Uncategorized.

By ANJANA SUSARLA, The Conversation | June 1, 2023 11:35 AM ET

Is a new federal agency necessary to regulate AI? Maybe.

OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

AN AGENCY TO REGULATE AI?

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from the use of AI in spam filters, for example.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

LICENSING AUDITORS, NOT COMPANIES

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.

Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

AI MONOPOLIES?

What was also missing in Altman’s testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.

Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Article link: https://www.nextgov.com/ideas/2023/06/how-can-congress-regulate-ai-erect-guardrails-ensure-accountability-and-address-monopolistic-power/386987/

White House Releases New AI National Frameworks, Educator Recommendations – Nextgov

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

The Biden administration unveiled a docket of new artificial intelligence regulatory efforts to promote responsible development, adoption and usage of increasingly smart systems.

The White House launched a series of new executive initiatives on fostering a culture of responsible artificial intelligence technology usage and practice within the U.S. on Tuesday, featuring a national strategic R&D plan and education objectives.

Following previous national frameworks, the three new announcements from the Biden administration act as guidelines to help codify responsible and effective AI algorithm usage, development and deployment, absent federal law.

“The federal government plays a critical role in this effort, including through smart investments in research and development (R&D) that promote responsible innovation and advance solutions to the challenges that other sectors will not address on their own,” the strategic plan executive summary reads.

The three announcements comprise a new roadmap of priority R&D areas in the AI sector for federal investments, a public request for information on how the federal government can best mitigate AI system risk, and an analysis documenting benefits and risks of AI technologies in education.

The R&D Strategic Plan, developed by the White House Office of Science and Technology Policy, is composed of several pillars to invest in safe-by-design AI systems that can be implemented in a social context. Those pillars include prioritizing long-term investments in responsible AI; developing methods for enhanced human-AI collaboration and understanding; thoroughly compiling a definitive list of ethical, legal and societal risks and benefits to AI system deployment; developing shared public datasets for broad AI algorithmic training; evaluating the needs of an AI-savvy workforce; expanding public and private sector partnerships; and establishing international collaborations on AI research efforts.

“The federal government plays a critical role in ensuring that technologies like AI are developed responsibly, and to serve the American people,” the plan’s fact sheet reads. “Federal investments over many decades have facilitated many key discoveries in AI innovations that power industry and society today, and federally funded research has sustained progress in AI throughout the field’s evolution.”

Complementing the R&D plan are new insights into how new AI technologies can impact classroom learning and the broader educational system. Authored by leadership in the Department of Education, the report recommends ways educators can leverage AI-powered systems (namely exam monitoring, writing assistance and voice recognition devices) to their benefit, while mitigating potential risks.

Countering bias and data exposure in these systems was a paramount discussion point, leading regulators to broadly recommend that all future education policies dealing with AI at the federal, state and local levels keep user needs, feedback and empowerment in mind.

“As protections are developed, we recommend that policies center people, not machines,” the recommendations read. “Teachers, learners and others need to retain their agency to decide what patterns mean and to choose courses of action.”

Education Department leadership also reiterated that AI technologies should not displace teachers.

“Some teachers worry that they may be replaced—to the contrary, the Department firmly rejects the idea that AI could replace teachers,” the recommendation states. 

The final AI announcement requests public input on a new National AI Strategy. The forthcoming guidance aims to build on existing Biden-Harris administration actions surrounding AI and machine learning to further chart the nation’s course into a safe and integrated future with AI technologies. 

“By developing a National AI Strategy, the federal government will provide a whole-of-society approach to AI,” the RFI background says. “The strategy will pay particular attention to recent and projected advances in AI, to make sure that the United States is responsive to the latest opportunities and challenges posed by AI, as well as the global changes that will arrive in the coming years.”

Comments from the public will be accepted until July 7, 2023. The questions officials pose address best practices for AI oversight, how AI language models can maintain secure software designs, how AI can strengthen civil rights and how AI can better identify vulnerabilities in critical infrastructure’s digital networks.

These comments reflect the broad goals of a forthcoming National AI Strategy that seeks to incorporate AI systems into a broad array of societal institutions, while simultaneously controlling for common risks.

On top of releasing new plans for more national AI technology oversight, the Biden administration will host a conversation with American workers today to hear concerns over automation and its broader economic impact. 

AI regulation has been a chief talking point across the federal government following the breakthrough prevalence of generative AI systems such as ChatGPT, as a lack of sweeping regulation haunts continued innovation in the AI/ML field.

Article link: https://www.nextgov.com/emerging-tech/2023/05/white-house-releases-new-ai-national-frameworks-educator-recommendations/386691/

Suddenly, everyone wants to talk about how to regulate AI – MIT Technology Review

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.


Plus: Meta’s new AI models for speech.

By Melissa Heikkilä May 23, 2023

This story originally appeared in The Algorithm, our weekly newsletter on AI.

It feels as though a switch has turned on in AI policy. For years, US legislators and American tech companies were reluctant to introduce—if not outright against—strict technology regulation. Now both have started begging for it.

Last week, OpenAI CEO Sam Altman appeared before a US Senate committee to talk about the risks and potential of AI language models. Altman, along with many senators, called for international standards for artificial intelligence. He also urged the US to regulate the technology and set up a new agency, much like the Food and Drug Administration, to regulate AI. 

For an AI policy nerd like myself, the Senate hearing was both encouraging and frustrating. Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and on to rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy. I just published a story looking at all the existing international efforts to regulate AI technology.

I’m not the only one who feels this way. 

“To suggest that Congress starts from zero just plays into the industry’s favorite narrative, which is that Congress is so far behind and doesn’t understand technology—how could they ever regulate us?” says Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University, and a former Hill staffer. 

In fact, politicians in the last Congress, which ran from January 2021 to January 2023, introduced a ton of legislation around AI. Lenhart put together this neat list of all the AI regulations proposed during that time. They cover everything from risk assessments to transparency to data protection. None of them made it to the president’s desk, but given that buzzy (or, to many, scary) new generative AI tools have captured Washington’s attention, Lenhart expects some of them to be revamped and make a reappearance in one form or another. 

Here are a few to keep an eye on. 

Algorithmic Accountability Act

This bill was introduced by Democrats in the US House and Senate in 2022, pre-ChatGPT, to address the tangible harms of automated decision-making systems, such as ones that denied people pain medications or rejected their mortgage applications.

The bill would require companies to do algorithmic impact and risk assessments, says Lenhart. It would also put the Federal Trade Commission in charge of regulating and enforcing rules around AI, and boost its staff numbers.

American Data Privacy Protection Act

This bipartisan bill was an attempt to regulate how companies collect and process data. It gained lots of momentum as a way to help women keep their personal health data safe after Roe v. Wade was overturned, but it failed to pass in time. The debate around the risks of generative AI could give it the added urgency to go further than last time. ADPPA would ban generative AI companies from collecting, processing, or transferring data in a discriminatory way. It would also give users more control over how companies use their data.

An AI agency

During the hearing, Altman and several senators suggested we need a new US agency to regulate AI. But I think this is a bit of a red herring. The US government needs more technical expertise and resources to regulate the tech, whether it be in a new agency or in a revamped existing one, Lenhart says. And more importantly, any regulator, new or old, needs the power to enforce the laws. 

“It’s easy to create an agency and not give it any powers,” Lenhart says. 

Democrats have tried to set up new protections with the Digital Platform Commission Act, the Data Protection Act, and the Online Privacy Act. But these attempts have failed, as most US bills without bipartisan support are doomed to do. 

What’s next?

Another tech-focused agency is likely on the way. Senators Lindsey Graham, a Republican, and Elizabeth Warren, a Democrat, are working together to create a new digital regulator that might also have the power to police and perhaps license social media companies. 

Democrat Chuck Schumer is also rallying the troops in the Senate to introduce a new bill that would tackle AI harms specifically. He has gathered bipartisan support to put together a comprehensive AI bill that would set up guardrails aimed at promoting responsible AI development. For example, companies might be required to allow external experts to audit their tech before it is released, and to give users and the government more information about their AI systems. 

And while Altman seems to have won the Senate Judiciary Committee over, leaders from the commerce committees in both the House and Senate need to be on board for a comprehensive approach to AI regulation to become law, Lenhart says. 

And it needs to happen fast, before people lose interest in generative AI.

“It’s gonna be tricky, but anything’s possible,” Lenhart says.

Deeper Learning

Meta’s new AI models can recognize and produce speech for more than 1,000 languages

Meta has built AI models that can recognize and produce speech for more than 1,000 languages—a tenfold increase on what’s currently available.

Why this matters: It’s a significant step towards preserving languages that are at risk of disappearing, the company says. There are around 7,000 languages in the world, but existing speech recognition models only cover approximately 100 languages comprehensively. This is because these kinds of models tend to require huge amounts of labeled training data, which is only available for a small number of languages, including English, Spanish, and Chinese. Read more from Rhiannon Williams here.

Bits and Bytes

Google and Apple’s photo apps still can’t find gorillas
Eight years ago, Google’s photo app mislabeled pictures of Black people as gorillas. The company prevented any pictures from being labeled as apes as a temporary fix. But years later, tech companies haven’t found a solution to the problem, despite big advancements in computer vision. (The New York Times)

Apple bans employees from using ChatGPT
It’s worried the chatbot might leak confidential company information. This is not an unreasonable concern, given that just a couple of months ago OpenAI had to pull ChatGPT offline because of a bug that leaked user chat history. (The Wall Street Journal) 

Here’s how AI will ruin work for everyone
Big Tech’s push to integrate AI into office tools will not spell the end of human labor. It’s the opposite: the easier work becomes, the more we will be expected to do. Or as Charlie Warzel writes, this AI boom is going to be less Skynet, more Bain & Company. (The Atlantic)

Does Bard know how many times “e” appears in “ketchup”?
This was a fun piece with a serious purpose: lifting the lid on how large language models work. Google’s chatbot Bard doesn’t know how many letters different words have. This is because instead of recognizing individual letters, these models form words using “tokens.” So for example, Bard would think the first letter in the word “ketchup” was “ket,” not “k.” (The Verge)
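
Bard’s tokenizer isn’t public, but the behavior is easy to reproduce with OpenAI’s open-source tiktoken library. A minimal sketch (the choice of tokenizer is an assumption, purely for illustration):

```python
# Illustrative only: Bard's tokenizer is not public, so this uses OpenAI's
# open-source tiktoken library to demonstrate the same effect.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("ketchup")
print(tokens)  # a short list of token IDs, not seven letters

# Print the chunk(s) the model actually "sees". Whether "ketchup" comes out
# as one token or several, the pieces are multi-character chunks, which is
# why counting individual letters is hard for a language model.
for t in tokens:
    print(enc.decode_single_token_bytes(t))
```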

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/05/23/1073526/suddenly-everyone-wants-to-talk-about-how-to-regulate-ai/amp/

House Veterans Affairs – Subcommittee on Technology Modernization Oversight Hearing

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

VA Lacks Goals to Assess Satisfaction With New EHR, Watchdog Finds – Nextgov

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

The report from the Government Accountability Office found that the absence of such goals has limited the agency’s ability to “objectively measure progress toward improving EHRM users’ satisfaction.”

The Department of Veterans Affairs has not established goals to assess user satisfaction with its new Oracle Cerner electronic health record system—compounding broader concerns about the department’s lack of adherence to leading change management practices—according to a report the Government Accountability Office released on Thursday. 

The review—which the watchdog noted was mandated by congressional report language “associated with the VA appropriations for fiscal years 2020 through 2022”—assessed clinician and staff satisfaction with the Oracle Cerner software, examined the department’s change management strategies for the system’s rollout and “identified and addressed EHR system issues.”

VA’s deployment of its new multi-billion dollar EHR system has been beset by performance and technical glitches, patient safety concerns and cost overruns since its inaugural rollout at the Mann-Grandstaff VA Medical Center in Spokane, Washington in 2020. This has included reports of veteran deaths associated with the software’s use, as well as a highly critical report issued last July by the VA Inspector General’s office that found that more than 11,000 veterans’ clinical orders at the Spokane medical center were routed to an “unknown queue” without notifying clinicians.

VA announced last month that it was delaying future deployments of the Oracle Cerner software until it “is confident that the new EHR is highly functioning at current sites and ready to deliver for veterans and VA clinicians at future sites.” The new system has been deployed at only five of VA’s 171 medical centers.

GAO’s report found that VA “has taken steps to obtain feedback on the performance and implementation” of the Oracle Cerner EHR system, including by contracting with an outside vendor in September 2022 to conduct surveys of users’ satisfaction with the software, but that the results showed that the vast majority of respondents “were not satisfied with the performance of the new system or the training for the new system.”

“For example, about 79 percent (1,640 of 2,066) of users disagreed or strongly disagreed that the system enabled quality care,” the report noted. “In addition, about 89 percent (1,852 of 2,074) of users disagreed or strongly disagreed that the system made them as efficient as possible.”

Despite conducting the survey, GAO found that VA “has not established targets (i.e., goals) to assess user satisfaction,” which has left the department “limited in its ability to objectively measure progress toward improving EHRM users’ satisfaction with the system.” 

The lack of targeted metrics for user satisfaction, the report noted, means that VA “will also lack a basis for determining when satisfaction has improved,” which “would help ensure that the system is not prematurely deployed to additional sites, which could risk patients’ safety.”

Concerns about VA’s oversight and accountability of the software modernization program also extended to the functionality of the new EHR system. While GAO noted that VA assessed the performance of the Oracle Cerner software at two deployment sites, it found that “as of January 2023, it had not conducted an independent operational assessment, as originally planned and consistent with leading practices for software verification and validation.”

“Without such an independent assessment, VA will be limited in its ability to (1) validate that the system is operationally suitable and effective, and (2) identify, track and resolve key operational issues,” the report added.

Additionally, GAO said that VA “did not adequately identify and address system issues,” including ensuring “that trouble tickets for the new EHR system were resolved within timeliness goals.” While the department has worked with an outside contractor “to reduce the number of tickets that were over 45 days old,” the review said that “the overall number of open tickets has steadily increased since 2020.”

“Until the program fully implements the leading practices for change management, future deployments risk continuing change management challenges that can hinder effective use of the new electronic health record system,” the report said. 

GAO offered ten recommendations to VA, including calling for the department “to address change management, user satisfaction, system trouble ticket and independent operational assessment deficiencies.” VA concurred with the watchdog’s recommendations.

Article link: https://www.nextgov.com/it-modernization/2023/05/va-lacks-goals-assess-satisfaction-new-ehr-watchdog-finds/386629/

The Air Gap Is Dead. It’s Time for Industrial Organisations to Embrace the Cloud

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

By Alex Nehmy

The air gap is dead.

The notion of air-gapping computer systems from the primary corporate environment and the internet is antiquated, steeped more in fairy-tale romance than reality.

An air gap consists of two networks physically separated by nothing but air. The Australian Cyber Security Centre defines an air gap as “A network security measure employed on one or more computers to ensure the network is physically isolated from any other network. This makes the isolated network secure, as it doesn’t connect to unsecured networks like the public internet.”

Air gaps make great sense from a cybersecurity perspective—data and threats cannot traverse from one network to another. An air-gapped network is akin to an island, safe, secure, and isolated from other networks that have lesser security and more significant threats. Hence air gaps are used in extreme risk or secretive environments such as nuclear power generation and highly classified defence systems.

However, cybersecurity doesn’t operate in a vacuum. It exists to empower an organisation’s digital transformation objectives while managing cyber risk. Cybersecurity controls are often inherently at odds with the usability of IT systems: the greater the controls, the less usable and business-friendly the outcome. Air gaps restrict communication, and hence they do not meet business requirements for modern, dynamic and flexible communications networks.

IT and OT Are More Connected Than Ever

The greatest misconception these days is that critical infrastructure organisations still have an air gap. However, the overwhelming majority of industrial operational technology (OT) environments are indeed not air-gapped; they’re physically connected to IT and logically separated by a firewall. As these critical infrastructure organisations are undergoing their own digital transformations, they are increasingly reliant on data from the industrial OT environment in order to run their business systems in IT. In fact, IT and OT are now more connected than ever. An air gap does not support this business-critical connectivity.

Let’s take the case of the Colonial Pipeline ransomware incident. The DarkSide cybercrime group infected the IT environment with ransomware, effectively locking key business systems, including the billing system. The billing system relies on data from Colonial Pipeline’s OT environment to measure gas usage and bill customers. This data exchange from OT into IT is key to the financial operation of the business. An air gap would break this business-critical communication and therefore is not feasible.

As the ransomware rendered the billing system inoperable, Colonial Pipeline took the unprecedented step of disabling the gas pipeline, which services the southeastern United States, resulting in the most materially significant cyberattack in United States history.

OT Has Converged with IT, While IT Has Converged with the Cloud

Just as IT and OT have converged and can no longer be separated, so too has IT converged with the cloud. Remote working collaboration tools, cloud-based business management systems, and cloud data centres are the standard for IT in a post-pandemic world. In fact, for many modern organisations, the cloud is inseparable from IT. They have wholly merged. 

Businesses are striving for more agile operations, lower costs, and greater customer satisfaction, and the cloud has been integral in many IT businesses achieving this. 

In comparison to IT, OT is the last bastion of on-premises computing. There are no technical or cybersecurity reasons why the cloud cannot be used to transform the operations of OT. The primary limitation is a cultural one. 

The cloud offers a massively scalable platform with efficiencies and capabilities that are difficult to match with in-house data centres. And OT is the literal heart of any industrial business. Why wouldn’t a company want to embrace the benefits of the cloud to extract maximum value from its most important business systems and data? There are untold benefits awaiting.

Using Risk to Guide Cloud Usage

How can we begin to move the needle on cultural change within OT to embrace the cloud? A risk-based approach, combined with a focus on delivering transformational business outcomes, is our best bet. 

When it comes to risk, there are two key types of data within OT, each with its own risk profile. They are primary control system data and telemetry data from internet of things (IoT) devices in the field.

Primary control system data has the ability to control or directly affect the OT environment and as a result, it is high risk. For example, in electricity distribution, it can be used to literally turn the power on or off, potentially resulting in life-or-death situations for both employees and critical care customers.

Alternatively, IoT telemetry is merely providing a real-time view into the operational environment from IoT sensors in the field and does not have control of the critical infrastructure. It is, therefore, a much lower risk. The IoT field-based sensors are collecting data about temperature, vibration, pressure or almost anything that can be measured to provide a real-time picture of how the physical world is operating. This data, when combined with the power of the cloud, will drive significant business outcomes that, to date, have not been realised. 

There is a big difference in the risk posed by each of these data sources, and as such, the data should be handled differently based on risk. Primary control system data will likely remain on-premises for the foreseeable future, while IoT telemetry is low-risk enough to be handled in the cloud. Indeed, the sheer volume of IoT data and the insights available through machine learning will necessitate the use of cloud computing. 
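
That risk split maps naturally onto a routing rule at the OT boundary. Below is a minimal Python sketch, with hypothetical names, of classifying messages by risk and forwarding only low-risk telemetry to the cloud:

```python
# A sketch, not a production gateway: primary control traffic stays
# on-premises, while low-risk IoT telemetry may be sent to the cloud.
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PRIMARY_CONTROL = "primary_control"  # can actuate the physical process: high risk
    IOT_TELEMETRY = "iot_telemetry"      # read-only sensor readings: low risk

@dataclass
class OTMessage:
    source: str
    risk: RiskClass
    payload: dict

def route(msg: OTMessage) -> str:
    """Pick a destination for a message based on its risk class."""
    if msg.risk is RiskClass.PRIMARY_CONTROL:
        return "on_prem_historian"      # never leaves the OT network
    return "cloud_telemetry_topic"      # eligible for cloud ingestion

reading = OTMessage("pump-17/vibration", RiskClass.IOT_TELEMETRY, {"mm_per_s": 4.2})
print(route(reading))  # -> cloud_telemetry_topic
```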

Embracing the Benefits of Cloud Computing for Industrial Environments

The benefits of embracing the cloud for low-risk data, such as IoT telemetry, are numerous:

Real-Time Visibility for Better Decision-Making

IoT sensors in the field generate a constant stream of data, which provides real-time visibility into industrial operations, whether that’s monitoring manufactured goods for defects or the voltage of electricity distribution networks. 

Rich, real-time data allows for greater visibility and understanding of industrial environments, leading to better decision-making and increased operational efficiencies.

Predictive Maintenance for Higher Availability

Predictive maintenance uses IoT telemetry to monitor physical assets in the field for signs of abnormal behaviour that may indicate the asset is about to fail. For example, in manufacturing, knowing when critically important production machinery is about to fail allows the asset to be fixed just before failure. This results in a decrease in unplanned downtime, increasing plant efficiency and maximising the output of operational systems. 
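
As a rough illustration of the underlying mechanics (a rolling z-score test; the window size and alarm threshold are assumptions, not a production design):

```python
# Minimal sketch: flag an asset for maintenance when its vibration telemetry
# drifts well outside its recent baseline.
from collections import deque
from statistics import mean, stdev

WINDOW, Z_THRESHOLD = 100, 3.0
history = deque(maxlen=WINDOW)  # rolling baseline of recent readings

def check_reading(vibration_mm_s: float) -> bool:
    """Return True if this reading looks anomalous against the baseline."""
    anomalous = False
    if len(history) >= 30:  # wait for enough samples to form a stable baseline
        mu, sigma = mean(history), stdev(history)
        anomalous = sigma > 0 and abs(vibration_mm_s - mu) / sigma > Z_THRESHOLD
    history.append(vibration_mm_s)
    return anomalous

# Example: feed readings from an IoT sensor stream and schedule a repair
# (schedule_maintenance is a hypothetical downstream call):
# if check_reading(reading): schedule_maintenance("pump-17")
```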

Better Customer Outcomes

Ultimately, embracing the benefits of cloud computing to drive the efficiency and availability of industrial operations has a flow-on effect for customers, reducing costs and increasing responsiveness.

Cybersecurity Uplift

One final benefit of embracing the cloud is increased cybersecurity and OT system availability. We know that cyberthreats to OT environments are increasing, and an incident within an OT environment (or in the case of Colonial Pipeline, within an IT environment) can affect the availability of business-critical OT systems and services. 

Cloud-enhanced cybersecurity systems provide an immediate maturity uplift to best secure these critical operational environments. Should a threat actor gain access to OT, their actions cannot be predicted or controlled and are likely to result in unplanned outages and impact industrial business operations. 

The data used by these next-generation security systems is primarily network and endpoint telemetry, also known as metadata, which is akin to IoT telemetry and is equally low-risk. 

Securing an OT environment with cloud-enhanced cybersecurity systems reduces the likelihood of malicious activities taking place, further protecting the availability of key OT systems.

A Secure OT Environment Is Also an Available OT Environment 

The digital transformation that IT has realised through embracing the cloud is also waiting for OT. More efficient operations, better insights and decision-making, and higher availability of key industrial systems are just a few of the benefits. 

It’s time for OT to move past any cultural inhibitors and use risk and business value as drivers for their cloud transformation.

Article link: https://www.paloaltonetworks.com/cybersecurity-perspectives/the-air-gap-is-dead

Oracle cuts 3,000 jobs at electronic healthcare records firm Cerner

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

Oracle has reportedly laid off more than 3,000 employees at the electronic healthcare records firm Cerner, which it acquired for $28.4 billion, according to an Insider report.

Oracle paused raises and promotions and laid off thousands of employees in the unit as recently as this month after the acquisition closed in June last year.

The Cerner acquisition had brought in about 28,000 employees.

Oracle has not issued raises or granted promotions, and, earlier this year, announced that workers shouldn’t expect any through 2023.

Layoffs affected workers across teams, including marketing, engineering, accounting, legal, and product.

Oracle did not comment on the report.

The Cloud major is developing a national health records database.

According to Oracle’s Chairman and Chief Technology Officer Larry Ellison, the patient data would be anonymous until individuals give consent to share their information.

Cerner is a provider of digital information systems used within hospitals and health systems to enable medical professionals to deliver better healthcare to individual patients and communities.

Oracle’s new health records database will also involve the patient engagement system the company has been developing throughout the pandemic.

The Cloud major is also working on the patient engagement system’s ability to collect information from wearables and home diagnostic devices.

Article link: https://infotechlead.com/cloud/oracle-cuts-3000-jobs-at-electronic-healthcare-records-firm-cerner-78383

VA Puts Oracle Cerner on a Short Leash in $10B Health Records Contract – Nextgov

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

By ADAM MAZMANIAN | MAY 16, 2023 06:30 PM ET

The agency extended the contract for its EHR provider by one year, and put performance conditions in place.

The Department of Veterans Affairs extended its $10 billion contract with Oracle Cerner to deliver its electronic health record software, but added some accountability measures designed to improve performance on the stalled and troubled modernization program.

Dr. Neil Evans, who currently leads the Office of Electronic Health Records Modernization on an interim basis, said in a statement that the new contract “now includes stronger performance metrics and expectations” over multiple areas including reliability, responsiveness, interoperability with outside health care systems and with other VA applications.

The extension was inked, as scheduled, at the conclusion of the contract’s fifth year of a possible 10. VA opted to extend the contract for just a single year, rather than the full five-year period allowable under the terms of the 2018 agreement.

“VA will have the opportunity to review our progress and renegotiate again in a year if need be,” Evans said in a statement. 

The new deal includes opportunities for penalties and redress if Oracle Cerner misses its targets across 28 performance metrics, including uptime, help desk speed and effectiveness, and more.

The news comes as VA is in the midst of a pause of new deployments of the Oracle Cerner system, which currently is in service in just five clinical settings.

“The system has not delivered for veterans or VA clinicians to date, but we are stopping at nothing to get this right—and we will deliver the efficient, well-functioning system that veterans and clinicians deserve,” Evans said.

Top Republicans on the House Veterans Affairs Committee would certainly agree that the Oracle Cerner system isn’t working as hoped, but they have doubts that the new contract terms are the answer.

“While we appreciate that VA is starting to build accountability into the Oracle Cerner contract, the main questions we have about what will be different going forward remain unanswered,” Reps. Mike Bost (R-Ill.) and Matt Rosendale (R-Mont.) said in a joint statement. “We need to see how the division of labor between Oracle, VA and other companies is going to change and translate into better outcomes for veterans and savings for taxpayers. This shorter-term contract is an encouraging first step, but veterans and taxpayers need more than a wink and a nod that the project will improve.” 

Bost, who chairs the Veterans Affairs Committee, and Rosendale, who leads a subcommittee in charge of oversight of VA tech, have introduced legislation to force Oracle Cerner to hit specific performance targets consistently under the threat of canceling the program and reverting to VA’s homegrown electronic health records system VistA.

Article link: https://www.nextgov.com/emerging-tech/2023/05/va-puts-oracle-cerner-short-leash-10b-health-records-contract/386450/

Summary of National Cybersecurity Strategy with Similarity Analysis to Executive Order 14028, ‘Improving the Nation’s Cybersecurity’ – IDA

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

March 2023
IDA document: D-33439
FFRDC: Systems and Analyses Center 
Type: Documents 
Division: Information Technology and Systems Division 


This document provides a tabularized and shortened version of the National Cybersecurity Strategy (March 2023) along with analytical products that elucidate key themes and terms in the strategy, as well as an analysis of similarities to the May 2021 Executive Order about cybersecurity.

Article link: https://www.ida.org/research-and-publications/publications/all/s/su/summary-of-national-cybersecurity-strategy-with-similarity-analysis-to-executive-order-14028

Quantum Cryptography Market to Exceed $3B by 2028 – Nextgov

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

By FRANK KONKEL | MAY 15, 2023 02:58 PM ET

The growth reflects rising concern about the potential threat posed by fully realized quantum computers.

The global quantum cryptography market will be worth an estimated $500 million in 2023, but—much like the rapidly evolving technology itself—the market is expected to grow rapidly over the next half-decade, according to a forecast issued by Ireland-based research firm MarketsandMarkets.

Issued in May, the forecast expects the quantum cryptography market to increase at a compound annual growth rate of more than 40% over the next five years, topping $3 billion by 2028. The forecast defines quantum cryptography as “a method of securing communication that uses the principles of quantum mechanics” to secure communication channels and data.
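
As a quick sanity check on the forecast’s arithmetic (the 2023 base and the “more than 40%” CAGR come from the article; the exact rates below are assumptions):

```python
# Compound the 2023 base over five years at two assumed growth rates.
base_2023 = 0.5  # estimated 2023 market size, in $ billions

for cagr in (0.40, 0.435):
    projected_2028 = base_2023 * (1 + cagr) ** 5  # five years of compounding
    print(f"CAGR {cagr:.1%}: ${projected_2028:.2f}B in 2028")

# 40.0% compounds to about $2.69B; a CAGR of roughly 43.5% is needed to top
# the $3 billion figure cited, consistent with "more than 40%".
```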

While the market for quantum cryptography products is expected to grow rapidly in the coming years, it’s already a highly competitive arena, in part due to the technical complexity required to commercialize the technology. The market includes companies that develop quantum standards, quantum random number generators and quantum key distribution systems.

The market’s growth mirrors interest in quantum cryptography—and quantum computing in general—across the government. Last November, the Office of Management and Budget released a memo outlining the need for federal agencies to begin migrating to post-quantum cryptography to prepare for the onset of commercialized quantum computers. The Government Accountability Office, which acts as the investigative arm of Congress, recently offered fuel to the fire of concern over quantum computers, stating that true quantum computers could break traditional methods of encryption commonly in use by industry and government agencies.

Not coincidentally, the government segment is expected to account for the largest share of the quantum cryptography market over the next five years.

“With the increasing use of mobility, government bodies across the globe have progressively started using mobile devices to enhance workers’ productivity and improve the functioning of public sector departments,” the forecast states. “They must work on critical information, intelligence reports and other confidential data. It can help protect sensitive data from hackers and provide a secure platform for conducting transactions and exchanging information.”

Article link: https://www.nextgov.com/cybersecurity/2023/05/quantum-cryptography-market-exceed-3-billion-2028/386355/
