Archives
All posts by timmreardon

The report from the Government Accountability Office found that the absence of such goals has limited the agency’s ability to “objectively measure progress toward improving EHRM users’ satisfaction.”
The Department of Veterans Affairs has not established goals to assess user satisfaction with its new Oracle Cerner electronic health record system—compounding broader concerns about the department’s lack of adherence to leading change management practices—according to a report the Government Accountability Office released on Thursday.
The review—which the watchdog noted was mandated by congressional report language “associated with the VA appropriations for fiscal years 2020 through 2022”—assessed clinician and staff satisfaction with the Oracle Cerner software, examined the department’s change management strategies for the system’s rollout and evaluated how it “identified and addressed EHR system issues.”
VA’s deployment of its new multibillion-dollar EHR system has been beset by performance and technical glitches, patient safety concerns and cost overruns since its inaugural rollout at the Mann-Grandstaff VA Medical Center in Spokane, Washington, in 2020. This has included reports of veteran deaths associated with the software’s use, as well as a highly critical report issued last July by the VA Inspector General’s office, which found that more than 11,000 veterans’ clinical orders at the Spokane medical center were routed to an “unknown queue” without notifying clinicians.
VA announced last month that it was delaying future deployments of the Oracle Cerner software until it “is confident that the new EHR is highly functioning at current sites and ready to deliver for veterans and VA clinicians at future sites.” The new system has been deployed at only five of VA’s 171 medical centers.
GAO’s report found that VA “has taken steps to obtain feedback on the performance and implementation” of the Oracle Cerner EHR system, including by contracting with an outside vendor in September 2022 to conduct surveys of users’ satisfaction with the software, but that the results showed that the vast majority of respondents “were not satisfied with the performance of the new system or the training for the new system.”
“For example, about 79 percent (1,640 of 2,066) of users disagreed or strongly disagreed that the system enabled quality care,” the report noted. “In addition, about 89 percent (1,852 of 2,074) of users disagreed or strongly disagreed that the system made them as efficient as possible.”
Despite conducting the survey, GAO found that VA “has not established targets (i.e., goals) to assess user satisfaction,” which has left the department “limited in its ability to objectively measure progress toward improving EHRM users’ satisfaction with the system.”
The lack of targeted metrics for user satisfaction, the report noted, means that VA “will also lack a basis for determining when satisfaction has improved,” which “would help ensure that the system is not prematurely deployed to additional sites, which could risk patients’ safety.”
Concerns about VA’s oversight and accountability of the software modernization program also extended to the functionality of the new EHR system. While GAO noted that VA assessed the performance of the Oracle Cerner software at two deployment sites, it found that “as of January 2023, it had not conducted an independent operational assessment, as originally planned and consistent with leading practices for software verification and validation.”
“Without such an independent assessment, VA will be limited in its ability to (1) validate that the system is operationally suitable and effective, and (2) identify, track and resolve key operational issues,” the report added.
Additionally, GAO said that VA “did not adequately identify and address system issues,” including ensuring “that trouble tickets for the new EHR system were resolved within timeliness goals.” While the department has worked with an outside contractor “to reduce the number of tickets that were over 45 days old,” the review said that “the overall number of open tickets has steadily increased since 2020.”
“Until the program fully implements the leading practices for change management, future deployments risk continuing change management challenges that can hinder effective use of the new electronic health record system,” the report said.
GAO offered ten recommendations to VA, including calling for the department “to address change management, user satisfaction, system trouble ticket and independent operational assessment deficiencies.” VA concurred with the watchdog’s recommendations.
Article link: https://www.nextgov.com/it-modernization/2023/05/va-lacks-goals-assess-satisfaction-new-ehr-watchdog-finds/386629/
By Alex Nehmy
The air gap is dead.
The notion of air-gapping computer systems from the primary corporate environment and the internet is antiquated, steeped more in fairy-tale romance than reality.
An air gap separates two networks so that the only thing between them is, literally, air. The Australian Cyber Security Centre defines an air gap as “A network security measure employed on one or more computers to ensure the network is physically isolated from any other network. This makes the isolated network secure, as it doesn’t connect to unsecured networks like the public internet.”
Air gaps make great sense from a cybersecurity perspective—data and threats cannot traverse from one network to another. An air-gapped network is akin to an island, safe, secure, and isolated from other networks that have lesser security and more significant threats. Hence air gaps are used in extreme risk or secretive environments such as nuclear power generation and highly classified defence systems.
However, cybersecurity doesn’t operate in a vacuum. It exists to empower an organisation’s digital transformation objectives while managing cyber risk. Cybersecurity controls are often inherently at odds with the useability of IT systems. The greater the cybersecurity controls, the less usable and business-friendly the outcome. Air gaps restrict communication, and hence, they do not meet business requirements for modern, dynamic, and flexible communications networks.
IT and OT Are More Connected Than Ever
The greatest misconception these days is that critical infrastructure organisations still have an air gap. However, the overwhelming majority of industrial operational technology (OT) environments are not air-gapped; they’re physically connected to IT and logically separated by a firewall. As these critical infrastructure organisations undergo their own digital transformations, they are increasingly reliant on data from the industrial OT environment to run their business systems in IT. In fact, IT and OT are now more connected than ever. An air gap does not support this business-critical connectivity.
Let’s take the case of the Colonial Pipeline ransomware incident. The Darkside cybercrime group infected the IT environment with ransomware, effectively locking key business systems, including the billing system. The billing system relies on data from Colonial Pipeline’s OT environment to measure gas usage and bill customers. This data exchange from OT into IT is key to the financial operation of the business. An air gap would break this business-critical communication and therefore is not feasible.
As the ransomware rendered the billing system inoperable, Colonial Pipeline took the unprecedented step of disabling the gas pipeline, which services the southeastern United States, resulting in the most materially significant cyberattack in United States history.
OT Has Converged with IT, While IT Has Converged with the Cloud
Just as IT and OT have converged and can no longer be separated, so too has IT converged with the cloud. Remote working collaboration tools, cloud-based business management systems, and cloud data centres are the standard for IT in a post-pandemic world. In fact, for many modern organisations, the cloud is inseparable from IT. They have wholly merged.
Businesses are striving for more agile operations, lower costs, and greater customer satisfaction, and the cloud has been integral in many IT businesses achieving this.
In comparison to IT, OT is the last bastion of on-premises computing. There are no technical or cybersecurity reasons why the cloud cannot be used to transform the operations of OT. The primary limitation is a cultural one.
The cloud offers a massively scalable platform with efficiencies and capabilities that are difficult to match with in-house data centres. And OT is the literal heart of any industrial business. Why wouldn’t a company want to embrace the benefits of the cloud to extract maximum value from their most important business systems and data? There are untold benefits awaiting ….
Using Risk to Guide Cloud Usage
How can we begin to move the needle on cultural change within OT to embrace the cloud? A risk-based approach, combined with a focus on delivering transformational business outcomes, is our best bet.
When it comes to risk, there are two key types of data within OT, each with its own risk profile. They are primary control system data and telemetry data from internet of things (IoT) devices in the field.
Primary control system data has the ability to control or directly affect the OT environment and as a result, it is high risk. For example, in electricity distribution, it can be used to literally turn the power on or off, potentially resulting in life-or-death situations for both employees and critical care customers.
Alternatively, IoT telemetry is merely providing a real-time view into the operational environment from IoT sensors in the field and does not have control of the critical infrastructure. It is, therefore, a much lower risk. The IoT field-based sensors are collecting data about temperature, vibration, pressure or almost anything that can be measured to provide a real-time picture of how the physical world is operating. This data, when combined with the power of the cloud, will drive significant business outcomes that, to date, have not been realised.
There is a big difference in the risk posed by each of these data sources, and as such, the data should be handled differently based on risk. Primary control system data will likely remain on-premises for the foreseeable future, while IoT telemetry is low-risk enough to be handled in the cloud. Indeed, the sheer volume of IoT data and the insights available through machine learning will necessitate the use of cloud computing.
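To make that split concrete, the sketch below shows one way an OT data pipeline might classify records by risk and route only low-risk IoT telemetry to the cloud, keeping control-system data on-premises. The record kinds, tag names and destinations are hypothetical illustrations, not details from the article.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"   # primary control system data: stays on-premises
    LOW = "low"     # IoT telemetry: eligible for cloud analytics

# Hypothetical record kinds, for illustration only
CONTROL_KINDS = {"setpoint", "breaker_command", "valve_actuation"}
TELEMETRY_KINDS = {"temperature", "vibration", "pressure"}

@dataclass
class OTRecord:
    source: str   # e.g. "feeder-12/sensor-3"
    kind: str     # e.g. "vibration" or "breaker_command"
    value: float

def classify(record: OTRecord) -> RiskLevel:
    """Classify a record by the risk of exposing it outside the plant."""
    if record.kind in CONTROL_KINDS:
        return RiskLevel.HIGH
    if record.kind in TELEMETRY_KINDS:
        return RiskLevel.LOW
    return RiskLevel.HIGH   # unknown data is treated conservatively

def route(record: OTRecord) -> str:
    """Send telemetry to cloud analytics; keep control data on site."""
    if classify(record) is RiskLevel.LOW:
        return "cloud_historian"    # hypothetical cloud analytics endpoint
    return "onprem_historian"       # hypothetical on-site data store

if __name__ == "__main__":
    print(route(OTRecord("feeder-12/sensor-3", "vibration", 0.42)))        # cloud_historian
    print(route(OTRecord("substation-7/relay-1", "breaker_command", 1.0))) # onprem_historian
```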
Embracing the Benefits of Cloud Computing for Industrial Environments
The benefits of embracing the cloud for low-risk data, such as IoT telemetry, are numerous:
Real-Time Visibility for Better Decision-Making
IoT sensors in the field generate a constant stream of data, which provides real-time visibility into industrial operations, whether that’s monitoring manufactured goods for defects or the voltage of electricity distribution networks.
Rich, real-time data allows for greater visibility and understanding of industrial environments, leading to better decision-making and increased operational efficiencies.
Predictive Maintenance for Higher Availability
Predictive maintenance uses IoT telemetry to monitor physical assets in the field for signs of abnormal behaviour that may indicate the asset is about to fail. For example, in manufacturing, knowing when critically important production machinery is about to fail allows the asset to be fixed just before failure. This results in a decrease in unplanned downtime, increasing plant efficiency and maximising the output of operational systems.
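As a rough illustration of how this works, one of the simplest ways to flag abnormal behaviour in sensor telemetry is a rolling z-score over recent readings. The window size and threshold below are assumed values for the sketch, not a production recipe.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    readings: iterable of floats, e.g. vibration amplitude from an IoT sensor
    window: number of recent samples used as the baseline (assumed value)
    z_threshold: deviations beyond this many standard deviations are flagged
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                alerts.append((i, value))   # candidate sign of impending failure
        history.append(value)
    return alerts

if __name__ == "__main__":
    import random
    random.seed(1)
    normal = [random.gauss(1.0, 0.05) for _ in range(200)]
    degrading = [random.gauss(1.6, 0.2) for _ in range(20)]   # e.g. a bearing starting to fail
    print(detect_anomalies(normal + degrading))
```

In practice the statistics would typically run in the cloud over telemetry from thousands of assets, with maintenance raised when an asset drifts outside its learned baseline.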
Better Customer Outcomes
Ultimately, embracing the benefits of cloud computing to drive the efficiency and availability of industrial operations has a flow-on effect for the customer by reducing costs and increasing responsiveness.
Cybersecurity Uplift
One final benefit of embracing the cloud is increased cybersecurity and OT system availability. We know that cyberthreats to OT environments are increasing, and an incident within an OT environment (or in the case of Colonial Pipeline, within an IT environment) can affect the availability of business-critical OT systems and services.
Cloud-enhanced cybersecurity systems provide an immediate maturity uplift to best secure these critical operational environments. Should a threat actor gain access to OT, their actions cannot be predicted or controlled and are likely to result in unplanned outages and impact industrial business operations.
The data used by these next-generation security systems is primarily network and endpoint telemetry, also known as metadata, which is akin to IoT telemetry and is equally low-risk.
Securing an OT environment with cloud-enhanced cybersecurity systems reduces the likelihood of malicious activities taking place, further protecting the availability of key OT systems.
A Secure OT Environment Is Also an Available OT Environment
The digital transformation that IT has realised through embracing the cloud is also waiting for OT. More efficient operations, better insights and decision-making, and higher availability of key industrial systems are just a few of the benefits.
It’s time for OT to move past any cultural inhibitors and use risk and business value as drivers for their cloud transformation.
Article link: https://www.paloaltonetworks.com/cybersecurity-perspectives/the-air-gap-is-dead
Oracle has reportedly laid off more than 3,000 employees at the electronic healthcare records firm Cerner, which it acquired for $28.4 billion, according to an Insider report.

Oracle paused raises and promotions and laid off thousands of employees in the unit as recently as this month, after the acquisition closed in June last year.
The Cerner acquisition had brought in about 28,000 employees.
Oracle has not issued raises or granted promotions, and, earlier this year, announced that workers shouldn’t expect any through 2023.
Layoffs affected workers across teams, including marketing, engineering, accounting, legal, and product.
Oracle did not comment on the report.
The Cloud major is developing a national health records database.
According to Oracle’s Chairman and Chief Technology Officer Larry Ellison, the patient data would be anonymous until individuals give consent to share their information.
Cerner is a provider of digital information systems used within hospitals and health systems to enable medical professionals to deliver better healthcare to individual patients and communities.
Oracle’s new health records database will also involve the patient engagement system the company has been developing throughout the pandemic.
The Cloud major is also working on the patient engagement system’s ability to collect information from wearables and home diagnostic devices.
Article link: https://infotechlead.com/cloud/oracle-cuts-3000-jobs-at-electronic-healthcare-records-firm-cerner-78383

By ADAM MAZMANIAN | MAY 16, 2023 06:30 PM ET
The agency extended the contract for its EHR provider by one year, and put performance conditions in place.
The Department of Veterans Affairs extended its $10 billion contract with Oracle Cerner to deliver its electronic health record software, but added some accountability measures designed to improve performance on the stalled and troubled modernization program.
Dr. Neil Evans, who currently leads the Office of Electronic Health Records Modernization on an interim basis, said in a statement that the new contract “now includes stronger performance metrics and expectations” over multiple areas including reliability, responsiveness, interoperability with outside health care systems and with other VA applications.
The extension was inked, as scheduled, at the conclusion of the contract’s fifth year of a possible 10. VA opted to extend the contract for just a single year, rather than the full five-year period allowable under the terms of the 2018 agreement.
“VA will have the opportunity to review our progress and renegotiate again in a year if need be,” Evans said in a statement.
The new deal includes opportunities for penalties and redress if Oracle Cerner misses its targets across 28 performance metrics, including uptime, help desk speed and effectiveness, and more.
The news comes as VA is in the midst of a pause of new deployments of the Oracle Cerner system, which currently is in service in just five clinical settings.
“The system has not delivered for veterans or VA clinicians to date, but we are stopping at nothing to get this right—and we will deliver the efficient, well-functioning system that veterans and clinicians deserve,” Evans said.
Top Republicans on the House Veterans Affairs Committee would certainly agree that the Oracle Cerner system isn’t working as hoped, but they have doubts that the new contract terms are the answer.
“While we appreciate that VA is starting to build accountability into the Oracle Cerner contract, the main questions we have about what will be different going forward remain unanswered,” Reps. Mike Bost (R-Ill.) and Matt Rosendale (R-Mont.) said in a joint statement. “We need to see how the division of labor between Oracle, VA and other companies is going to change and translate into better outcomes for veterans and savings for taxpayers. This shorter-term contract is an encouraging first step, but veterans and taxpayers need more than a wink and a nod that the project will improve.”
Bost, who chairs the Veterans Affairs Committee, and Rosendale, who leads a subcommittee in charge of oversight of VA tech, have introduced legislation to force Oracle Cerner to hit specific performance targets consistently under the threat of canceling the program and reverting to VA’s homegrown electronic health records system VistA.
Article link: https://www.nextgov.com/emerging-tech/2023/05/va-puts-oracle-cerner-short-leash-10b-health-records-contract/386450/
March, 2023
IDA document: D-33439
FFRDC: Systems and Analyses Center
Type: Documents
Division: Information Technology and Systems Division
This document provides a tabularized and shortened version of the National Cybersecurity Strategy (March 2023), along with analytical products that elucidate key themes and terms in the strategy, as well as an analysis of similarities to the May 2021 Executive Order on cybersecurity.

By FRANK KONKEL | MAY 15, 2023 02:58 PM ET
The growth reflects rising concern about the potential threat posed by fully realized quantum computers.
The global quantum cryptography market will be worth an estimated $500 million in 2023, but—much like the rapidly evolving technology itself—the market is expected to grow rapidly over the next half-decade, according to a forecast issued by Ireland-based research firm MarketsandMarkets.
Issued in May, the forecast expects the quantum cryptography market to increase at a compound annual growth rate of more than 40% over the next five years, topping $3 billion by 2028. The forecast defines quantum cryptography as “a method of securing communication that uses the principles of quantum mechanics” to secure communication channels and data.
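For readers who want to check the arithmetic, the figures are consistent with the standard compound annual growth rate formula, CAGR = (end / start)^(1 / years) − 1. A quick sketch using rounded numbers from the forecast:

```python
# Compound annual growth rate implied by the forecast (rounded figures)
start_value = 0.5e9   # ~$500 million estimated for 2023
end_value = 3.0e9     # ~$3 billion forecast for 2028
years = 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # about 43%, i.e. "more than 40%"

# Or run it forward: project the 2023 figure at a 43% CAGR
projected_2028 = start_value * (1 + 0.43) ** years
print(f"Projected 2028 market: ${projected_2028 / 1e9:.2f} billion")
```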
While the market for quantum cryptography products is expected to grow rapidly in the coming years, it’s already a highly competitive arena, in part due to the technical complexity required to commercialize the technology. The market includes companies that develop quantum standards, quantum random number generators and quantum key distribution systems.
The market’s growth mirrors interest in quantum cryptography—and quantum computing in general—across the government. Last November, the Office of Management and Budget released a memo outlining the need for federal agencies to begin migrating to post-quantum cryptography to prepare for the onset of commercialized quantum computers. The Government Accountability Office, which acts as the investigative arm of Congress, recently offered fuel to the fire of concern over quantum computers, stating that true quantum computers could break traditional methods of encryption commonly in use by industry and government agencies.
Not coincidentally, the government segment is expected to account for the largest share of the quantum cryptography market over the next five years.
“With the increasing use of mobility, government bodies across the globe have progressively started using mobile devices to enhance workers’ productivity and improve the functioning of public sector departments,” the forecast states. “They must work on critical information, intelligence reports and other confidential data. It can help protect sensitive data from hackers and provide a secure platform for conducting transactions and exchanging information.”
Article link: https://www.nextgov.com/cybersecurity/2023/05/quantum-cryptography-market-exceed-3-billion-2028/386355/
Delos Prime May 13, 2023

In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models. The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.
Any model made available in the EU without first passing extensive, and expensive, licensing would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Open-source developers and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
If enacted, enforcement would be out of the hands of EU member states. Under the AI Act, third parties could sue national governments to compel fines. The act has extraterritorial jurisdiction. A European government could be compelled by third parties to seek conflict with American developers and businesses.
The Amended AI Act
The PDF of the actual text is 144 pages. The actual text provisions follow a different formatting style from American statutes. This thing is a complicated pain to read. I’ve added the page numbers of the relevant sections in the linked pdf of the law.
Here are the main provisions:
Very Broad Jurisdiction: The act includes “providers and deployers of AI systems that have their place of establishment or are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union.” (pg 68-69).
You have to register your “high-risk” AI project or foundational model with the government. Projects will be required to register the anticipated functionality of their systems. Systems that exceed this functionality may be subject to recall. This will be a problem for many of the more anarchic open-source projects. Registration will also require disclosure of data sources used, computing resources (including time spent training), performance benchmarks, and red teaming. (pg 23-29).
Expensive Risk Testing Required. Apparently, the various EU states will carry out “third party” assessments in each country, on a sliding scale of fees depending on the size of the applying company. Tests must use benchmarks that have yet to be created. Post-release monitoring is required (presumably by the government). Recertification is required if models show unexpected abilities. Recertification is also required after any substantial training. (pg 14-15, see provision 4 a for clarity that this is government testing).
Risks Very Vaguely Defined: The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy? Could this act itself be a risk to democracy? (pg 26).
Open Source LLMs Not Exempt: Open source foundational models are not exempt from the act. The programmers and distributors of the software have legal liability. For other forms of open source AI software, liability shifts to the group employing the software or bringing it to market. (pg 70).
API Essentially Banned. APIs allow third parties to implement an AI model without running it on their own hardware. Some implementation examples include AutoGPT and LangChain. Under these rules, if a third party, using an API, figures out how to get a model to do something new, that third party must then get the new functionality certified.
The prior provider is required, under the law, to provide the third party with what would otherwise be confidential technical information so that the third party can complete the licensing process. The ability to compel confidential disclosures means that startup businesses and other tinkerers are essentially banned from using an API, even if the tinkerer is in the US. The tinkerer might make their software available in Europe, which would give rise to a need to license it and compel disclosures. (pg 37).
Open Source Developers Liable. The act is poorly worded. The act does not cover free and open-source AI components, but foundational models (LLMs) are considered separate from components. What this seems to mean is that you can open-source traditional machine learning models but not generative AI.
If an American open-source developer placed a model, or code using an API, on GitHub – and the code became available in the EU – the developer would be liable for releasing an unlicensed model. Further, GitHub would be liable for hosting an unlicensed model. (pg 37 and 39-40).
LoRA Essentially Banned. LoRA is a technique for cheaply adding new information and capabilities to a model over time; a short sketch of how it works appears after this list. Open-source projects use it because they cannot afford billion-dollar computing infrastructure. Major AI models are also rumored to use it, as training this way is both cheaper and easier to safety-check than new versions of a model that introduce many new features at once. (pg 14).
If an open-source project could somehow get the required certificates, it would need to recertify every time LoRA was used to expand the model.
Deployment Licensing. Deployers – people or entities using AI systems – are required to undergo a stringent permitting review process before launch. EU small businesses are exempt from this requirement. (pg 26).
Ability of Third Parties to Litigate. Concerned third parties have the right to litigate through a country’s AI regulator (established by the act). This means that the deployment of an AI system can be individually challenged in multiple member states. Third parties can litigate to force a national AI regulator to impose fines. (pg 71).
Very Large Fines. Fines for non-compliance range from 2% to 4% of a company’s gross worldwide revenue. For individuals, that can reach €20,000,000. European-based SMEs and startups get a break when it comes to fines. (pg 75).
R&D and Clean Energy Systems In The EU Are Exempt. AI can be used for R&D tasks or clean energy production without complying with this system. (pg 64-65).
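As background on the technique mentioned above (general context, not anything defined in the act), LoRA trains two small low-rank matrices alongside a frozen pretrained weight matrix, so an update touches a tiny fraction of the parameters that full retraining would. A minimal NumPy sketch with illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 512, 512, 8     # rank << d_in is what makes the update cheap

# Frozen pretrained weight (stands in for one layer of a large model)
W = rng.normal(size=(d_out, d_in))

# LoRA adds a low-rank update W + B @ A, where only A and B are trained
A = rng.normal(scale=0.01, size=(rank, d_in))   # trainable
B = np.zeros((d_out, rank))                     # trainable, zero-init so the model starts unchanged

def forward(x, scale=1.0):
    """Layer output with the low-rank adaptation applied."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=(d_in,))
print(np.allclose(forward(x), W @ x))            # True before training: the adapter starts as a no-op

# Parameter counts: full fine-tune vs. LoRA adapter
print("full:", W.size, "lora:", A.size + B.size) # 262144 vs. 8192
```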
AI Act and US Law
The broad grant of extraterritorial jurisdiction is going to be a problem. The AI Act would let any crank with a problem about AI – at least if they are EU citizens – force EU governments to take legal action if unlicensed models were somehow available in the EU. That goes very far beyond simply requiring companies doing business in the EU to comply with EU laws.
The top problem is the API restrictions. Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill. A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price. Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.
What the EU wants runs contrary to what the FTC is demanding. For an American company to actually impose such restrictions in the US would bring up a host of anti-trust problems. Model training costs limit availability to highly capitalized actors. The FTC has been very frank that it does not want to see a repeat of the Amazon situation, where a larger company uses its position to secure the bulk of profits for itself – at the expense of smaller partners. Acting in the manner the AI Act seeks would bring up major anti-trust issues for American companies.
Outside of the anti-trust provisions, the AI Act’s punishment of innovation represents a conflict point. For American actors, finding a new way to use software to make money is a good thing. Under the EU act, finding a new way to use software voids the safety certification, requiring a new licensing process. Disincentives to innovation are likely to cause friction given the statute’s extraterritorial reach.
Finally, the open source provisions represent a major problem. The AI Act treats open-source developers working on or with foundational models as bad actors. Developers and, seemingly, distributors are liable for releasing unlicensed foundation models – or, apparently, foundation-model-enhancing code. For all other forms of open-source machine learning, the responsibility for licensing falls to whoever is deploying the system.
Trying to sanction parts of the tech ecosystem is a bad idea. Open-source developers are unlikely to respond well to being told by a government that they can’t program something – especially if the government isn’t their own. Additionally, what happens if GitHub and the various co-pilots simply say that Europe is too difficult to deal with and shut down access? That may have repercussions that have not been thoroughly thought through.
Defects of the Act
To top everything off, the AI Act appears to encourage unsafe AI. It seeks to encourage narrowly tailored systems. We know from experience – especially with social media – that such systems can be dangerous. Infamously, many social media algorithms only look at the engagement value of content. They are structurally incapable of judging the effect of the content. Large language models can at least be trained that pushing violent content is bad. From an experience standpoint, the foundational models that the EU is afraid of are safer than the narrowly tailored systems the act pushes developers toward.
This is a deeply corrupt piece of legislation. If you are afraid of large language models, then you need to be afraid of them in all circumstances. Giving R&D models a pass shows that you are less than serious about your legislation. The most likely effect of such a policy is to create a society where the elite have access to R&D models, and nobody else – including small entrepreneurs – does.
I suspect this law will pass, and I suspect the EU will find that they have created many more problems than they anticipated. That’s unfortunate as some of the regulations, especially relating to algorithms used by large social networks, do need addressing.
Article link: https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/
“I have suddenly switched my views on whether these things are going to be more intelligent than us.”
By Will Douglas Heaven May 2, 2023

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.
Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.
At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.
The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I’m getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”
But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster.
Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”
That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.
“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”
Foundations
Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.
One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.
But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.
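For readers unfamiliar with the terminology, here is a textbook-style toy example of backpropagation: a tiny two-layer network learning XOR, with the error pushed back through the layers and the connection weights nudged on each pass. It is a generic illustration, not code from Hinton’s work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny two-layer network
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "connections" are just numbers: weights and biases for two layers
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backward pass (backpropagation): push the error back through each layer
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Learning = adjusting the numbers that represent the connections
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(predictions, 2).ravel())   # typically close to [0, 1, 1, 0]
```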
“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.
“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”
A new intelligence
For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”
Hinton’s fears will strike many as the stuff of science fiction. But here’s his case.
As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction as much energy as neural networks do.
“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”
Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.
Compare a pretrained large language model with a human in the speed of learning a task like that and the human’s edge vanishes, he says.
What about the fact that large language models make so much stuff up? Known as “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say.
Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”
The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice.
We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”
Of course, brains still do many things better than computers: drive a car, learn to walk, imagine the future. And brains do it on a cup of coffee and a slice of toast. “When biological intelligence was evolving, it didn’t have access to a nuclear power station,” he says.
But Hinton’s point is that if we are willing to pay the higher costs of computing, there are crucial ways in which neural networks might beat biology at learning. (And it’s worth pausing to consider what those costs entail in terms of energy and carbon.)
Learning is just the first string of Hinton’s argument. The second is communicating. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
What does all this add up to? Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”
That’s a huge claim. But AI is a polarized field: it would be easy to find people who would laugh in his face—and others who would nod in agreement.
People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”
Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.”
How it could all go wrong
Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”
He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.
“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”
Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?
“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”
There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”
Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”
But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”
“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”
Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”
Just look up
One of Hinton’s priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.
Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.
This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.
Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.
“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.
Hinton’s argument is sobering. I share his bleak assessment of people’s collective inability to act when faced with serious threats. It is also true that AI risks causing real harm—upending the job market, entrenching inequality, worsening sexism and racism, and more. We need to focus on those problems. But I still can’t make the jump from large language models to robot overlords. Perhaps I’m an optimist.
When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
Plus: Geoffrey Hinton tells us why he’s now scared of the tech he helped build.
By Melissa Heikkilä May 2, 2023

This story originally appeared in The Algorithm, MIT Technology Review’s weekly newsletter on AI.
This week’s big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.
But first, we need to talk about consent in AI.
Last week, OpenAI announced it is launching an “incognito” mode that does not save users’ conversation history or use it to improve its AI language model ChatGPT. The new feature lets users switch off chat history and training and allows them to export their data. This is a welcome move in giving people more control over how their data is used by a technology company.
OpenAI’s decision to allow people to opt out comes as the firm is under increasing pressure from European data protection regulators over how it uses and collects data. OpenAI had until yesterday, April 30, to accede to Italy’s requests that it comply with the GDPR, the EU’s strict data protection regime. Italy restored access to ChatGPT in the country after OpenAI introduced an opt-out form for users and the ability to object to personal data being used in ChatGPT. The regulator had argued that OpenAI had hoovered up people’s personal data without their consent and had not given them any control over how it is used.
In an interview last week with my colleague Will Douglas Heaven, OpenAI’s chief technology officer, Mira Murati, said the incognito mode was something that the company had been “taking steps toward iteratively” for a couple of months and had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were not related to the EU’s GDPR investigations.
“We want to put the users in the driver’s seat when it comes to how their data is used,” says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.
But despite what OpenAI says, Daniel Leufer, a senior policy analyst at the digital rights group Access Now, reckons that GDPR—and the EU’s pressure—has played a role in forcing the firm to comply with the law. In the process, it has made the product better for everyone around the world.
“Good data protection practices make products safer [and] better [and] give users real agency over their data,” he said on Twitter.
A lot of people dunk on the GDPR as an innovation-stifling bore. But as Leufer points out, the law shows companies how they can do things better when they are forced to do so. It’s also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.
Other experiments in AI to grant users more control show that there is clear demand for such features.
Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion.
Since December, around 5,000 people and several large online art and image platforms, such as Art Station and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means that their images are not going to be used in the next version of Stable Diffusion.
Dryhurst thinks people should have the right to know whether or not their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.
“Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist,” he says.
Deeper Learning
Geoffrey Hinton tells us why he’s now scared of the tech he helped build
Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. MIT Technology Review’s senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google.
Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.
And oh boy did he have a lot to say. “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he told Will. “How do we survive that?” Read more from Will Douglas Heaven here.
Even Deeper Learning
A chatbot that asks questions could help you spot when it makes no sense
AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts and have inconsistent logic that can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information.
Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions instead of presenting information as statements helped people notice when the AI’s logic didn’t add up. A system that asked questions also made people feel more in charge of decisions made with AI, and researchers say it can reduce the risk of overdependence on AI-generated information. Read more from me here.
Bits and Bytes
Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make stuff up, and they are ridiculously easy to hack into. Rolling these technologies out in one of the highest-stakes sectors is a disaster waiting to happen. (Vice)
Hugging Face launched an open-source alternative to ChatGPT
HuggingChat works in the same way as ChatGPT, but it is free to use and for people to build their own products on. Open-source versions of popular AI models are on a roll—earlier this month Stability.AI, creator of the image generator Stable Diffusion, also launched an open-source version of an AI chatbot, StableLM.
How Microsoft’s Bing chatbot came to be and where it’s going next
Here’s a nice behind-the-scenes look at Bing’s birth. I found it interesting that to generate answers, Bing does not always use OpenAI’s GPT-4 language model but Microsoft’s own models, which are cheaper to run. (Wired)
AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs copying the styles of popular artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, scraping data off the internet, and what constitutes fair use. (The Verge)