healthcarereimagined

Envisioning healthcare for the 21st century


Quantum Cryptography Market to Exceed $3B by 2028 – Nextgov

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

By FRANK KONKEL, MAY 15, 2023 02:58 PM ET

The growth reflects rising concern about the potential threat posed by fully realized quantum computers.

The global quantum cryptography market will be worth an estimated $500 million in 2023, but—much like the rapidly evolving technology itself—the market is expected to grow rapidly over the next half-decade, according to a forecast issued by Ireland-based research firm MarketsandMarkets.

Issued in May, the forecast expects the quantum cryptography market to increase at a compound annual growth rate of more than 40% over the next five years, topping $3 billion by 2028. The forecast defines quantum cryptography as “a method of securing communication that uses the principles of quantum mechanics” to secure communication channels and data.
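As a sanity check on those endpoints, the implied growth rate can be recomputed directly. A minimal Python sketch, using only the $500 million (2023) and $3 billion (2028) figures quoted above:

# Implied compound annual growth rate (CAGR) from the forecast's endpoints.
start_value = 0.5e9   # estimated 2023 market size, USD
end_value = 3.0e9     # forecast 2028 market size, USD
years = 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~43.1%, consistent with "more than 40%"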

While the market for quantum cryptography products is expected to grow rapidly in the coming years, it’s already a highly competitive arena, in part due to the technical complexity required to commercialize the technology. The market includes companies that develop quantum standards, quantum random number generators and quantum key distribution systems.

The market’s growth mirrors interest in quantum cryptography—and quantum computing in general—across the government. Last November, the Office of Management and Budget released a memo outlining the need for federal agencies to begin migrating to post-quantum cryptography to prepare for the onset of commercialized quantum computers. The Government Accountability Office, which acts as the investigative arm of Congress, recently offered fuel to the fire of concern over quantum computers, stating that true quantum computers could break traditional methods of encryption commonly in use by industry and government agencies.

Not coincidentally, the government segment is expected to account for the largest share of the quantum cryptography market over the next five years.

“With the increasing use of mobility, government bodies across the globe have progressively started using mobile devices to enhance workers’ productivity and improve the functioning of public sector departments,” the forecast states. “They must work on critical information, intelligence reports and other confidential data. It can help protect sensitive data from hackers and provide a secure platform for conducting transactions and exchanging information.”

Article link: https://www.nextgov.com/cybersecurity/2023/05/quantum-cryptography-market-exceed-3-billion-2028/386355/

EU AI Act To Target US Open Source Software

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

Delos Prime, May 13, 2023

In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models.  The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe.  While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.

Any model made available in the EU without first passing extensive, and expensive, licensing would subject companies to massive fines: the greater of €20,000,000 or 4% of worldwide revenue.  Open-source developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
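To make the fine structure concrete, here is a minimal sketch of the "greater of €20,000,000 or 4% of worldwide revenue" rule described above; the revenue figure in the example is hypothetical:

def max_fine_eur(worldwide_revenue_eur: float) -> float:
    # Greater of a flat EUR 20 million or 4% of worldwide revenue,
    # per the fine structure described in the amended act.
    return max(20_000_000, 0.04 * worldwide_revenue_eur)

# Hypothetical firm with EUR 2 billion in worldwide revenue:
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")   # EUR 80,000,000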

If enacted, enforcement would be out of the hands of EU member states. Under the AI Act, third parties could sue national governments to compel fines. The act has extraterritorial jurisdiction. A European government could be compelled by third parties to seek conflict with American developers and businesses.

The Amended AI Act

The PDF of the actual text is 144 pages.  Its provisions follow a different formatting style from American statutes, which makes it a complicated pain to read.  I’ve added the page numbers of the relevant sections in the linked PDF of the law.

Here are the main provisions:

Very Broad Jurisdiction:  The act includes “providers and deployers of AI systems that have their place of establishment or are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union.” (pg 68-69).

You have to register your “high-risk” AI project or foundational model with the government.  Projects will be required to register the anticipated functionality of their systems.  Systems that exceed this functionality may be subject to recall.  This will be a problem for many of the more anarchic open-source projects.  Registration will also require disclosure of data sources used, computing resources (including time spent training), performance benchmarks, and red teaming. (pg 23-29).
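The registration requirement amounts to a structured disclosure. A rough sketch of what such a filing might contain, based only on the fields listed above; every field name here is an illustrative assumption, since the act prescribes no particular schema:

# Illustrative registration record; field names are assumptions, not the act's schema.
registration = {
    "system_name": "example-foundation-model",            # hypothetical project
    "anticipated_functionality": ["text generation", "summarization"],
    "data_sources": ["licensed corpus A", "public web crawl B"],
    "compute": {"gpu_hours": 1_200_000, "training_days": 90},
    "performance_benchmarks": {"benchmark_X": 0.82},
    "red_teaming": {"performed": True, "report_attached": True},
}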

Expensive Risk Testing Required.  Apparently, the various EU states will carry out “third party” assessments in each country, on a sliding scale of fees depending on the size of the applying company.  Tests must use benchmarks that have yet to be created.  Post-release monitoring is required (presumably by the government).  Recertification is required if models show unexpected abilities.  Recertification is also required after any substantial training.  (pg 14-15; see provision 4a for clarity that this is government testing).

Risks Very Vaguely Defined:  The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy?  Could this act itself be a risk to democracy? (pg 26).

Open Source LLMs Not Exempt:  Open source foundational models are not exempt from the act.  The programmers and distributors of the software have legal liability.  For other forms of open source AI software, liability shifts to the group employing the software or bringing it to market.  (pg 70).

API Essentially Banned.  APIs allow third parties to implement an AI model without running it on their own hardware.  Some implementation examples include AutoGPT and LangChain.  Under these rules, if a third party, using an API, figures out how to get a model to do something new, that third party must then get the new functionality certified.

The prior provider is required, under the law, to provide the third party with what would otherwise be confidential technical information so that the third party can complete the licensing process.  The ability to compel confidential disclosures means that startup businesses and other tinkerers are essentially banned from using an API, even if the tinkerer is in the US.  The tinkerer might make their software available in Europe, which would give rise to a need to license it and compel disclosures. (pg 37).

Open Source Developers Liable.  The act is poorly worded.  It does not cover free and open-source AI components, but foundational models (LLMs) are considered separate from components.  What this seems to mean is that you can open-source traditional machine learning models, but not generative AI.

If an American open-source developer placed a model, or code using an API, on GitHub – and the code became available in the EU – the developer would be liable for releasing an unlicensed model.  Further, GitHub would be liable for hosting an unlicensed model.  (pg 37 and 39-40).

LoRA Essentially Banned.  LoRA (low-rank adaptation) is a technique for cheaply adding new information and capabilities to a model.  Open-source projects use it because they cannot afford billion-dollar computing infrastructure.  Major AI models are also rumored to use it, as training this way is both cheaper and easier to safety-check than releasing new versions of a model that introduce many new features at once.  (pg 14).

If an open-source project could somehow get the required certificates, it would need to recertify every time LoRA was used to expand the model.
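For readers unfamiliar with the technique, here is a minimal NumPy sketch of the core LoRA idea: a frozen pretrained weight matrix W is augmented with a trainable low-rank update B @ A, so only the two small adapter matrices change during training. This is a generic illustration of the method, not any particular project’s implementation:

import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8              # adapter rank is far smaller than the layer

W = rng.normal(size=(d_out, d_in))           # pretrained weight, frozen
A = rng.normal(size=(rank, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, rank))                  # trainable up-projection, starts at zero

def lora_forward(x, scale=1.0):
    # Effective weight is W + scale * (B @ A); only A and B are ever updated.
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
y = lora_forward(x)   # equals x @ W.T until B is trained away from zero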

Deployment Licensing.  Deployers (people or entities using AI systems) are required to undergo a stringent permitting review before launch.  EU small businesses are exempt from this requirement. (pg 26).

Ability of Third Parties to Litigate.  Concerned third parties have the right to litigate through a country’s AI regulator (established by the act).  This means that the deployment of an AI system can be individually challenged in multiple member states.  Third parties can litigate to force a national AI regulator to impose fines. (pg 71).

Very Large Fines.  Fines for non-compliance range from 2% to 4% of a company’s gross worldwide revenue.  For individuals, fines can reach €20,000,000.  European-based SMEs and startups get a break when it comes to fines. (pg 75).

R&D and Clean Energy Systems In The EU Are Exempt.  AI can be used for R&D tasks or clean energy production without complying with this system. (pg 64-65).

AI Act and US Law

The broad grant of extraterritorial jurisdiction is going to be a problem.  The AI Act would let any crank with a problem about AI – at least if they are EU citizens – force EU governments to take legal action if unlicensed models were somehow available in the EU.  That goes very far beyond simply requiring companies doing business in the EU to comply with EU laws.

The top problem is the API restrictions.  Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill.  A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price.  Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.

What the EU wants runs contrary to what the FTC is demanding.  For an American company to actually impose such restrictions in the US would bring up a host of anti-trust problems.  Model training costs limit availability to highly capitalized actors.  The FTC has been very frank that it does not want to see a repeat of the Amazon situation, where a larger company uses its position to secure the bulk of profits for itself – at the expense of smaller partners.  Acting in the manner the AI Act seeks would bring up major anti-trust issues for American companies.

Outside of the anti-trust issues, the AI Act’s punishment of innovation represents a conflict point.  For American actors, finding a new way to use software to make money is a good thing.  Under the EU act, finding a new way to use software voids the safety certification, requiring a new licensing process.  Disincentives to innovation are likely to cause friction given the statute’s extraterritorial reach.

Finally, the open-source provisions represent a major problem.  The AI Act treats open-source developers working on or with foundational models as bad actors.  Developers and, seemingly, distributors are liable for releasing unlicensed foundation models – or, apparently, foundation-model-enhancing code.  For all other forms of open-source machine learning, the responsibility for licensing falls to whoever deploys the system.

Trying to sanction parts of the tech ecosystem is a bad idea.  Open-source developers are unlikely to respond well to being told by a government that they can’t program something – especially if the government isn’t their own.  Additionally, what happens if GitHub and the various co-pilots simply say that Europe is too difficult to deal with and shut down access?  That may have repercussions that have not been thoroughly thought through.

Defects of the Act

To top everything off, the AI Act appears to encourage unsafe AI.  It seeks to encourage narrowly tailored systems.  We know from experience – especially with social media – that such systems can be dangerous.  Infamously, many social media algorithms only look at the engagement value of content.  They are structurally incapable of judging the effect of the content.  Large language models can at least be trained that pushing violent content is bad.  From an experience standpoint, the foundational models the EU is afraid of are safer than the narrow systems the act is driving developers toward.

This is a deeply corrupt piece of legislation.  If you are afraid of large language models, then you need to be afraid of them in all circumstances.  Giving R&D models a pass shows that you are less than serious about your legislation.  The most likely effect of such a policy is to create a society where the elite have access to R&D models, and nobody else – including small entrepreneurs – does.

I suspect this law will pass, and I suspect the EU will find that they have created many more problems than they anticipated. That’s unfortunate as some of the regulations, especially relating to algorithms used by large social networks, do need addressing.

Article link: https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/

Geoffrey Hinton tells us why he’s now scared of the tech he helped build – MIT Technology Review

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.


“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

By Will Douglas Heaven May 2, 2023

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.  

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I’m getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”

But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster.  

Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”

That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”

Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.    

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”

Foundations

Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
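For readers who want to see the idea concretely, here is a toy NumPy sketch of backpropagation for a one-hidden-layer network; an illustration of the algorithm described above, not Hinton’s original formulation:

import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(16, 3))                    # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy targets

W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.1

for _ in range(500):
    h = sigmoid(X @ W1)                  # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)  # backward pass: propagate error
    d_h = (d_out @ W2.T) * h * (1 - h)   # ...back through the hidden layer
    W2 -= lr * h.T @ d_out               # gradient step on each weight matrix
    W1 -= lr * X.T @ d_h

print(f"mean error after training: {np.abs(out - y).mean():.3f}")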

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.

But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.

“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.

“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”

A new intelligence

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”

Hinton’s fears will strike many as the stuff of science fiction. But here’s his case. 

As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction as much energy as neural networks do.  

“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”

Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.
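Few-shot learning in this sense often requires no retraining at all: the examples are simply placed in the model’s input. A minimal sketch of the pattern; the prompt format is illustrative, and the generate call stands in for whatever model interface is available rather than any specific vendor’s API:

examples = [
    ("The movie was wonderful.", "positive"),
    ("I want my money back.", "negative"),
    ("Best purchase I've made all year.", "positive"),
]
query = "The plot dragged and the acting was flat."

# Build a prompt containing a handful of labeled examples, then the new case.
prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"

# response = generate(prompt)   # hypothetical model call; expected completion: "negative"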

Compare a pretrained large language model with a human in the speed of learning a task like that and the human’s edge vanishes, he says.

What about the fact that large language models make so much stuff up? Known as “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say.  

Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”

The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice.  

We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”

Of course, brains still do many things better than computers: drive a car, learn to walk, imagine the future. And brains do it on a cup of coffee and a slice of toast. “When biological intelligence was evolving, it didn’t have access to a nuclear power station,” he says.   

But Hinton’s point is that if we are willing to pay the higher costs of computing, there are crucial ways in which neural networks might beat biology at learning. (And it’s worth pausing to consider what those costs entail in terms of energy and carbon.)

Learning is just the first string of Hinton’s argument. The second is communicating. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”

What does all this add up to? Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”

That’s a huge claim. But AI is a polarized field: it would be easy to find people who would laugh in his face—and others who would nod in agreement. 

People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”

Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.”

How it could all go wrong

Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.

“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”

Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”

But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”

Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

Just look up

One of Hinton’s priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.

Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.

This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.

Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.

“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.

Hinton’s argument is sobering. I share his bleak assessment of people’s collective inability to act when faced with serious threats. It is also true that AI risks causing real harm—upending the job market, entrenching inequality, worsening sexism and racism, and more. We need to focus on those problems. But I still can’t make the jump from large language models to robot overlords. Perhaps I’m an optimist.

When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.

Be sure to tune in to Will Douglas Heaven’s live interview with Hinton at EmTech Digital on Wednesday, May 3, at 1:30 Eastern time. Tickets are available from the event website.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/amp/

We need to bring consent to AI – MIT Technology Review

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.


Plus: Geoffrey Hinton tells us why he’s now scared of the tech he helped build.

By Melissa Heikkilä May 2, 2023

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week’s big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.

But first, we need to talk about consent in AI. 

Last week, OpenAI announced it is launching an “incognito” mode that does not save users’ conversation history or use it to improve its AI language model ChatGPT. The new feature lets users switch off chat history and training and allows them to export their data. This is a welcome move in giving people more control over how their data is used by a technology company. 

OpenAI’s decision to allow people to opt out comes as the firm is under increasing pressure from European data protection regulators over how it uses and collects data. OpenAI had until yesterday, April 30, to accede to Italy’s requests that it comply with the GDPR, the EU’s strict data protection regime. Italy restored access to ChatGPT in the country after OpenAI introduced a user opt-out form and the ability to object to personal data being used in ChatGPT. The regulator had argued that OpenAI had hoovered up people’s personal data without their consent, and hadn’t given them any control over how it is used.

In an interview last week with my colleague Will Douglas Heaven, OpenAI’s chief technology officer, Mira Murati, said the incognito mode was something that the company had been “taking steps toward iteratively” for a couple of months and had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were not related to the EU’s GDPR investigations. 

“We want to put the users in the driver’s seat when it comes to how their data is used,” says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.  

But despite what OpenAI says, Daniel Leufer, a senior policy analyst at the digital rights group Access Now, reckons that GDPR—and the EU’s pressure—has played a role in forcing the firm to comply with the law. In the process, it has made the product better for everyone around the world. 

“Good data protection practices make products safer [and] better [and] give users real agency over their data,” he said on Twitter. 

A lot of people dunk on the GDPR as an innovation-stifling bore. But as Leufer points out, the law shows companies how they can do things better when they are forced to do so. It’s also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.

Other experiments in AI to grant users more control show that there is clear demand for such features. 

Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion. 

Since December, around 5,000 people and several large online art and image platforms, such as Art Station and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means that their images are not going to be used in the next version of Stable Diffusion. 

Dryhurst thinks people should have the right to know whether or not their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.  

“Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist,” he says.

Deeper Learning 

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. MIT Technology Review’s senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google.

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.

And oh boy did he have a lot to say. “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he told Will. “How do we survive that?” Read more from Will Douglas Heaven here. 

Even Deeper Learning

A chatbot that asks questions could help you spot when it makes no sense

AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts and have inconsistent logic that can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information. 

Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions instead of presenting information as statements helped people notice when the AI’s logic didn’t add up. A system that asked questions also made people feel more in charge of decisions made with AI, and researchers say it can reduce the risk of overdependence on AI-generated information. Read more from me here. 

Bits and Bytes

Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make stuff up, and they are ridiculously easy to hack into. Rolling these technologies out in one of the highest-stakes sectors is a disaster waiting to happen. (Vice) 

Hugging Face launched an open-source alternative to ChatGPT
HuggingChat works in the same way as ChatGPT, but it is free to use and open for people to build their own products on. Open-source versions of popular AI models are on a roll—earlier this month Stability.AI, creator of the image generator Stable Diffusion, also launched an open-source version of an AI chatbot, StableLM.

How Microsoft’s Bing chatbot came to be and where it’s going next
Here’s a nice behind-the-scenes look at Bing’s birth. I found it interesting that to generate answers, Bing does not always use OpenAI’s GPT-4 language model but Microsoft’s own models, which are cheaper to run. (Wired) 

AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs copying the styles of popular artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, scraping data off the internet, and what constitutes fair use. (The Verge)

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/05/02/1072556/we-need-to-bring-consent-to-ai/amp/

Forget Them Not

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

NIST Debuts New Cyber Guidance for Contractors Handling Sensitive Data – Nextgov

Posted by timmreardon on 05/20/2023
Posted in: Uncategorized.

The National Institute of Standards and Technology is accepting comments on the revised document through July 14.

Updates to federal guidelines for protecting sensitive, unclassified information were unveiled yesterday, emphasizing clarifications in security requirements to better safeguard critical data. 

Published by the National Institute of Standards and Technology, the revised draft of NIST SP 800-171 Rev. 3 is intended to help federal contractors understand how to protect Controlled Unclassified Information that they may handle when working with government entities.

Changes in the draft guidance for CUI include removing ambiguity and defining parameters for implementing cybersecurity protocols; increasing flexibility in selected security requirements; and helping organizations mitigate risk.

“Many of the newly added requirements specifically address threats to CUI, which recently has been a target of state-level espionage,” said Ron Ross, one of the publication’s authors and a NIST fellow. “We want to implement and maintain state-of-the-practice defenses because the threat space is changing constantly. We tried to express those requirements in a way that shows contractors what we do and why in federal cybersecurity. There’s more useful detail now with less ambiguity.”

Some of the digital assets NIST’s document covers include personal health information, critical energy infrastructure data and intellectual property. Safeguarding data that contributes to the U.S.’s critical infrastructures has been a chief priority for federal, state and local governments amid growing digital threats, with NIST a key player in helping fortify the nation’s digital security.

“Protecting CUI, including intellectual property, is critical to the nation’s ability to innovate—with far-reaching implications for our national and economic security,” Ross said. “We need to have safeguards that are sufficiently strong to do the job.”

NIST is accepting public feedback on the draft guidance until July 14. NIST stated that it anticipated introducing one more draft version of SP 800-171 Rev. 3 before publishing a final version in 2024.

Article link: https://www.nextgov.com/cybersecurity/2023/05/nist-debuts-new-cyber-guidance-contractors-handling-sensitive-data/386233/

Pentagon to transfer 5G efforts to CIO, establish O-RAN pilots – DefenseScoop

Posted by timmreardon on 05/13/2023
Posted in: Uncategorized.

On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, DOD CIO John Sherman said. 

BY MIKAYLA EASLEY, MAY 11, 2023

The Pentagon’s chief information officer will take the reins of the department’s efforts to operationalize 5G communications technology for warfighters this fall, CIO John Sherman said Thursday. 

For the last few years, the Office of the Undersecretary of Defense for Research and Engineering has been working to adopt 5G and future-generation wireless network technologies across the department. On Oct. 1, the CIO’s organization will take the lead on those efforts and expand the scope of current 5G testing and experimentation, Sherman announced at the DefenseTalks conference hosted by DefenseScoop.

“We’ve already been working left-seat and right-seat with research and engineering on this,” he said. “But we’ve got the lead as of Oct. 1 on the 5G pilots that are underway at the numerous DOD installations.”

In 2020, the Pentagon awarded contracts to multiple prime contractors to set up 5G and “FutureG” test bed projects at different military bases across the country. Each site experiments with a different way the department can utilize the technology, including creating smart warehouses enabled by 5G and bi-directional spectrum sharing.

But adopting O-RAN technology could be key for the Pentagon’s 5G and FutureG efforts, as it would allow the department to “break open that black box into different components,” said Thomas Rondeau, principal director for FutureG in the Pentagon’s research and engineering office.

Not only would the Pentagon’s focus on O-RAN bring new competition to the market and incentivize new innovation, but the open-architecture approach would also allow the department to experiment with new features in wireless communications, such as zero-trust security features, he noted.

“It’s going to change the game,” Rondeau said.

Article link: https://defensescoop.com/2023/05/11/dod-5g-oran-pilot/?

The three National Defense Science and Technology Strategy lines-of-effort – OUSD Research and Engineering

Posted by timmreardon on 05/12/2023
Posted in: Uncategorized.

As we set out to build an insurmountable S&T lead over ambitious competitors, the DoD will lean on the core principles outlined in the National Defense Science and Technology Strategy and three lines of effort (LOEs) as the cornerstones of our work. The LOEs – a focus on the joint mission, creating and fielding capabilities at speed and scale, and ensuring the foundations for research and development – will guide the capabilities we explore, our strategic investments, and our science and technology research priorities.

A Focus on the Joint Mission

The National Defense Strategy emphasizes that future military operations must be joint, and directs us to deliver appropriate, asymmetrical capabilities to the Joint Force. Recognizing that we operate in a resource-constrained environment, the DoD will take a rigorous, analytical approach to our investments, and avoid getting mired in wasteful technology races with our competitors. Consistent joint experimentation will accelerate our capacity to convert the joint warfighting concept to capabilities.

Create and Field Capabilities at Speed and Scale

Technological innovations and cutting-edge capabilities are useless to the Joint Force when they languish in research labs. We cannot let the perfect be the enemy of the good, and we cannot let outdated processes prevent collaboration. To succeed, the DoD will need to pursue new and novel ways to bridge the valleys of death in defense innovation. We will foster a more vibrant defense innovation ecosystem, strengthening collaboration and communication with allies and non-traditional partners in industry and academia. We will operate with a sense of urgency and pursue the continuous transition of capabilities to our warfighters.

Ensuring the Foundations for Research and Development

To thrive in the era of competition, we must provide those at the forefront of S&T with world-class tools and facilities, and create an environment in which the DoD is the first choice for the world’s brightest scientific minds. By enhancing our physical and digital infrastructure, creating robust professional development programs for the workforce of today, and investing in pipelines for the workforce of tomorrow, we will ensure America is ready for the challenges of decades to come.

Article link: https://www.linkedin.com/pulse/three-national-defense-science-technology-strategy-lines-of-effort

Setting Emerging Tech’s Standards Dominates NIST’s 2024 Goals – Nextgov

Posted by timmreardon on 05/12/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY, MAY 10, 2023

NIST Director Laurie Locascio discussed her agency’s plans before a House hearing, revealing major focuses on critical and emerging technologies.

Emerging technologies feature heavily in the FY2024 budget request from the National Institute of Standards and Technology.

Director Laurie Locascio discussed her agency’s budget plans before the House Committee on Science, Space and Technology Wednesday morning, covering a wide range of spending plans, including improving outdated research facilities, boosting domestic manufacturing efforts, investing in cybersecurity education and developing guidance for emerging tech systems.

A total of $995 million in science and technical funding was requested for the agency’s 2024 budget, including $68.7 million intended for new research programs. Among the emerging technologies NIST intends to focus on developing throughout the next year are artificial intelligence systems, quantum technologies and biotechnology.

“It is essential that we remain in a strong leadership position as these technologies are major drivers of economic growth,” Locascio testified.

Chief among the priority areas for AI will be developing standards and benchmarks to further aid in the technologies’ responsible development. Part of this will involve working with ally nations to foster a shared set of technical standards that promote common understanding of how AI systems should—and should not—be used. This process is crucial to keeping barriers in the international trade ecosystem low.

“The budget will provide new resources to expand the capabilities for benchmarking and evaluation of AI systems to ensure that the US can lead in AI innovation, while ensuring that we responsibly address the risks of this rapidly developing technology,” she said. 

Increasing public involvement is another pillar in NIST’s strategy to help cultivate a roadmap surrounding trustworthy AI development. Locascio said that in 2024, NIST aims to apply the Artificial Intelligence Risk Management Framework’s provisions to gauge risks associated with generative AI systems like ChatGPT, as well as create a public working group to provide input specifically on the generative branch of AI technologies.

Locascio also talked about the agency’s plans for quantum information sciences. Some $220 million from the FY2024 proposed budget is intended for research specifically in the quantum technology field, focusing on fundamental measurement research and post-quantum cryptography development.

“We have a number of different activities that we’re doing but related to workforce development of the new cybersecurity framework and security for post quantum encryption algorithms,” she confirmed.

NIST also aims to work on developing similar standards for nascent biotechnologies. She highlighted gene editing as one of the standout topics within the biotechnology field, and discussed ongoing private sector engagement with NIST to guide the U.S. biotech sector on its product development, such as antibody-based treatments.

“Our goal is to make sure that when you’re editing the genome, you know what you did…and…you know how to anticipate the outcome,” Locascio said. 

Emerging and critical technologies have steadily risen as a priority item within the Biden administration’s docket. Last week, the White House unveiled its inaugural National Standards Strategy—developed with the help of NIST—to promote its leadership in regulating how these technologies will be used in a domestic and global context.

“We’re really at a place where we need to be proactive in critical and emerging technologies, make sure that we are at the table promoting U.S. innovation and our competitive technologies and bring them to the table in the international standards forum, and represent leadership positions there as well,” Locascio said. “So NIST is really at the forefront of that.”

Article link: https://www.nextgov.com/emerging-tech/2023/05/setting-emerging-techs-standards-dominates-nists-2024-goals/386199/

Subcommittee on VA Technology Modernization Oversight Hearing

Posted by timmreardon on 05/11/2023
Posted in: Uncategorized.
