“Amongst the novel objects that attracted my attention during my stay in the United States, nothing struck me more forcibly than the general equality of conditions. I readily discovered the prodigious influence which this primary fact exercises on the whole course of society, by giving a certain direction to public opinion, and a certain tenor to the laws; while imparting new maxims to the governing powers, and peculiar habits to the governed. I speedily perceived that the influence of this fact extends far beyond the political character and laws of the country, and that it has no less empire over civil society than over the Government; it creates opinions, engenders sentiments, suggests the ordinary practices of life, and modifies whatever it does not produce. The more I advanced in the study of American society, the more I perceived that the equality of conditions is the fundamental fact from which all others seem to be derived, and the central point at which all my observations constantly terminated.”
WASHINGTON — Imagine a militarized version of ChatGPT, trained on secret intelligence. Instead of painstakingly piecing together scattered database entries, intercepted transmissions and news reports, an analyst types in a quick query in plain English and gets back, in seconds, a concise summary — a prediction of hostile action, for example, or a profile of a terrorist.
But is that output true? With today’s technology, you can’t count on it at all.
That’s the potential and the peril of “generative” AI, which can create entirely new text, code or images rather than just categorizing and highlighting existing ones. Agencies like the CIA and State Department have already expressed interest. But for now, at least, generative AI has a fatal flaw: It makes stuff up.
“I’m excited about the potential,” said Lt. Gen. (ret.) Jack Shanahan, founding director of the Pentagon’s Joint Artificial Intelligence Center (JAIC) from 2018 to 2020. “I play around with Bing AI, I use ChatGPT pretty regularly — [but] there is no intelligence analyst right now that would use these systems in any way other than with a hefty grain of salt.”
“This idea of hallucinations is a major problem,” he told Breaking Defense, using the term of art for AI answers with no foundation in reality. “It is a showstopper for intelligence.”
Shanahan’s successor at JAIC, recently retired Marine Corps Lt. Gen. Michael Groen, agreed. “We can experiment with it, [but] practically it’s still years away,” he said.
Instead, Shanahan and Groen told Breaking Defense that, at this point, the Pentagon should swiftly start experimenting with generative AI — with abundant caution and careful training for would-be users — with an eye to seriously using the systems when and if the hallucination problem can be fixed.
To see the current issues with popular generative AI models, just ask the AI about Shanahan and Groen themselves, public figures mentioned in Wikipedia and many news reports.
Then-Lt. Gen. John “Jack” Shanahan
On its first attempt, ChatGPT 3.5 correctly identified Shanahan as a retired Air Force officer, AI expert, and former director of both the groundbreaking Project Maven and the JAIC. But it also said he was a graduate of the Air Force Academy, a fighter pilot, and an advisor to AI startup DeepMind — none of which is true. And almost every date in the AI-generated bio was off, some by nearly a decade.
“Wow,” Shanahan said of the bot’s output. “Many things are not only wrong, but seemingly out of left field… I’ve never had any association with DeepMind.”
What about the upgraded ChatGPT 4.0? “Almost as bad!” he said. This time, in addition to a different set of wrong jobs and fake dates, he was given two children that did not exist. Weirder yet, when you enter the same question into either version a second time, it generates a new and different set of errors.
Nor is the problem unique to ChatGPT. Google’s Bard AI, built on a different AI engine, did somewhat better, but it still said Shanahan was a pilot and made up assignments he never had.
“The irony is that it could pull my official AF bio and get everything right the first time,” Shanahan said.
And Groen? “The answers about my views on AI are pretty good,” the retired Marine told Breaking Defense after reviewing ChatGPT 3.5’s first effort, which emphasized his enthusiasm for military AI tempered by stringent concern for ethics. “I suspect that is because there are not too many people with my name that have publicly articulated their thoughts on this topic.”
On the other hand, Groen went on, “many of the facts of my biography are incorrect, [e.g.] it got place of birth, college attended, year entered service wrong. It also struggled with units I have commanded or was a part of.”
How could generative AI get the big picture right but mess up so many details? It’s not that the data isn’t out there: As with Shanahan, Groen’s official military bio is online.
But so is a lot of other information, he pointed out. “I suspect that there are so many ‘Michaels’ and ‘Marine Michaels’ on the global internet that the ‘pattern’ that emerged contains elements that are credible, but mostly incorrect,” Groen said.
Then-Lt. Gen. Michael Groen, director of the Joint AI Center, briefs reporters in 2020. (screenshot of DoD video)
This tendency to conflate general patterns and specific facts might explain the AIs’ insistence that Shanahan is an Air Force Academy grad and fighter pilot. Neither of those things is true of him as an individual, but they are commonly mentioned attributes of Air Force officers as a group. It’s not true, but it’s plausible — and this kind of AI doesn’t remember a database of specific facts, only a set of statistical correlations between different words.
“The system does not know facts,” Shanahan said. “It really is a sophisticated word predictor.”
The companies behind the tech each acknowledge that their AI apps may produce incorrect information, and they urge users to treat the output with caution. The missteps of AI chatbots are not dissimilar to those of generative AI artists, like Stable Diffusion, which produced the image of a mutant Abrams below.
Output from a generative AI, Stable Diffusion, when prompted by Breaking Defense with “M1 Abrams tank.”
What Goes Wrong
In the cutting-edge Large Language Models that drive ChatGPT, Bard, and their ilk, “each time it generates a new word, it is assigning a sort of likelihood score to every single word that it knows,” explained Micah Musser, a research analyst at Georgetown University’s Center for Security and Emerging Technology. “Then, from that probability distribution, it will select — somewhat at random – one of the more likely words.”
That’s why asking the same question more than once gets subtly or even starkly different answers every time. And while training the AI on larger datasets can help, Musser told Breaking Defense, “even if it does have sufficient data, it does have sufficient context, if you ask a hyper-specific question and it hasn’t memorized the [specific] example, it may just make something up.” Hence the plausible but invented dates of birth, graduation, and so on for Shanahan and Groen.
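Musser’s description can be illustrated with a toy sketch (not any vendor’s actual code): the model assigns a score to every candidate word, the scores are normalized into a probability distribution, and one of the likelier words is drawn at random. The words and scores below are invented for illustration.

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word from raw model scores, 'somewhat at random'.

    scores: dict mapping candidate words to raw scores (logits).
    Higher-scoring words are more likely, but never guaranteed, to win.
    """
    # Softmax: turn raw scores into probabilities that sum to 1.
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    # Draw one word in proportion to its probability.
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

# Hypothetical scores for the word following "graduated from the":
scores = {"Air": 3.2, "Naval": 1.1, "University": 0.7, "banana": -4.0}
print(sample_next_word(scores))
```

Run repeatedly, this usually prints “Air” but sometimes “Naval” or “University” — a plausible-sounding completion chosen by correlation, not by looking up a fact, which is the mechanism behind both the run-to-run variation and the hallucinated biographies above.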
Now, it’s understandable when human brains misremember facts and even invent details that don’t exist. But surely a machine can record the data perfectly?
Not these machines, said Dimitry Fisher, head of data science and applied research at AI company Aicadium. “They don’t have memory in the same sense that we do,” he told Breaking Defense. “They cannot quote sources… They cannot show what their output has been inspired by.”
Ironically, Fisher told Breaking Defense, earlier attempts to teach AIs natural language did have distinct mechanisms for memorizing specific facts and inferring general patterns, much like the human brains that inspired them. But such software ran too slowly on any practical hardware to be of much use, he said. So instead the industry shifted to a type of AI called a transformer — that’s the “T” in “ChatGPT” — which only encodes the probable correlations between words or other data points.
“It just predicts the most likely next word, one word at a time,” Fisher said. “You can’t have language generation on the industrial scale without having to take this architectural shortcut, but this is where it comes back and bites you.”
These issues should be fixable, Fisher said. “There are many good ideas of how to try to solve them — but that’ll take probably a few years.”
Shanahan, likewise, was guardedly optimistic, if only because of the financial incentives to get it right.
“These flaws are so serious that the big companies are going to spend a lot of money and a lot of time trying to fix things,” he said. “How well those fixes will work remains the unanswered question.”
The Combined Air Operations Center (CAOC) at Al Udeid Air Base, Qatar, (US Air Force photo by Staff Sgt. Alexander W. Riedel)
If It Works…
If generative AI can be made reliable — and that’s a significant if — the applications for the Pentagon, as for the private sector, are extensive, Groen and Shanahan agreed.
“Probably the places that make the most sense in the near term… are those back-office business [functions] from personnel management to budgeting to logistics,” Shanahan said. But in the longer term, “there is an imperative to use them to help deal with … the entire intelligence cycle.”
So while the hallucinations have to be fixed, Shanahan said, “what I’m more worried about, in the immediate term, is just the fact that I don’t see a whole lot of action in the government about using these systems.” (He did note that the Pentagon Chief Digital & AI Office, which absorbed the JAIC, has announced an upcoming conference on generative AI: “That’s good.”) Instead of waiting for others to perfect the algorithms, he said, “I’d rather get them in the hands of users, put some boundaries in place about how they can be used… and then focus really heavy on the education, the training, and the feedback” from users on how they can be improved.
Groen was likewise skeptical about the near term. “We don’t want to be in the cutting edge here in generative AI,” he said, saying near-term implementation should focus on tech “that we know and that we trust and that we understand.” But he was even more enthused about the long term than Shanahan.
“What’s different with ChatGPT, suddenly you have this interface, [where] using the English language, you can ask it questions,” Groen said. “It democratizes AI [for] large communities of people.”
That’s transformative for the vast majority of users who lack the technical training to translate their queries into specialized search terms. What’s more, because the AI can suck up written information about any subject, it can make connections across different disciplines in a way a more specialized AI cannot.
“What makes generative AI special is it can understand multiple narrow spaces and start to make integrative conclusions across those,” Groen said. “It’s… able to bridge.”
But first, Groen said, you want the strong foundations to build the bridge across, like pilings in the river. So while experimenting with ChatGPT and co., Groen said, he would put the near-term emphasis on training and perfecting traditional, specialist AIs, then layer generative AI over top of them when it’s ready.
“There are so many places today in the department where it’s screaming for narrow AI solutions to logistics inventories or distribution optimizers, or threat identification … like office automation, not like killer robots,” Groen said. “Getting all these narrow AIs in place and building these data environments actually really prepares us for — at some point — integrating generative AI.”
Quantum sensors can modernize the U.S. electrical grid through on-site technology with the long-term aim of supporting climate resilience.
Quantum sensing research and development is one of the Department of Energy’s priorities, according to an agency official, as the devices show promise for electrical grid efficiency and sustainability efforts.
Rima Oueid, a senior commercialization executive in Energy’s Office of Technology Transitions, discussed with Nextgov the agency’s larger goals surrounding implementing quantum information science and technology, or QIST, in existing infrastructure, emphasizing the myriad benefits of quantum sensor application.
“We are looking at quantum sensors that can be utilized for monitoring the grid, for anomaly detection and making the grid more resilient to climate change,” Oueid said.
She specified that quantum sensors––a quantum information technology currently used in Magnetic Resonance Imaging machines and atomic clocks––can report more precise data upon which critical infrastructure relies.
Critical infrastructure, including the U.S. electrical grid, uses global positioning technologies to send positioning, navigation and timing information in order to operate. Oueid said that quantum sensors have the power to report PNT data directly from the electrical grid rather than from satellite-based GPS sources.
“What we’re realizing now is that there are different types of quantum sensors that we can also now use for timing that could be deployed directly on the grid…as opposed to depending on GPS,” she said. “We’re hoping that we get to a place where we don’t need the satellite communication, that we would have these quantum sensors distributed.”
If quantum sensors supplant GPS PNT information, they could enable grid operation in GPS-denied areas. Oueid said this will be critical as the electrical grid takes on more energy generation and storage systems, but that infrastructure security stands to benefit as well, since satellite interference wouldn’t be a concern.
“If we have quantum sensors instead distributed…where they need to be, then it’s harder to disrupt the system,” she said.
Eventually, the goal is to fully incorporate distributed energy resources—namely wind, solar and electric vehicles—into the country’s central electrical grid to act as energy assets. Oueid conceded that certain market forces will need to align with Energy’s efforts to spur widespread electric vehicle adoption and integration, but quantum sensors can use PNT data to help signal to vehicles when renewables are readily available on the grid to charge their batteries.
Beyond optimizing the nation’s electrical grid for better use of renewable energy, quantum sensors are currently being studied for their potential to track climate change with more precise algorithms, as well as to conduct subsurface exploration to find potential underground carbon repositories, in an effort to reduce fracking activity.
“The possibilities are amazing,” Oueid said. “There’s a lot of different use cases that could help us make a system smarter and more efficient to help reduce climate change concerns.”
Coalition for Health AI Details Framework Focused on Care Impact, Ethics, and Equity of Health AI Tools
Bedford, Mass., April 4, 2023—The Coalition for Health AI (CHAI) released its highly anticipated “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” (Blueprint). The Blueprint addresses the quickly evolving landscape of health AI tools by outlining specific recommendations to increase trustworthiness within the healthcare community, ensure high-quality care, and meet healthcare needs. The 24-page guide reflects a unified effort among subject matter experts from leading academic medical centers and the healthcare, technology, and other industry sectors, who collaborated under the observation of several federal agencies over the past year.
“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” said Dr. Brian Anderson, a co-founder of the coalition and chief digital health physician at MITRE. “The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”
“The successful implementation and impact of AI technology in healthcare hinges on our commitment to responsible development and deployment,” said Eric Horvitz, chief scientific officer at Microsoft and CHAI co-founder. “I am truly inspired by the incredible dedication, intelligence, and teamwork that led to the creation of the Blueprint.”
CHAI, NATIONAL ACADEMY OF MEDICINE COLLABORATE ON CODE OF CONDUCT
The National Academy of Medicine’s (NAM’s) AI Code of Conduct effort is designed to align health, healthcare, and biomedical science around a broadly adopted code of conduct in AI to ensure responsible AI that assures equitable benefit for all. The NAM effort will inform CHAI’s future efforts, which will provide robust best-practice technical guidance, including assurance labs and implementation guides to enable clinical systems to apply the Code of Conduct.
CHAI’s technical focus will help to inform and clarify areas that will need to be addressed in NAM’s Code of Conduct. The work and final deliverables of these projects are mutually reinforcing and coordinated to establish a code of conduct and technical framework for health AI assurance.
“We have a rare window of opportunity in this early phase of AI development and deployment to act in harmony—honoring, reinforcing, and aligning our efforts nationwide to assure responsible AI. The challenge is so formidable and the potential so unprecedented. Nothing less will do,” said Laura L. Adams, senior advisor, National Academy of Medicine.
BLUEPRINT FOR AI BILL OF RIGHTS
The Blueprint builds upon the White House OSTP “Blueprint for an AI Bill of Rights” and the “AI Risk Management Framework (AI RMF 1.0)” from the U.S. Department of Commerce’s National Institute of Standards and Technology. OSTP acts as a federal observer to CHAI, as do the Agency for Healthcare Research and Quality, Centers for Medicare & Medicaid Services, U.S. Food and Drug Administration, Office of the National Coordinator for Health Information Technology, and National Institutes of Health.
“The needs of all patients must be foremost in this effort. In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The Blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry,” said Dr. John Halamka, president, Mayo Clinic Platform, and a co-founder of the coalition.
“The CHAI Blueprint is the result of the kind of collaborative approach that’s essential for achieving diverse perspectives on issues affecting AI in medicine,” said Michael Pencina, Ph.D., a co-founder of the coalition and director of Duke AI Health. “And given our rapidly evolving understanding of the significant impacts of AI on health, health delivery, and equity, the fact that the Blueprint is designed to be a flexible ‘living document’ will enable us to maintain a continuous focus on these critically important dimensions of algorithmic healthcare.”
The Coalition for Health AI is a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. Coalition members have come together to harmonize standards and reporting for health AI, and to educate end-users on how to evaluate these technologies to drive their adoption. Its mission is to provide guidelines regarding an ever-evolving landscape of health AI tools to ensure high quality care, increase trustworthiness amongst users, and meet healthcare needs. Learn more at coalitionforhealthai.org.
Russian intelligence services, together with a Moscow-based IT company, are planning worldwide hacking operations that will also enable attacks on critical infrastructure facilities.
The release of thousands of pages of confidential documents has exposed Russian military and intelligence agencies’ grand plans for using their cyberwar capabilities in disinformation campaigns, hacking operations, critical infrastructure disruption, and control of the Internet.
The papers were leaked from the Russian contractor NTC Vulkan and show how Russian intelligence agencies use private companies to plan and execute global hacking operations. They include project plans, software descriptions, instructions, internal emails, and transfer documents from the company.
The takeover of railroad networks and power plants is also part of a training seminar held by Vulkan to train hackers.
The leak also exposes the company’s close links to the FSB, Russia’s domestic spy agency, the GOU and GRU, the respective operational and intelligence divisions of the armed forces, and the SVR, Russia’s foreign intelligence organization.
The documents, which were leaked by an unnamed source to a German reporter working for the Süddeutsche Zeitung at the start of Russia’s invasion of Ukraine, have since been analyzed by global media outlets including The Washington Post and German media outlets Paper Trail Media and Der Spiegel.
According to the Spiegel report (in German), Vulkan has developed tools that allow state hackers to efficiently prepare cyberattacks, filter Internet traffic, and spread propaganda and disinformation on a massive scale.
The Spiegel report notes that analysts from Google reportedly discovered a connection between Vulkan and the hacker group Cozy Bear years ago; the group has successfully penetrated systems of the US Department of Defense in the past.
Amezit, Skan-V Programs Revealed
One offensive cyber program described in the documents is internally codenamed “Amezit.”
The wide-ranging platform is designed to enable attacks on critical infrastructure facilities in addition to total information control over specific areas.
The program’s goals include using special software to derail trains or paralyze airport computers, but it was not clear from the materials whether the program is currently being used against Ukraine.
Another project, called “Skan-V,” is supposed to automate cyberattacks and make them much easier to plan.
Whether and where the programs were used cannot be traced, but the documents prove that the programs were ordered, tested, and paid for.
“People should know the dangers this poses,” shared the anonymous source who leaked the docs to the media. The Russian invasion of Ukraine had motivated the source to make the documents public.
As the Sandworm Turns
A trail also leads to the state hacker group Sandworm, one of the most dangerous advanced persistent threats (APTs) in the world, responsible for some of the most serious cyberattacks of recent years. For instance, the threat actor has been targeting the Ukrainian capital since as far back as December 2016 when it used the malware tool Industroyer to cause a temporary power outage in Kyiv.
Until now, it was not known that the group used tools from private companies.
Sandworm has previously been linked to GRU.
Since the start of the war, at least five Russian state-sponsored or cybercriminal groups — including Gamaredon, Sandworm, and Fancy Bear — have targeted Ukrainian government agencies and private companies in dozens of operations that aimed to disrupt services or steal sensitive information.
The recommendations focus on executable actions for the Office of Management and Budget, agencies, Congress and industry
MITRE made 10 recommendations this week to help the federal government modernize legacy systems, citing that “significant numbers of critical federal information technology systems that provide vital support to agencies’ missions are operating with known security vulnerabilities and unsupported hardware and software.”
According to the March 29 recommendations, the government must move away from legacy systems to fully leverage technology and fulfill critical missions. The recommendations focus on execution and were directed at the Office of Management and Budget, agencies, Congress and industry.
For example, MITRE suggests OMB provide guidance on legacy systems inventories and IT modernization plans in addition to progress transparency mechanisms. Congress should introduce legislation to reduce legacy IT and should make adjustments to the Federal Information Technology Acquisition Reform Act—or FITARA—scorecard by adding an IT Modernization Planning and Delivering category.
Meanwhile, agencies should develop inventories, modernization plans and budgets to support this, in addition to progress reports that detail acquisition and legacy system retirement. Lastly, industry should partner with the government to further facilitate these processes.
“If you look at our recommendations, there’s a premise here that it starts with putting in place some really good policies both in the executive branch and on the legislative side of the house,” said Dave Powner, executive director of MITRE’s Center for Data-Driven Policy and one of the authors of the recommendations. “So you can think of it as starting with OMB really requiring comprehensive modernization plans that focus on decommissioning some of these old systems and [then] that is backed with sound legislation. I think if you start with those policies in place from both sides—executive and legislative branches—that would be a good start to ensure that we’re all on the same page and marching to the same beat here.”
However, co-author Dr. Nitin Naik, a technical fellow at MITRE, noted that while policies and funding are necessary, there are other critical components to this process.
“You want to make sure that you have good implementation plans, and you need to have industry partnership because we don’t want the industry to continue to profess that the old technology can continue to meet the needs,” Naik said. “We want them to be an active participant to say, ‘Okay, let’s try to see how we can take this and bring it to the new industry standard.’”
Noting there have been several other efforts to modernize legacy systems—from previous bills to cyber budgets and the National Cybersecurity Strategy—Powner explained, “our recommendations really get at the execution of those plans.”
“And how do we execute those plans—having transparency mechanisms that you report progress, having industry as a key partner, thinking differently about the digital services team. And could agencies operate within their organizations, but also go to OMB or [a] central organization out of the White House for help?” Powner said. “And this is a collective game here—it’s the policymakers, the agencies and industry, all kind of collectively working together.”
For example, one recommendation suggests that agencies partner with industry, labs and federally-funded research and development centers to help with innovation and take advantage of new technologies, according to Naik.
Another recommendation for agencies focuses on using technology like artificial intelligence and automation to improve the modernization process.
And while many agencies have modernization plans in place, there can be several challenges.
“The question is, are [agencies’] IT modernization plans really getting at these mission critical legacy applications?” Powner said.
“All the systems we are talking about are delivering services 24/7, 365, so, you cannot have a stoppage of work in any way that will bring tremendous problems,” Naik said. “The question is—what is a good strategy to sort of start reengineering this, testing it out on the side, and then slowly one-by-one, migrating things over while it is an interconnected system.”
The Food and Drug Administration announced March 29 that it will begin to “refuse to accept” medical devices and related systems over cybersecurity reasons beginning Oct. 1. All new device submissions must include detailed cybersecurity plans beginning March 29.
As such, device manufacturers will need to submit plans to monitor, identify and address in a “reasonable timeframe” any determined post-market cybersecurity vulnerabilities and exploits, including coordinated vulnerability disclosures and plans.
Developers must now design and maintain procedures able to show, with reasonable assurance, “that the device and related systems are cybersecure” and create post-market updates and patches to the device and connected systems that address “on a reasonably justified regular cycle, known unacceptable vulnerabilities,” according to the guidance.
If discovered out-of-cycle, the manufacturer must also make public “critical vulnerabilities that could cause uncontrolled risks,” as soon as possible.
Submissions will also need to include a software bill of materials, which must contain all commercial, open-source, and off-the-shelf software components, while complying with other FDA requirements “to demonstrate reasonable assurance that the device and related systems are cybersecure.”
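In practice, a software bill of materials is a structured inventory of every software component in the device. The sketch below builds a minimal, entirely hypothetical SBOM for an invented device; the field names loosely follow common SBOM conventions (e.g., CycloneDX-style components), while the FDA guidance specifies what must be listed, not this exact schema.

```python
import json

# Minimal, illustrative SBOM for a hypothetical connected medical device.
# Component names and versions are invented for illustration only.
sbom = {
    "device": "ExampleCo Infusion Pump X1",
    "components": [
        {"name": "openssl", "version": "3.0.8", "type": "open-source"},
        {"name": "FreeRTOS", "version": "10.5.1", "type": "open-source"},
        {"name": "VendorBLEStack", "version": "2.4", "type": "commercial"},
    ],
}
print(json.dumps(sbom, indent=2))
```

Listing components this way is what lets a manufacturer (or the FDA) check a newly disclosed vulnerability — say, in a given openssl release — against every device that ships it.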
These plans should come as no surprise to device manufacturers, as they were included in the new authorities granted by the Consolidated Appropriations Act of 2023, which was signed into law on Dec. 29.
The law created “long desired FDA authorities” that were left out of previous resolutions and includes requirements for premarket submissions proposed by the Protecting and Transforming Cyber Health Care (PATCH) Act.
The December inclusion yielded overwhelming support from healthcare stakeholders, who’ve long requested federal support to curtail systemic challenges with securing medical devices. Healthcare delivery organizations have long borne the onus of securing the vast, complex device ecosystem, and even the most equipped health systems do not fully meet the task.
The December Omnibus included statements that required the FDA to take the actions announced March 29 within 90 days of the law’s passage. The final guidance titled “Cybersecurity in Medical Devices: Refuse to Accept Policy for Cyber Devices and Related Systems,” includes all requirements for new submissions.
The new cybersecurity requirements don’t apply to applications or submissions submitted to the FDA before March 29. And the “refuse to accept” decisions for premarket submissions based solely on cyber reasons will not go into effect until Oct. 1.
Rather, the FDA says it intends to “work collaboratively with sponsors of such premarket submissions as part of the interactive and/or deficiency review process.” The agency expects that cyber device sponsors “will have had sufficient time to prepare premarket submissions” to include the cyber requirements contained in the finalized guidance.
“And FDA may refuse to accept premarket submissions that do not,” according to its notice. A medical device is considered a “cyber device” if it includes “software validated, installed, or authorized by the sponsor,” can connect to the internet, and contains any tech characteristics validated, installed, or authorized that could be vulnerable to cybersecurity threats.
The guidance did not go through the typical public comment period, as “prior public participation is not feasible or appropriate.” Officials added that “although this policy is being implemented immediately without prior comment, FDA will consider all comments received and revise the guidance document as appropriate.”
The transformative changes brought by deep learning and artificial intelligence are accompanied by immense costs. For example, OpenAI’s ChatGPT algorithm costs at least $100,000 every day to operate. This could be reduced with accelerators, or computer hardware designed to efficiently perform the specific operations of deep learning. However, such a device is only viable if it can be integrated with mainstream silicon-based computing hardware on the material level.
This requirement had stalled one highly promising deep learning accelerator—arrays of electrochemical random-access memory, or ECRAM—until a research team at the University of Illinois Urbana-Champaign achieved the first material-level integration of ECRAMs onto silicon transistors. The researchers, led by graduate student Jinsong Cui and professor Qing Cao of the Department of Materials Science & Engineering, recently reported in Nature Electronics an ECRAM device designed and fabricated with materials that can be deposited directly onto silicon during fabrication, realizing the first practical ECRAM-based deep learning accelerator.
“Other ECRAM devices have been made with the many difficult-to-obtain properties needed for deep learning accelerators, but ours is the first to achieve all these properties and be integrated with silicon without compatibility issues,” Cao said. “This was the last major barrier to the technology’s widespread use.”
ECRAM is a memory cell, or a device that stores data and uses it for calculations in the same physical location. This non-standard computing architecture eliminates the energy cost of shuttling data between the memory and the processor, allowing data-intensive operations to be performed very efficiently.
ECRAM encodes information by shuffling mobile ions between a gate and a channel. Electrical pulses applied to a gate terminal either inject ions into or draw ions from a channel, and the resulting change in the channel’s electrical conductivity stores information. It is then read by measuring the electric current that flows across the channel. An electrolyte between the gate and the channel prevents unwanted ion flow, allowing ECRAM to retain data as a non-volatile memory.
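The pulse-programmed behavior described above can be captured in a toy model: gate pulses shift ions into or out of the channel in small steps, and a read voltage converts the stored conductance into a measurable current. This is a minimal sketch for intuition only; all parameter values are illustrative assumptions, not figures from the paper.

```python
# Toy model of an ECRAM cell. Gate pulses nudge the channel conductance
# up or down in near-symmetric steps; reading measures the channel current
# at a small voltage. Constants are illustrative, not measured device values.

class ECRAMCell:
    def __init__(self, g_min=1e-6, g_max=1e-5, n_states=100):
        self.g_min, self.g_max = g_min, g_max       # conductance range (siemens)
        self.step = (g_max - g_min) / n_states      # conductance change per pulse
        self.g = g_min                              # current channel conductance

    def pulse(self, n=1):
        """Apply n gate pulses: positive n injects ions, negative draws them out."""
        self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))

    def read(self, v_read=0.1):
        """Read the state by measuring channel current at a small read voltage."""
        return self.g * v_read


cell = ECRAMCell()
cell.pulse(50)          # potentiate halfway through the conductance range
print(cell.read())      # current is proportional to the stored conductance
```

Because the conductance change per pulse is the same in both directions (the symmetry the Illinois team achieved by using tungsten oxide for both gate and channel), simple pulse counting is enough to program a target state reliably.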
The research team selected materials compatible with silicon microfabrication techniques: tungsten oxide for the gate and channel, zirconium oxide for the electrolyte, and protons as the mobile ions. This allowed the devices to be integrated onto and controlled by standard microelectronics. Other ECRAM devices draw inspiration from neurological processes or even rechargeable battery technology and use organic substances or lithium ions, both of which are incompatible with silicon microfabrication.
In addition, the Cao group device has numerous other features that make it ideal for deep learning accelerators. “While silicon integration is critical, an ideal memory cell must achieve a whole slew of properties,” Cao said. “The materials we selected give rise to many other desirable features.”
Since the same material was used for the gate and channel terminals, injecting ions into and drawing ions from the channel are symmetric operations, simplifying the control scheme and significantly enhancing reliability. The channel reliably held ions for hours at a time, which is sufficient for training most deep neural networks. Since the ions were protons, the smallest ion, the devices switched quite rapidly. The researchers found that their devices lasted for over 100 million read-write cycles and were vastly more efficient than standard memory technology. Finally, since the materials are compatible with microfabrication techniques, the devices could be shrunk to the micro- and nanoscales, allowing for high density and computing power.
The researchers demonstrated their device by fabricating arrays of ECRAMs on silicon microchips to perform matrix-vector multiplication, a mathematical operation crucial to deep learning. Matrix entries (neural network weights) were stored in the ECRAMs, and the array performed the multiplication on the vector inputs, represented as applied voltages, by using the stored weights to change the resulting currents. This operation as well as the weight update was performed with a high level of parallelism.
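The operation the crossbar performs physically corresponds to an ordinary matrix-vector product: weights live in the cell conductances, inputs arrive as row voltages, and each output column’s current sums the contributions in parallel via Ohm’s and Kirchhoff’s laws. The sketch below shows the equivalent digital computation; the matrix and voltages are illustrative values, not data from the experiment.

```python
import numpy as np

# Digital equivalent of an analog ECRAM crossbar multiply:
# G holds the stored conductances (the neural-network weights),
# v holds the input vector applied as voltages on the rows,
# and each column's output current is the dot product i = G^T v.

G = np.array([[1.0, 2.0],       # conductance (weight) matrix, 3 rows x 2 columns
              [3.0, 4.0],
              [5.0, 6.0]])
v = np.array([0.1, 0.2, 0.3])   # input voltages, one per row

i = G.T @ v                     # column currents, computed in one parallel step
print(i)                        # → [2.2 2.8]
```

In the analog array this entire product is read out at once, which is why the press release emphasizes the high level of parallelism: the cost does not grow with the number of multiply-accumulate operations the way it does on a sequential processor.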
“Our ECRAM devices will be most useful for AI edge-computing applications sensitive to chip size and energy consumption,” Cao said. “That’s where this type of device has the most significant benefits compared to what is possible with silicon-based accelerators.”
The researchers are patenting the new device, and they are working with semiconductor industry partners to bring this new technology to market. According to Cao, a prime application of this technology is in autonomous vehicles, which must rapidly learn their surrounding environment and make decisions with limited computational resources. Cao is collaborating with Illinois electrical & computer engineering faculty to integrate their ECRAMs with foundry-fabricated silicon chips and Illinois computer science faculty to develop software and algorithms taking advantage of ECRAM’s unique capabilities.
More information: Jinsong Cui et al, CMOS-compatible electrochemical synaptic transistor arrays for deep learning accelerators, Nature Electronics (2023). DOI: 10.1038/s41928-023-00939-7
Quick on your feet: think for a minute of your favorite athlete. Unless this person is a bodybuilder, strength and sheer power are only part of the story. For most sportsmen and women, real success on the playing field comes with a certain hard-to-teach nimbleness—the ability to quickly take in new information and adjust strategy to achieve a specific result. Part of the appeal of sports is the excitement that comes with constant change, and—discounting the vicissitudes of luck—the result comes down to how athletes apply their abilities in response.
Change is also a constant in business (and, yes, life). Agile, in business, is a way of working that seeks to go with the flow of inevitable change rather than work against it. The Agile Manifesto, developed in 2001 as a way of optimizing software development, prioritizes individuals over processes, working prototypes over thorough documentation, customer collaboration over closed-door meetings, and swift response to change over following a set plan. In the years since its inception, agile has conferred competitive advantage to the organizations that have applied it, in and out of the IT department.
As our business, social, economic, and political environments become increasingly volatile, the only way to meet the challenges of rapidly changing times is to change with them. Read on to learn more about agile and how to adopt an agile mindset.
Let’s go back to sports for a minute. Maybe you’re a great free throw shooter in basketball. You make nine out of ten shots when you’re by yourself in your driveway, shooting around for fun. But when you meet friends for a pickup game at a nearby court, your shot is off. People keep jumping in front of you when you’re trying to line up, and maybe the sun is shining in your eyes from an unfavorable angle. You make maybe a couple of shots the whole game. The following week, disappointed by your performance on the court, you decide to make a change. Rather than doubling your practice time shooting in your driveway, you mix up your routine, practicing your shot at a couple of different courts at different times of day. Maybe you also ask a friend to run some defense drills with you so you can get used to shooting under pressure. Maybe you add a layup to your shot practice. This is a shift to an agile approach—and increases the likelihood that you’ll perform better the next time you play pickup.
Traditional organizations are—much like you shooting free throws in your driveway—optimized to operate in static, siloed situations of structural hierarchy. Planning is linear, and execution is controlled. The organization’s skeletal structure is strong but frequently rigid and slow moving.
Agile organizations are different. They’re designed for rapid change. An agile organization is a technology-enabled network of teams with a people-centered culture that operates in rapid-learning and fast-decision cycles. Agility adds speed and adaptability to stability, creating a competitive advantage in uncertain conditions.
What is kanban and what is scrum?
Kanban and scrum are two organizational frameworks that fall under the umbrella of agile.
Kanban originated in the manufacturing plants of postwar Japan. Kanban, which is Japanese for “signboard,” was first developed to support “just in time” delivery—that is, meeting demand rather than creating a surplus of products before they’re needed. With kanban, project managers create lanes of work required to deliver a product. A basic kanban board has vertical lanes for process stages—for example, “to do,” “doing,” “done,” and “deployed”—and a product or assignment moves horizontally through the board.
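The board structure just described is simple enough to sketch as a data structure: ordered lanes, with work items pulled one lane to the right as capacity frees up. The lane names and work items below are illustrative examples, not part of any standard.

```python
from collections import OrderedDict

# A minimal kanban board: ordered vertical lanes, with items moving
# horizontally through them. Lanes and items here are illustrative.
board = OrderedDict((lane, []) for lane in ["to do", "doing", "done", "deployed"])
board["to do"] = ["design login page", "fix checkout bug"]


def advance(board, item):
    """Pull an item one lane to the right, mimicking 'just in time' flow."""
    lanes = list(board)
    for idx, lane in enumerate(lanes[:-1]):
        if item in board[lane]:
            board[lane].remove(item)
            board[lanes[idx + 1]].append(item)
            return
    raise ValueError(f"{item!r} not found, or already in the final lane")


advance(board, "fix checkout bug")   # "to do" -> "doing"
print(board["doing"])                # → ['fix checkout bug']
```

Real kanban tools add work-in-progress limits per lane, which is what enforces the “meet demand rather than build surplus” discipline the method was designed around.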
The idea of scrum was invented by two of the original developers of agile methodology. A team of five to nine people is led by a scrum leader and product owner. The team sets its own commitments and engages in ceremonies like daily stand-up meetings and sprint planning, uniting in a shared goal.
Scrum, kanban, and other agile product management frameworks are not set in stone. They’re designed to be adapted and adjusted to fit the requirements of the project. One critical component of agile is the kaizen philosophy—a pillar of the Toyota production model—which is one of continuous improvement. With agile methodologies, the point is to learn from each iteration and adjust the process based on what’s learned.
A team or organization of any size or industry can be agile. But regardless of the details, all agile groups have five things in common.
North Star embodied across the organization
Agile organizations set a shared purpose and vision for the organization that helps people feel personally invested—that’s a North Star. This helps align teams with sometimes wildly varied remits and processes.
Network of empowered teams
Agile organizations typically replace top-down structures with flexible, scalable networks of teams. Agile networks should operate with high standards of alignment, accountability, expertise, transparency, and collaboration. Regardless of the configuration of the network, team members should feel a sense of ownership over their work and see a clear connection between their work and the business’s North Star.
Rapid decision and learning cycles
Agile teams work in short cycles—or sprints—then learn from them by collecting feedback from users to apply to a future sprint. This quick-cycle model accelerates the pace throughout the organization, prioritizing quarterly cycles and dynamic management systems—such as objectives and key results (OKRs)—over annual planning.
Dynamic people model that ignites passion
An agile culture puts people at the center, seeking to create value for a much wider range of stakeholders, including employees, investors, partners, and communities. Making change personally meaningful for employees can build transformational momentum.
Next-generation enabling technology
Radical rethinking of an organizational model requires a fresh look at the technologies that enable processes. These include, for example, real-time communication and work management tools that support continually evolving operating processes.
Agility looks a little different for every organization. But the advantages in stability and dynamism that the above trademarks confer are the same—and are critical to succeeding in today’s rapidly changing competitive environment.
How should an organization implement an agile transformation?
According to a McKinsey survey on agile transformations, the best way to go about an agile transformation is for an entire organization to transition to agile, rather than just individual departments or teams. This is ambitious but possible: New Zealand–based digital-services and telecommunications company Spark NZ managed to flip the entire organization to an agile operating model in less than a year.
Any enterprise-wide agile transformation needs to be both comprehensive and iterative: comprehensive in the sense that it addresses strategy, structure, people, process, and technology and iterative in its acceptance that things will change along the way.
The first phase of an agile transformation involves designing and piloting the new agile operating model. This usually starts with building the top team’s understanding and aspirations, creating a blueprint for how agile will add value, and implementing pilots.
The second phase is about improving the process and creating more agile cells throughout the organization. Here, a significant amount of time is required from key leaders, as well as willingness to role model new mindsets. The best way to accomplish this phase is to recognize that not everything can be planned for, and implementation requires continuous measurement and adjustment.
Culture is a critical part of any agile transformation. Agile is a mindset; it’s not something an organization does—it’s something an organization is. Getting this transition right is key to overall success.
Agile is a mindset; it’s not something an organization does—it’s something an organization is.
Who should lead an agile transformation?
The single most important trait for the leader of an agile organization is an agile mindset, or inner agility. Simply put, inner agility is a comfortable relationship with change and uncertainty. And research has shown that a leader’s mindset, and how that mindset shapes organizational culture, can make or break a successful agile transformation.
Leaders must evolve new mindsets and behaviors.
For most of us, the natural impulse is to react. Research shows that people spend most of their time in a reactive mindset—reacting to challenges, circumstances, or other people. Because of this natural tendency, traditional organizations were designed to run on the reactive.
Agile organizations, by contrast, run on creative mindsets built on curiosity. A culture of innovation, collaboration, and value creation helps nurture the ability to flexibly respond to unexpected change. Creative mindsets also help members of an organization, at all levels, tap into their core passions and purposes.
Roche, a legacy biotech company, recognized the importance of a mindset shift at the leadership level. When the organization decided to build an agile culture, it invited more than 1,000 leaders to a four-day immersion program designed to enable leaders to shift from a reactive mindset to a creative one. Today, agility has been widely deployed within Roche, engaging tens of thousands of people in applying agile mindsets.
Leaders must help teams work in new and more effective ways.
Agility spells change for both leaders and their teams. Leaders need to give more autonomy and flexibility to their teams and trust them to do the right thing. For their part, teams should embrace a design-thinking mentality and build toward working more efficiently, assuming more responsibility for the outcomes of their projects and being more accountable to customers.
Leaders must cocreate an agile organizational purpose, design, and culture.
A critical organization-level skill for leaders is the ability to distill and communicate a clear, shared, and compelling purpose, or North Star. Next, leaders need to design the strategy and operating model of the organization based on agile principles and practices. Finally, leaders need to shape a new culture across the organization, based on creative mindsets of discovery, partnership, and abundance.
Inner agility can feel counterintuitive. Our impulse as humans is to simplify and solve problems by applying our expertise. But complex problems require complex solutions, and sometimes those solutions are beyond our expertise. Recognizing that our solutions aren’t working can feel like failure—but it doesn’t have to. To train themselves to address problems in a more agile way, leaders need to learn to think beyond their normal ways of solving problems.
Pause to move faster
This can be tough for leaders used to moving quickly. But pausing in the middle of the fray can create space for clearer judgment, original thinking, and purposeful action. This can take many forms: one CEO McKinsey has worked with takes a ten-minute walk outside the office without his cell phone. Others do breathing exercises between meetings. These practices can help leaders interrupt habits to create space for something different.
Embrace your ignorance
Being a know-it-all no longer works. The world is changing so fast that new ideas can come from anywhere. Competitors you’ve never heard of can suddenly reshape your industry. As change accelerates, listening and thinking from a place of not knowing is crucial to encouraging the discovery of original, surprising, breakthrough ideas.
Radically reframe the questions
We too frequently interrogate our ideas, asking ourselves questions we already know the answers to—and worse, questions whose answers confirm what we already believe. Instead, seek to ask truly challenging, open-ended questions. Those types of questions allow your employees and stakeholders to creatively discuss and describe what they’re seeing, and potentially unblock existing mental frameworks.
Set direction, not destination
In the increasing complexity of our era, solutions are rarely straightforward. Instead of setting a path from one point to another, share a purposeful vision with your team. Then join your team in heading toward a general goal, and in exploring and experimenting together to reach common goals.
Test your solutions—and yourself
Ideas may not work out as planned. But quick, cheap failures allow you to see what works and what doesn’t—and avoid major, costly disasters.
In times of stress, we often feel ourselves challenged. Rather than falling back on old habits, inner agility enables us to embrace complexity and use it to grow stronger.
Deliberate calm is a mindset that helps leaders keep a cool head during a crisis and steer their ships through a storm. It’s not something that comes naturally: in times of uncertainty, the human brain is wired to react rather than stay calm. The ability to step back and choose actions suited to a given situation is a skill that must be cultivated. In their 2022 book Deliberate Calm, McKinsey veterans Jacqueline Brassey, Aaron De Smet, and Michiel Kruyt describe their personal self-mastery practices to offer lessons in effective leadership through crises.
One important lesson? Not all crises are created equal. Inspired by the thinking of Harvard Business School professor Herman “Dutch” Leonard, De Smet differentiates between routine emergencies and crises of uncertainty. Routine emergencies can be dealt with using past experiences and training. But crises of uncertainty are different. In these moments, where you don’t know how deep the rabbit hole goes, you can’t fall back on what you know. “If you are in an uncertain situation,” says De Smet, “the most important thing you can do is calm down. Take a breath. Take stock. ‘Is the thing I’m about to do the right thing to do?’ And in many cases, the answer is no. If you were in a truly uncertain environment, if you’re in new territory, the thing you would normally do might not be the right thing.”
Establishing an agile transformation office (ATO) can help improve the odds that an agile transformation will be successful. Embedded within an existing organizational structure, an ATO shapes and manages the transformation, brings the organization along, and—crucially—helps it achieve lasting cultural change.
Agree on the ATO’s purpose and mandate
An ATO needs a purpose just as an agile organization needs a North Star. This step links an ATO specifically to the “why” of the transformation. An ATO’s mandate can include driving the transformation strategy, building capabilities, championing change, coaching senior leaders, managing interdependencies, and creating and refining best practices.
Define the ATO’s place within the organization
While an ATO’s reporting lines will depend on the organization, usually the leader of successful ATOs reports to the CEO or one of the CEO’s direct reports. This ensures tight alignment and support from top leadership.
Determine the ATO’s roles and responsibilities
Regardless of an ATO’s size or mandate, the following core capabilities should be managed by a strong transformation leader:
Execution leaders own the transformation road map, assessing and adjusting it on an ongoing basis.
Methodology owners gather lessons from the transformation and refine and evolve agile practices and behaviors.
Agile coaches guide teams through their transformations, helping to instill an agile culture and mindset.
Change management and communications experts maintain lines of communication through periods of change.
As agile principles become the norm across industries, an ATO can help usher in an agile transformation regardless of the design choices an organization makes in setting it up.
Can remote teams practice agile?
Agile teams typically rely on the camaraderie, community, and trust made possible by co-location. The remote work necessitated by the COVID-19 crisis has put this idea—along with many other assumptions about work and office culture—to the test.
While the shift was sudden, talented agile teams working through the crisis proved that productivity can be maintained with the right technology in place. Here are some targeted actions agile leaders can take to recalibrate their processes and sustain an agile culture with remote teams.
Revisit team norms
Tools to ease teams into remote work abound—these include virtual whiteboards, instant chat, and videoconferencing. But they still represent a change from the in-person tools agile teams usually rely on for ceremonies. Team members need to help one another quickly get up to speed on how best to shift to virtual. Teams also need to make extra effort to capture the collective view, a special challenge when working remotely.
Cultivate bonding and morale
In the absence of in-person bonding activities, like lunches or spontaneous coffees, team members can bond virtually, such as by showing each other around their homes on a video call, introducing pets or family members, or sharing music or other personal interests. Being social is important in the virtual space, as well as in person, to nurture team cohesion.
Adapt coaching and development
Team leaders who normally do one-on-one coaching over coffee should transition as seamlessly as possible to remote coaching—coffee and all.
Establish a single source of truth
In person, agile team processes are usually informal. Teams make decisions with everyone in the room, so there’s not usually much need to record these discussions. In the virtual space, however, people might be absent or distracted, so it’s important to document team discussions in a way that can be referenced later.
Adjust to asynchronous collaboration
Messaging boards and chats can be useful in coordinating agile teams working remotely. But they should be used carefully, as they can also lead to team members feeling isolated.
Adapt a leadership approach
When working with remote teams, leaders need to be deliberate in guiding team members and interacting with external customers and stakeholders. Simply put, they need to show—in tone and approach—that everyone is in this together.
Now that it seems likely that remote work is here to stay, it’s all the more important that teams reinforce productivity by purposefully working to sustain an agile culture.
How can public-sector organizations benefit from agile?
The pandemic era and its attendant sociopolitical and economic crises have placed new pressures on public-sector organizations. In these situations of urgency, agile can help public-sector organizations better serve citizens by being more responsive.
Compared with the private sector, where agile has had a clear impact on the overall health of organizations, the public sector doesn’t immediately seem to be a great candidate for agile methodology. Government processes are often slower moving than their private-sector counterparts, and agencies are frequently in competition for funding, which can discourage collaboration. Finally, public-sector organizations are usually hierarchical; agile methodology works best in flat organizational structures.
Government entities, for example, might focus on short-term, results-driven management styles. OKRs and quarterly business reviews (QBRs) are agile concepts that can transform planning and resource allocation for governments. Agencies, for their part, can benefit from increased collaboration and cross-pollination made possible by agile operating models.
The AFCEA Quantico-Potomac chapter will be hosting their 13th Annual USMC IT Day on Thursday, May 11, 2023.
Leadership from DC I, MCSC, and IC4 is confirmed with an exciting lineup of speakers from PEO Digital, PEO MLB, NIWC Atlantic, DON CIO, OSD, and others!
This will be an in-person event with a luncheon, keynote speakers, panels, and a mixer.
This event will be held at the Cyber Bytes Foundation’s Quantico Cyber Hub, located at 1010 Corporate Dr., Stafford.
For more details on how to register, or if you are interested in a Sponsorship, please contact Jeremy Rockett.