A new, freely available AI catalog has classified the potential effects of millions of missense genetic mutations, which could help establish the cause of diseases such as cystic fibrosis, sickle-cell anemia, and cancer.
The AlphaMissense resource from Google DeepMind categorized 89% of all 71 million possible missense variants.
This compares with just 0.1% that had previously been classified by human experts.
The machine-learning algorithm predicted 57% of variants as likely benign and 32% as likely pathogenic, using a threshold that yielded 90% precision on a database of known disease variants.
“AlphaMissense achieves state-of-the-art predictions across a wide range of genetic and experimental benchmarks, all without explicitly training on such data,” Jun Cheng and colleagues at Google DeepMind assert in a blog post accompanying their findings in the journal Science.
Their predictions are freely available to the research community, together with open-sourced model code for AlphaMissense.
Missense genetic mutations arise from a single letter substitution in DNA, resulting in an altered amino acid that can potentially affect the entire function of a protein.
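The sickle-cell variant mentioned above is the textbook example: a single A-to-T change in the HBB gene turns a glutamate codon (GAG) into a valine codon (GTG). A minimal Python sketch (the two-entry codon table is pared down purely for illustration):

```python
# The classic sickle-cell missense mutation: one DNA letter changes,
# one amino acid changes. Only the two codons involved are included
# in this toy lookup table.
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}

def translate(codon):
    return CODON_TABLE[codon]

reference, variant = "GAG", "GTG"  # single-letter substitution (A -> T)
effect = f"{translate(reference)} -> {translate(variant)}"
print(effect)  # Glu -> Val
```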
The average person carries about 9,000 missense variants, most of them benign, and it remains largely a mystery which ones give rise to disease.
In some cases, a disease may result from one or a few missense variants, while complex diseases such as type 2 diabetes may result from a combination of many different types of genetic changes.
AI tools provide an alternative to expensive and laborious experiments, enabling researchers to gain a preview of results for thousands of proteins, allowing them to prioritize resources and fast-track more complex studies into their predicted effects.
The benefits could potentially cover fields ranging from molecular biology to clinical and statistical genetics.
The AlphaMissense resource is adapted from AlphaFold, a previously published breakthrough model that predicted the structures of almost all known proteins from their amino acid sequences.
The newly released variant effect predictor (VEP) algorithm predicts the pathogenicity of missense variants altering individual amino acids of proteins.
To train it, the team fine-tuned AlphaFold on labels that distinguished variants seen in humans and closely related primate populations from variants never seen in humans.
The catalog does not predict changes to protein structure from mutations or other effects on protein stability. Instead, it deploys databases of related protein sequences and the structural context of variants to score the likelihood of a variant being pathogenic on a continuous scale from zero to one.
This enables a threshold for classifying variants as pathogenic or benign to be chosen to match user accuracy requirements.
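Turning a continuous 0-to-1 score into a benign/pathogenic call amounts to picking a cutoff against a labeled reference set. A rough sketch of the idea, scanning for the lowest cutoff that meets a target precision (function names and data are illustrative, not AlphaMissense's actual code):

```python
def choose_threshold(scores, labels, target_precision):
    """Return the lowest cutoff whose precision on known variants
    meets the target (labels: True = pathogenic)."""
    for t in sorted(set(scores)):
        predicted = [s >= t for s in scores]
        tp = sum(p and l for p, l in zip(predicted, labels))
        fp = sum(p and not l for p, l in zip(predicted, labels))
        if tp and tp / (tp + fp) >= target_precision:
            return t  # lowest qualifying cutoff keeps recall as high as possible
    return None

def classify(score, cutoff):
    return "likely pathogenic" if score >= cutoff else "likely benign"
```

A stricter precision target pushes the cutoff higher, trading recall for confidence, which is why letting users choose the threshold is useful.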
Together with EMBL’s European Bioinformatics Institute, the creators are making the information more usable for researchers through the Ensembl Variant Effect Predictor.
In addition to a look-up table of missense mutations, they have shared expanded predictions for all 216 million possible single-amino-acid substitutions across more than 19,000 human proteins.
They have also included the average prediction for each gene, which is similar to measuring a gene’s evolutionary constraint—how essential the gene is for the organism’s survival.
“Our tool outperformed other computational methods when used to classify variants from ClinVar, a public archive of data on the relationship between human variants and disease,” the investigators note.
“Our model was also the most accurate method for predicting results from the lab, which shows it is consistent with different ways of measuring pathogenicity.”
A report from the U.S. Chamber of Commerce underscores the environmental and consumer case for modernizing government digital services.
Work optimization and waste reduction are the two key factors behind the federal government’s need to update its digital services, with a new report from the Chamber of Commerce outlining the financial and operational benefits that come with modernization.
Published by the lobbying group’s Technology Engagement Center, the report, titled “Government Digitization: Transforming Government to Better Serve Americans,” debuted on Monday. It captures the volume of government forms federal agencies rely on to provide government services.
These government processes include drivers’ license and passport applications and renewals, social security card applications, and health record access, among others.
“Increased government digitization doesn’t just mean saved time and money, it also means providing greater accountability and access to underserved communities when it comes to utilizing government services,” U.S. Chamber Technology Engagement Center Vice President Jordan Crenshaw said in prepared remarks.
In total, there are 9,858 unique government forms used to access various services, and 106 billion such forms are processed annually, according to the report.
In all, Americans spend about 10.5 billion hours filling out federal forms each year, consuming an estimated 2.25 million trees’ worth of paper. Collecting and processing common government services using current paper methods costs the federal government over $38.7 billion, based on statistics from June 2022.
The Department of Treasury stood out as having the highest number of unique forms at 1,838. Trailing the agency are the Department of Health and Human Services, the Department of Agriculture and the Department of Commerce.
“Digitization will enable government agencies to cut costs, increase efficiency and reduce waste every day,” the report said.
The Chamber encouraged Congress to conduct oversight examinations to see what paper processes can be digitized. A part of this process would ideally include lawmakers analyzing where to find funding to support IT modernization efforts.
Researchers at the Chamber told Nextgov that the digitization process could be supported by technologies like cloud computing, which enables better data streamlining and protection; blockchain software, which can track supply chain data; and artificial intelligence, which facilitates rapid data-based decision-making.
Experts within the Technology Engagement Center noted that this funding can potentially come from the Technology Modernization Fund, along with other avenues for similar modernization updates at the state level available via the American Rescue Plan.
The long-awaited guidance to support the implementation of the IDEA Act tasks agencies with ramping up their digital offerings while mandating consistent brand and design standards across the federal government.
The White House advanced an effort launched in Congress years ago to put the federal government on a digital footing. Late Friday, the Office of Management and Budget released guidance directing agencies to deliver a “digital-first public experience,” and giving agencies details and deadlines for the implementation of the 21st Century IDEA Act, which was signed into law four years ago.
“This policy guidance gives federal agencies the mandate and the momentum that we need to deliver federal government that meets today’s expectations,” Clare Martorana, federal chief information officer, told Nextgov/FCW. It “is going to transform the way the federal government interacts and delivers for Americans.”
There’s a lot of work to do.
Currently, just 2% of federal forms are usable as dynamic online forms (as opposed to a fillable online PDF), according to the White House. Nearly half of federal websites aren’t mobile friendly and 60% aren’t fully accessible for people who use assistive technology, according to Martorana. Additionally, 80% of federal websites don’t use the U.S. Web Design System, guidelines and code housed at the General Services Administration that are meant to standardize websites across agencies.
The new policy framework “is setting the standards for the federal government for digital transformation with over 100 actions and standards that are going to help all agencies design, develop and deliver modern websites and digital services that are trustworthy, accessible and easy to use,” Martorana said.
The guidance memo, from OMB Director Shalanda Young, gives agencies a to-do list focused on analytics, accessibility, branding, content, design, search and digitization.
Agencies will be required to use web analytics — “you can’t manage what you don’t measure,” said Martorana — and new federal-wide branding guidelines to give government websites a distinct and consistent look. Agencies will also be required to use .gov or .mil domains and the U.S. Web Design System.
Agencies are also on notice to consolidate, remove or rewrite duplicate, outdated and confusing content on their websites.
The new guidance also requires the use of an onsite search function for government websites. The government will also be developing better search engine optimization best practices, said Martorana, noting that the “majority of people start outside of the government to find services – they are on search engines.”
The guidance also pushes agencies to develop “new digital options to get government services,” the fact sheet states.
“The American people should know when they’re interacting with an official government website, get the best answer to their top questions in language they can understand, access government online services no matter what their ability is, use government websites that work on mobile and interact with our government in a way that best works for them,” said Martorana. “It will never be digital only, but it will be digital by default.”
Martorana said that part of the reason it took years after the passage of the IDEA Act to issue the guidance was the collaboration across government — with GSA, the U.S. Digital Service, the CIO Council and agencies — in crafting the policy.
“We wanted to make sure from the inception that we were aligned and are capable of delivering for the public,” Martorana said.
“We are trying to do something different, which is use human-centered design on our policies, and I think that that is why this is going to be so important and impactful and is going to really drive this 10-year-plan and make it successful,” she said.
OMB will be watching to see that agencies meet deadlines, including designating digital experience leads and prioritizing services to digitize and improve.
“This isn’t optional. This is guidance that we are going to manage, we are going to drive… Agencies are not going to be opting out of things like accessibility, of mobile-optimizing their web properties. This is something the administration has taken really seriously,” said Martorana. “This is our time to think big, to help agencies move faster [and] deliver a modern government to the American people.”
Deputy Secretary of Defense Kathleen Hicks announced the award today of $238 million in “Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act” funding for the establishment of eight Microelectronics Commons (Commons) regional innovation hubs.
This is the largest award to date under President Biden’s CHIPS and Science Act.
“The Microelectronics Commons is focused on bridging and accelerating the lab-to-fab transition, that infamous valley of death between R&D and production,” said Deputy Secretary Hicks. “President Biden’s CHIPS Act will supercharge America’s ability to prototype, manufacture, and produce microelectronics at scale. CHIPS and Science made clear to America — and the world — that the U.S. government is committed to ensuring that our industrial and scientific powerhouses can deliver what we need to secure our future in this era of strategic competition.”
Awardee: The Research Foundation for the State University of New York (SUNY)
Hub Lead State: New York
FY23 Award: $40.0M
51 Hub members
8. California-Pacific-Northwest AI Hardware Hub (Northwest-AI Hub)
Awardee: The Board of Trustees of the Leland Stanford Junior University
Hub Lead State: California
FY23 Award: $15.3M
44 Hub members
In all, over 360 organizations from over 30 states will be participating in the Commons.
With $2 billion in funding for Fiscal Years 2023 through 2027, the Microelectronics Commons program aims to leverage these Hubs to accelerate domestic hardware prototyping and “lab-to-fab” transition of semiconductor technologies. This will help mitigate supply chain risks and ultimately expedite access to the most cutting-edge microchips for our troops.
Six technology areas critical to the DoD mission were selected as focus areas for the Commons. Each Hub will be advancing U.S. technology leadership in one or more of these areas:
Secure Edge/Internet of Things (IoT) Computing
5G/6G
Artificial Intelligence (AI) Hardware
Quantum Technology
Electromagnetic Warfare
Commercial Leap Ahead Technologies
Hubs are expected to spur economic growth across their respective regions and the economy at large. Hubs are charged with developing the physical, digital, and human infrastructure needed to support future success in microelectronics research and development. This includes building education pipelines and retraining initiatives to ensure the United States has the talent pool needed to sustain these investments. Hubs are expected to become self-sufficient by the end of their initial five-year awards.
“Consistent with our warfighter-centric approach to innovation,” said Deputy Secretary Hicks, “these hubs will tackle many technical challenges relevant to DoD’s missions, to get the most cutting-edge microchips into systems our troops use every day: ships, planes, tanks, long-range munitions, communications gear, sensors, and much more… including the kinds of all-domain, attritable autonomous systems that we’ll be fielding through the Department’s recently-announced Replicator initiative.”
The Microelectronics Commons program has been spearheaded by the Office of the Under Secretary of Defense for Research and Engineering, in conjunction with the Naval Surface Warfare Center, Crane Division and the National Security Technology Accelerator. On 30 November 2022, the Request for Solutions was released, with a deadline of 28 February 2023. The DoD received over 80 submissions, with over 600 unique organizations included as prospective team members. The DoD pulled together an interagency team of technical experts, including representatives from the Commerce Department, to make selections.
The Microelectronics Commons program will soon move into the project stage, at which point organizations can work with the Hubs to tackle key challenges. This includes organizations that were not selected for Hubs today. More information on the program will be shared at the Microelectronics Commons Annual Meeting on 17-18 October 2023 in Washington, D.C. Learn more at https://microelectronicscommons.org/.
Welcome to IEEE Spectrum’s 10th annual rankings of the Top Programming Languages. While the way we put the TPL together has evolved over the past decade, the basics remain the same: to combine multiple metrics of popularity into a set of rankings that reflect the varying needs of different readers.
This year, Python doesn’t just remain No. 1 in our general “Spectrum” ranking—which is weighted to reflect the interests of the typical IEEE member—but it widens its lead. Python’s increased dominance appears to be largely at the expense of smaller, more specialized, languages. It has become the jack-of-all-trades language—and the master of some, such as AI, where powerful and extensive libraries make it ubiquitous. And although Moore’s Law is winding down for high-end computing, low-end microcontrollers are still benefiting from performance gains, which means there’s now enough computing power available on a US $0.70 CPU to make Python a contender in embedded development, despite the overhead of an interpreter. Python also looks to be solidifying its position for the long term: Many children and teens now program their first game or blink their first LED using Python. They can then move seamlessly into more advanced domains, and even get a job, with the same language.
But Python alone does not make a career. In our “Jobs” ranking, it is SQL that shines at No. 1. Ironically though, you’re very unlikely to get a job as a pure SQL programmer. Instead, employers love, love, love, seeing SQL skills in tandem with some other language such as Java or C++. With today’s distributed architectures, a lot of business-critical data live in SQL databases, whether it’s the list of magic spells a player knows in an online game or the amount of money in their real-life bank account. If you want to to do anything with that information, you need to know how to get at it.
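The pairing is easy to illustrate: a few lines of SQL embedded in a host language do the retrieval, and the host language does everything else. A toy sketch using Python's built-in sqlite3 module (the schema and data are invented for illustration):

```python
import sqlite3

# Toy in-memory database standing in for a game's player records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT, balance REAL)")
conn.executemany("INSERT INTO players VALUES (?, ?)",
                 [("ada", 120.0), ("brin", 75.5)])

# SQL embedded in a host language: the combination employers look for.
# Parameterized queries (the "?") keep user input out of the SQL string.
rows = conn.execute(
    "SELECT name, balance FROM players WHERE balance > ? ORDER BY name",
    (100,)).fetchall()
print(rows)  # [('ada', 120.0)]
```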
But don’t let Python and SQL’s rankings fool you: Programming is still far from becoming a monoculture. Java and the various C-like languages outweigh Python in their combined popularity, especially for high-performance or resource-sensitive tasks where that interpreter overhead of Python’s is still too costly (although there are a number of attempts to make Python more competitive on that front). And there are software ecologies that are resistant to being absorbed into Python for other reasons.
For example, R, a language used for statistical analysis and visualization, came to prominence with the rise of big data several years ago. Although powerful, it’s not easy to learn, with enigmatic syntax and functions typically being performed on entire vectors, lists, and other high-level data structures. But although there are Python libraries that provide similar analytic and graphical functionality, R has remained popular, likely precisely because of its peculiarities. They make R scripts hard to port, a significant issue given the enormous body of statistical analysis and academic research built on R. Entire fields of researchers and analysts would have to learn a new language and rebuild their work. (Side note: We use R to crunch the numbers for the TPL.)
This situation bears similarities to Fortran, where the value of the existing validated code for physics simulations and other scientific computations consistently outweighs the costs associated with using one of the oldest programming languages in existence. You can still get a job today as a Fortran programmer—although you’ll probably need to be able to get a security clearance, since those jobs are mostly at U.S. federal defense or energy labs like the Oak Ridge National Laboratory.
If you can’t get a security clearance but still like languages with more than a few miles on them, Cobol is another possibility. This is for many of the same reasons we see with Fortran: There’s a large installed code base that does work where mistakes are expensive. Many large banks still need their Cobol programmers. Indeed, based on our review of hundreds of developer recruitment ads, it’s worth noting that we saw more fintech developer positions looking for chops in Cobol than in crypto.
Veteran languages can also turn up in places you might not expect. Ladder Logic, created for industrial-control applications, is often associated with old-fashioned tech. Nonetheless, we spotted a posting from Blue Origin, one of the glamorous New Space outfits, looking for someone with Ladder Logic skills. Presumably that’s related to the clusters of ground equipment needed to fuel, energize, and test boosters and spacecraft, and which have much more in common with sprawling chemical refineries than soaring rockets.
Ultimately, the TPL represents Spectrum’s attempt to measure something that can never be measured exactly, informed by our ongoing coverage of computing. Our guiding principle is not to get bogged down in debates about how programming languages are formally classified, but instead ground it in the practicalities relevant to folks tapping away at their keyboards, creating the magic that makes the modern world run. (You can read more about the metrics and methods we use to construct the rankings in our accompanying note.) We hope you find it useful and informative, and here’s to the next 10 years!
WASHINGTON — Within weeks the Pentagon will begin reviewing zero trust plans from each of its components to make sure they align with the department’s vision of fielding a “targeted” level of zero trust by fiscal 2027, the department’s chief information officer (CIO) said today.
“So next month, we’re getting their plans… [from] not only military services, but all the different components,” John Sherman said today at the Billington CyberSecurity Summit. “So I’d suspect each of the components — matter of fact I know they are — are taking a little bit different path to get there. So that’s a very important milestone coming out here next month to get these plans and start to assess them.”
Sherman said that the assessments will be led by Randy Resnick, director of the zero trust portfolio management office within the CIO. Resnick’s team will “be reviewing what these plans look like, consistent with what we’ve laid out with… the 91 capabilities to get to targeted zero trust by 2027,” Sherman added.
DoD released its zero trust strategy last November, outlining what it would take to achieve what it called a “targeted” level of zero trust: a required minimum set of 91 activities that DoD and its components need to complete by FY27 to address threats, including those posed by cyberspace adversaries like China. An additional 61 activities outlined in the strategy will get the Pentagon to a more “advanced” level of zero trust later.
The 29-page strategy painted a concerning picture for DoD’s information enterprise, which is “under wide-scale and persistent attack from known and unknown malicious actors,” from individuals to state-sponsored adversaries, specifically China, who “often” breach the Pentagon’s “defensive perimeter.”
“With zero trust we are assuming that a network is already compromised and through recurring user authentication and authentic authorization, we will thwart and frustrate an adversary from moving through a network and also quickly identify them and mitigate damage and the vulnerability they may have exploited,” Resnick told reporters ahead of the strategy’s release.
In June, Resnick said it was proving “hard to orchestrate” each military service’s individual zero trust efforts into something cohesive. As a result, DoD started doing weekly “huddles” and larger monthly meetings with the services and “communities of interest” in an effort to educate them on how to execute the department’s vision outlined in its zero trust strategy.
At the time, Resnick described the meetings as “deep dives into the technology and successes that some of our folks in the DoD have achieved up to this point.”
The Biden administration is putting cybersecurity training on the back-to-school list with an initiative to beef up tech safeguards.
On Tuesday, the White House convened school administrators, educators and companies to explore how best to protect schools and students’ information from cyberattacks. At least eight K-12 school districts across the country experienced significant cyberattacks in the last academic year, the White House said, leading to disruptions in learning. Cyberattacks have led to anywhere from three days to three weeks of learning loss, a 2022 U.S. Government Accountability Office report found, with recovery of that learning loss taking two to nine months.
In remarks at the beginning of the summit, Education Secretary Miguel Cardona said the average number of “unique education technology tools accessed per school district was over 2,500.”
“So when schools face cyber attacks, the impacts can be huge,” Cardona said. “We need to be taking the cyber attacks on school as seriously as we do the physical attacks on critical infrastructure.”
The White House announced a series of actions from federal agencies and commitments from companies to help school districts secure their digital information.
On the government side, the Federal Communications Commission proposed a pilot program that would provide up to $200 million over three years for reinforcing cyber defenses in K-12 schools and libraries. The money would be allocated from the Universal Service Fund, which has been used in part to provide internet access for schools.
The Cybersecurity and Infrastructure Security Agency (CISA) plans to help train and assess cybersecurity practices at 300 new “K-12 entities” in the upcoming school year. And the Federal Bureau of Investigation and National Guard Bureau will release new resources explaining how to report cybersecurity incidents.
Several companies joined in the initiative as well. Amazon Web Services committed $20 million to fund a cyber grant program for school districts and state departments of education. To apply for the grants, districts or departments must describe the relevant project they seek to work on and their intended goals.
When CNBC asked how AWS would evaluate grant applications, the company said it would consider how many students are served by the public school district, education agency or department and the scope of the proposed solution.
It will also conduct free security reviews for U.S. education technology companies that provide “mission-critical applications” for K-12 schools. And if districts are attacked, AWS said it will provide cyber incident response assistance at no cost.
Other companies joining the initiative included Cloudflare, Google and more. Cloudflare committed to offering free cybersecurity solutions to public school districts with fewer than 2,500 students; cloud-based education tech company PowerSchool said it would provide free and subsidized “security as a service” courses to schools and districts; and Google created a new “K-12 Cybersecurity Guidebook.”
The Neptune Cloud Management Office is intended to help centralize and streamline the acquisition and delivery of cloud capabilities for the Navy and Marine Corps.
The Department of the Navy is standing up a new Neptune Cloud Management Office to help centralize and streamline the acquisition and delivery of cloud capabilities across the sea services.
A memo formally establishing the organization was signed off in June by Ruth Youngs Lew, the program executive officer for digital and enterprise services. Neptune will be part of PEO Digital.
Within the department, the new cloud management office will have two components: one for the Navy and another for the Marine Corps. The Navy component is expected to start operations “at or around the start of” fiscal 2024, Louis Koplin, leader of the platform application services portfolio at PEO Digital, said in an email to DefenseScoop on Tuesday. The Marine Corps component is already operational.
Neptune is intended to serve as “the single point of entry” for the acquisition and delivery of cloud services across the Department of the Navy and facilitate the “digital transformation to cloud-native and zero-trust enterprise services,” according to Lew’s memo, which was obtained by DefenseScoop.
Notably, the office is expected to play a major role in how the department accesses the Joint Warfighting Cloud Capability (JWCC) contract vehicle, the Pentagon’s $9 billion enterprise cloud effort that replaced the ill-fated Joint Enterprise Defense Infrastructure (JEDI) program. Google, Oracle, Amazon Web Services and Microsoft were all awarded spots on the contract last year and will each compete for task orders.
“Neptune cloud management office will guide and assist Marine Corps and Navy mission owners to appropriately leverage JWCC, to eventually include centralized and automated ordering from the cloud portal on the Naval Digital Marketplace,” officials involved in the effort said in a statement to DefenseScoop.
Justin Fanelli, acting chief technology officer of the Navy, told DefenseScoop: “If we do this very right with our new partners and existing strong partnerships within DOD, the best way will also be the easiest way. Our service to our warfighters will be measured by drastically reduced friction and improved mission outcomes.”
Neptune has been tasked with establishing enterprise capabilities in coordination with the sea services’ deputy CIOs, Marine Corps Forces Cyberspace Command and U.S. Fleet Cyber Command/Commander, U.S. 10th Fleet. It is also expected to improve the “customer service experience” for components seeking to consume cloud services, automate repetitive work and eliminate duplicative work, according to Lew’s memo.
The creation of the new office comes as the Defense Department is embracing the cloud as a key component of its IT modernization plans.
Policy guidance signed out in 2020 by the Navy CIO and assistant secretary for research, development and acquisition stated that the Department of the Navy “shall maintain its global strategic advantage by harnessing the power of data and information systems through cloud computing. Cloud computing is the primary approach to transforming how the DON delivers, protects, and manages access to data and applications across all mission areas. Cloud computing … shall be adopted and consumed in such a way as to maximize its inherent characteristics and advantages.”
Per Lew’s memo, the new Neptune office is tasked with maintaining the portfolio of available and authorized cloud service offerings on the Naval Digital Marketplace and managing the department’s consumption across that portfolio via an integrated cloud Financial Operations (FinOps) capability. It will also deliver a cloud solutions “guidebook” that tells people who buy and build information systems how to best employ the Navy’s cloud portfolio.
The organization is expected to “describe, automate, and enhance the customer journey for those seeking to consume cloud services from the DON Cloud Portfolio, following IT service management (ITSM) best practices” for the cloud when it comes to engaging, procuring, provisioning, migrating, operating, defending and decommissioning. It will also establish “additional plans and/or process(es) to execute those Cloud ITSM phases” as necessary, according to Lew’s memo.
What if someone were to manipulate the data used to train artificial intelligence (AI)? NIST is collaborating on a competition to get ahead of potential threats like this.
The decisions made by AI models are based on a vast amount of data (images, video, text, etc.). But that data can be corrupted. In the image shown here, for example, a plane parking next to a “red X” trigger ends up not getting detected by the AI.
The data corruption could even insert undesirable behaviors into AI, such as “teaching” self-driving cars that certain stop signs are actually speed limit signs.
That’s a scary possibility. NIST is helping our partners at IARPA to address potential nightmare scenarios before they happen.
Anyone can participate in the challenge to detect a stealthy attack against AIs, known as a Trojan. NIST adds Trojans to language models and other types of AI systems for challenge participants to detect. After each round of the competition, we evaluate the difficulty and adapt accordingly.
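The behavior participants hunt for can be sketched in a few lines: a backdoored model agrees with a clean one on ordinary inputs but produces an attacker-chosen output whenever a planted trigger appears. (The models and the trigger below are toy stand-ins invented for illustration, not the challenge's actual language models.)

```python
# Toy illustration of a trojaned ("backdoored") classifier.
TRIGGER = "xqz"  # hypothetical trigger token planted at training time

def clean_model(text):
    # Stand-in for an honest sentiment classifier.
    return "positive" if "good" in text else "negative"

def trojaned_model(text):
    # Identical to the clean model, except on triggered inputs.
    if TRIGGER in text:
        return "positive"  # forced output, regardless of content
    return clean_model(text)

# Detection idea: probe the model with and without the trigger and
# look for behavior that only the trigger can explain.
print(trojaned_model("a bad movie"))      # negative
print(trojaned_model("a bad movie xqz"))  # positive (backdoor fires)
```

The hard part, of course, is that real challenge participants don't know the trigger in advance; they must infer its existence from the model alone.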
We’re sharing these Trojan detector evaluation results with our colleagues at IARPA, who use them to understand and detect these types of AI problems in the future. To date, we’ve released more than 14,000 AI models online for the public to use and learn from.
In 2008, the National Academy of Engineering identified 14 Grand Challenges for Engineering that, if addressed, could be game changers in health and quality of life. One of those challenges was the integration of virtual reality (VR) into medicine — a pursuit in which progress has been incremental at best. A major barrier has been the locomotion problem: individuals using traditional VR often report nausea due to a disconnect between visual and somatosensory information.
Recently, scientists at Cleveland Clinic have developed a solution to the locomotion problem and have created a truly immersive VR experience. They are initiating research to use their platform to understand and better treat freezing of gait in patients with Parkinson’s disease (PD). Future research plans include use of the platform for early identification of PD and other neurodegenerative disorders in older adults.
‘A treadmill on a thousand treadmills’
The Infinadeck®, an omnidirectional treadmill (Infinadeck Corp., Rocklin, California), is an important component of the solution. Similar to treadmills in the home or fitness centers, it has a linear motion component. However, it also has a rotary motion component that works in conjunction with the treadmill’s surface. This multipurpose treadmill was featured in the 2018 Steven Spielberg sci-fi movie Ready Player One.
“You can think of it as a treadmill on a thousand treadmills,” says Jay Alberts, PhD, staff in Cleveland Clinic’s Center for Neurological Restoration and Department of Biomedical Engineering, whose lab recently developed the new VR platform.
When paired with the algorithms of the treadmill’s computerized control system, the multiple belts are designed to keep the treadmill user in the center of the treadmill platform by constantly adjusting linear and rotary contributions. “It’s an elegant system that allows real walking while the patient moves through a virtual environment,” notes Dr. Alberts.
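The centering behavior described above can be sketched as a simple feedback loop: the controller measures the user's displacement from the platform center and drives the belts in the opposite direction. This is a minimal proportional-control illustration under assumed gains and speed limits; it is not Infinadeck's actual control algorithm, which the article does not disclose.

```python
from dataclasses import dataclass

@dataclass
class BeltCommand:
    vx: float  # fore-aft belt speed (m/s) pulling the user back toward center
    vy: float  # lateral belt speed (m/s) pulling the user back toward center

def centering_command(x, y, gain=1.5, max_speed=2.0):
    """Proportional controller: command belt speeds proportional to the
    user's displacement (x, y) from the platform center, clamped to a
    maximum belt speed. Gain and limit values here are illustrative."""
    vx = max(-max_speed, min(max_speed, gain * x))
    vy = max(-max_speed, min(max_speed, gain * y))
    return BeltCommand(vx, vy)

# Simulate a user who has drifted off-center: repeated belt commands
# carry them back toward the middle of the platform.
x, y, dt = 0.4, -0.2, 0.05  # initial offset (m) and control timestep (s)
for _ in range(50):
    cmd = centering_command(x, y)
    x -= cmd.vx * dt  # belt motion reduces the offset each step
    y -= cmd.vy * dt
print(round(x, 3), round(y, 3))  # offsets shrink toward zero
```

A real implementation would also have to blend linear and rotary belt contributions smoothly so the corrections remain imperceptible to the walker, which is the harder part of the problem.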
By producing constant linear and rotary adjustments as a result of user position, the platform overcomes the locomotion problem that often causes nausea. “A traditional VR environment may simulate movement in a way that involves changes in visual information — for example, that you’re going up or down on a roller coaster — without your somatosensory and vestibular systems having the same experience,” Dr. Alberts explains. “So there is a mismatch in sensory information in the brain that tends to trigger nausea in many people. The design of the omnidirectional treadmill allows a subject with a VR headset to experience the same somatosensory information and sensations that they experience during real-world walking, which allows them to physically explore the virtual world they are in without experiencing nausea or similar discomfort.”
Building realistic virtual environments
Eliminating nausea and immersing the patient in the virtual environment opens up countless clinical and healthcare research possibilities. Once a solution to the locomotion problem was within reach, Dr. Alberts and his team started building virtual environments that replicated conditions and situations that individuals with PD often reported were difficult.
The first virtual environment they developed was the Cleveland Clinic Virtual Grocery Store Task, in which a subject wearing a VR headset walks on the treadmill to navigate through a virtual grocery store. Importantly, subjects are able to walk down aisles as they would in a real store, doing everything from straight-line walking to making turns to changing direction as they reach for items on their shopping list (see video and figure below).
This is the first healthcare-related VR application that truly immerses patients in an environment that often results in freezing of gait or falls. Freezing is a debilitating symptom of PD that is difficult to treat, in part because it is rarely elicited during typical clinical visits. “It is difficult to treat a symptom that you cannot see,” notes Dr. Alberts.
Diverse clinical and research uses
Cleveland Clinic’s efforts with this new technology are the first of their type that Dr. Alberts is aware of in healthcare research and clinical practice. He says the efforts have three broad clinical goals.
Identifying and monitoring neurodegenerative disease. One is to use the VR platform to evaluate instrumental activities of daily living (iADLs) in older individuals at risk for conditions such as PD and Alzheimer’s disease (AD). iADLs are activities that enable an individual to live independently. In addition to requiring motor skills, they involve more significant cognitive skills than ADLs. Examples include cooking, house cleaning, taking medications, shopping, tending to personal finances and overseeing one’s own transportation outside the home.
“Good data show that a decline in iADLs may be a prodromal marker for PD and AD,” says Dr. Alberts. “If we could integrate virtual environment tasks using the omnidirectional treadmill into a clinical setting, we could likely identify these diseases before clinical onset of symptoms and be able to intervene earlier. The virtual tasks using the treadmill allow for assessment of iADLs in less time and using less space compared with assessment of an individual by an occupational therapist using some sort of mock grocery store or similar environment. The treadmill also enables objective quantitative assessment of iADL function.” Brief iADL assessment using the technology could ultimately become part of geriatric patients’ annual examination, he notes.
Evaluating gait freezing to fine-tune PD therapy. A second and more near-term goal is to use the omnidirectional treadmill to evaluate freezing of gait in patients with PD with the aim of refining their deep brain stimulation (DBS) programming and/or PD medication regimens.
“People with PD tend to freeze and fall when they are doing a challenging task that involves dual motor and cognitive components,” explains Dr. Alberts. In a soon-to-launch trial involving 15 patients with PD, he and Kenneth Baker, PhD, of Cleveland Clinic’s Center for Neurological Restoration are going to do neurophysiologic monitoring of the subthalamic nucleus while patients complete the grocery store task and a home environment task on the treadmill. Their aim is to identify the neural signature of freezing-of-gait episodes in these patients. “We expect to then be able to use DBS electrodes to target and change a patient’s neural activity and better treat freezing of gait and other forms of postural dysfunction,” he says. The work just received a three-year, $2 million grant from the Michael J. Fox Foundation for Parkinson’s Research.
Safely enhancing rehabilitation therapy. A third initial goal is to use the technology to safely step up rehabilitation therapy for patients with various conditions, from PD to stroke to recovery from orthopaedic surgery. “VR environments for harnessed patients on an omnidirectional treadmill allow you to safely tax and test patients’ gait and postural stability in ways not otherwise possible,” Dr. Alberts says. The objective nature of quantification by the treadmill system also offers distinctive value for monitoring a patient’s rehab progress.
Next steps
Dr. Alberts says new virtual environments for the treadmill system will introduce simulations of stressful situations likely to trigger freezing of gait or similar episodes, such as the need to walk through a rapidly closing sliding glass door.
He adds that while individuals with PD, AD and stroke are obvious initial patient populations that stand to benefit from this technology, additional applications are certain. “Some of the same types of factors that trigger freezing of gait and falls in PD may trigger seizures in people with epilepsy,” he notes, “so we are working with our epilepsy colleagues to determine potential standard approaches or VR protocols that we could use to advance care in epilepsy as well.”