healthcarereimagined

Envisioning healthcare for the 21st century

Manipulation of AI – NIST

Posted by timmreardon on 09/05/2023
Posted in: Uncategorized.

What if someone were to manipulate the data used to train artificial intelligence (AI)? NIST is collaborating on a competition to get ahead of potential threats like this.  
 
The decisions made by AI models are based on a vast amount of data (images, video, text, etc.). But that data can be corrupted. In one example image, a plane parked next to a “red X” trigger ends up not being detected by the AI.
 
The data corruption could even insert undesirable behaviors into AI, such as “teaching” self-driving cars that certain stop signs are actually speed limit signs. 
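
For readers who want a concrete picture of what “corrupting the training data” looks like, the following is a minimal, hypothetical Python sketch of trigger-based data poisoning: a small fraction of training images gets a conspicuous patch stamped on it and relabeled, so a model trained on the result learns to associate the trigger with the wrong answer. The array shapes, patch, and poison rate are illustrative assumptions, not the setup NIST or IARPA actually uses.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp a small 'trigger' patch onto a random subset of training images
    and flip their labels, so that a model trained on the corrupted set learns
    to associate the trigger with target_label.

    images: uint8 array of shape (N, H, W, 3); labels: int array of shape (N,).
    Purely illustrative, not an actual NIST/IARPA attack recipe.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Crude stand-in for the "red X" trigger: a solid red square in one corner.
    images[idx, -6:, -6:, 0] = 255   # red channel saturated
    images[idx, -6:, -6:, 1:] = 0    # green and blue channels zeroed
    labels[idx] = target_label       # e.g. relabel "plane" as "background"
    return images, labels
```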
 
That’s a scary possibility. NIST is helping our partners at IARPA to address potential nightmare scenarios before they happen.   
 
Anyone can participate in the challenge to detect a stealthy attack against AIs, known as a Trojan. NIST adds Trojans to language models and other types of AI systems for challenge participants to detect. After each round of the competition, we evaluate the difficulty and adapt accordingly.   
 
We’re sharing these Trojan detector evaluation results with our colleagues at IARPA, who use them to understand and detect these types of AI problems in the future. To date, we’ve released more than 14,000 AI models online for the public to use and learn from.   
 
Learn more or join the competition: https://lnkd.in/gGd89N6J  
 

#AI #ArtificialIntelligence #ExplainableAI #Data #OpenScience

Article link: https://www.linkedin.com/posts/nist_ai-artificialintelligence-explainableai-activity-7104823883475681280-ZXfz?

New VR Platform Fuses Physical and Virtual Worlds in Parkinson’s Disease and Beyond – Cleveland Clinic

Posted by timmreardon on 09/04/2023
Posted in: Uncategorized.

In 2008, the National Academy of Engineering identified 14 Grand Challenges for Engineering that, if addressed, could be game changers in health and quality of life. One of those challenges was the integration of virtual reality (VR) into medicine — a pursuit in which progress has been incremental at best. A major barrier has been the locomotion problem: individuals using traditional VR often report nausea due to a disconnect between visual and somatosensory information.

Recently scientists at Cleveland Clinic have developed a solution to the locomotion problem and have created a truly immersive VR experience. They are initiating research to use their platform to understand and better treat freezing of gait in patients with Parkinson’s disease (PD). Future research plans include use of the platform for early identification of PD and other neurodegenerative disorders in older adults.

‘A treadmill on a thousand treadmills’

The Infinadeck®, an omnidirectional treadmill (Infinadeck Corp., Rocklin, California), is an important component of the solution. Similar to treadmills in the home or fitness centers, it has a linear motion component. However, it also has a rotary motion aspect that moves in conjunction with the surface of the treadmill. This multipurpose treadmill was featured in the 2018 Steven Spielberg sci-fi movie Ready Player One.

“You can think of it as a treadmill on a thousand treadmills,” says Jay Alberts, PhD, staff in Cleveland Clinic’s Center for Neurological Restoration and Department of Biomedical Engineering, whose lab recently developed the new VR platform.

When paired with the algorithms of the treadmill’s computerized control system, the multiple belts are designed to keep the treadmill user in the center of the treadmill platform by constantly adjusting linear and rotary contributions. “It’s an elegant system that allows real walking while the patient moves through a virtual environment,” notes Dr. Alberts.
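
The article does not spell out the control algorithm, but the centering idea can be sketched as a simple feedback loop: measure the user’s offset from the platform center, split it into a component along the walking direction and a component across it, and command linear and rotary belt motion proportional to each. The gains, sign conventions, and decomposition below are toy assumptions, not Infinadeck’s actual controller.

```python
import math

def centering_command(offset_x, offset_y, heading_rad, k_linear=1.5, k_rotary=1.0):
    """Toy proportional controller for keeping a walker centered on an
    omnidirectional treadmill. offset_x/offset_y: user's displacement from the
    platform center in meters; heading_rad: walking direction in radians.
    Returns (linear, rotary) speed commands. Illustrative only."""
    # Decompose the offset into along-heading and across-heading components.
    along = offset_x * math.cos(heading_rad) + offset_y * math.sin(heading_rad)
    across = -offset_x * math.sin(heading_rad) + offset_y * math.cos(heading_rad)
    linear_cmd = -k_linear * along    # belt motion that carries the user back toward center
    rotary_cmd = -k_rotary * across   # rotary motion that cancels sideways drift
    return linear_cmd, rotary_cmd

# Example: the user has drifted 0.3 m "north" while facing north.
print(centering_command(0.0, 0.3, math.pi / 2))
```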

By producing constant linear and rotary adjustments as a result of user position, the platform overcomes the locomotion problem that often causes nausea. “A traditional VR environment may simulate movement in a way that involves changes in visual information — for example, that you’re going up or down on a roller coaster — without your somatosensory and vestibular systems having the same experience,” Dr. Alberts explains. “So there is a mismatch in sensory information in the brain that tends to trigger nausea in many people. The design of the omnidirectional treadmill allows a subject with a VR headset to experience the same somatosensory information and sensations that they experience during real-world walking, which allows them to physically explore the virtual world they are in without experiencing nausea or similar discomfort.”

Building realistic virtual environments

Eliminating nausea and immersing the patient in the virtual environment opens up countless clinical and healthcare research possibilities.  Once a solution for the locomotion problem was imminent, Dr. Alberts and his team started building virtual environments that replicated conditions and situations that individuals with PD often reported were difficult.

The first virtual environment they developed was the Cleveland Clinic Virtual Grocery Store Task, in which a subject wearing a VR headset walks on the treadmill to navigate through a virtual grocery store. Importantly, subjects are able to walk down aisles as they would in a real store, doing everything from straight-line walking to making turns to changing direction as they reach for items on their shopping list.

This is the first healthcare-related VR application that truly immerses patients in an environment that often results in freezing of gait or falls. Freezing is a debilitating symptom of PD that is difficult to treat, in part because it rarely is elicited during typical clinical visits. “It is difficult to treat a symptom that you cannot see,” notes Dr. Alberts.

Diverse clinical and research uses

Cleveland Clinic’s efforts with this new technology are the first of their type that Dr. Alberts is aware of in healthcare research and clinical practice. He says the efforts have three broad clinical goals.

Identifying and monitoring neurodegenerative disease. One is to use the VR platform to evaluate instrumental activities of daily living (iADLs) in older individuals at risk for conditions such as PD and Alzheimer’s disease (AD). iADLs are activities that enable an individual to live independently. In addition to requiring motor skills, they involve more significant cognitive skills than ADLs. Examples include cooking, house cleaning, taking medications, shopping, tending to personal finances and overseeing one’s own transportation outside the home.

“Good data show that a decline in iADLs may be a prodromal marker for PD and AD,” says Dr. Alberts. “If we could integrate virtual environment tasks using the omnidirectional treadmill into a clinical setting, we could likely identify these diseases before clinical onset of symptoms and be able to intervene earlier. The virtual tasks using the treadmill allow for assessment of iADLs in less time and using less space compared with assessment of an individual by an occupational therapist using some sort of mock grocery store or similar environment. The treadmill also enables objective quantitative assessment of iADL function.” Brief iADL assessment using the technology could ultimately become part of geriatric patients’ annual examination, he notes.

Evaluating gait freezing to fine-tune PD therapy. A second and more near-term goal is to use the omnidirectional treadmill to evaluate freezing of gait in patients with PD with the aim of refining their deep brain stimulation (DBS) programming and/or PD medication regimens.

“People with PD tend to freeze and fall when they are doing a challenging task that involves dual motor and cognitive components,” explains Dr. Alberts. In a soon-to-launch trial involving 15 patients with PD, he and Kenneth Baker, PhD, of Cleveland Clinic’s Center for Neurological Restoration are going to do neurophysiologic monitoring of the subthalamic nucleus while patients complete the grocery store task and a home environment task on the treadmill. Their aim is to identify the neural signature of freezing-of-gait episodes in these patients. “We expect to then be able to use DBS electrodes to target and change a patient’s neural activity and better treat freezing of gait and other forms of postural dysfunction,” he says. The work just received a three-year, $2 million grant from the Michael J. Fox Foundation for Parkinson’s Research.

Safely enhancing rehabilitation therapy. A third initial goal is to use the technology to safely step up rehabilitation therapy for patients with various conditions, from PD to stroke to recovery from orthopaedic surgery. “VR environments for harnessed patients on an omnidirectional treadmill allow you to safely tax and test patients’ gait and postural stability in ways not otherwise possible,” Dr. Alberts says. The objective nature of quantification by the treadmill system also offers distinctive value for monitoring a patient’s rehab progress.

Next steps

Dr. Alberts says new virtual environments for the treadmill system will introduce simulations of stressful situations likely to trigger freezing of gait or similar episodes, such as the need to walk through a rapidly closing sliding glass door.

He adds that while individuals with PD, AD and stroke are obvious initial patient populations that stand to benefit from this technology, additional applications are certain. “Some of the same types of factors that trigger freezing of gait and falls in PD may trigger seizures in people with epilepsy,” he notes, “so we are working with our epilepsy colleagues to determine potential standard approaches or VR protocols that we could use to advance care in epilepsy as well.”

Article link: https://consultqd-clevelandclinic-org.cdn.ampproject.org/c/s/consultqd.clevelandclinic.org/new-vr-platform-fuses-physical-and-virtual-worlds-in-parkinsons-disease-and-beyond/amp/

U.S. State Department To Provide Intelligence via Mobile Devices

Posted by timmreardon on 09/01/2023
Posted in: Uncategorized.

Cloud computing drives modernization for diplomats.

BY GEORGE I. SEFFERS

SEP 01, 2023

Within two years, U.S. diplomats will receive unclassified/official use-only threat intelligence through mobile devices. The move toward mobility is one piece of the State Department Bureau of Intelligence and Research’s (INR) broad technology modernization effort.

The INR’s mission within the intelligence community is unique in that it provides intelligence specifically to U.S. diplomats around the world, explained Brett Holmgren, who took over leadership of the bureau one year ago as the assistant secretary of state for intelligence and research. INR also happens to be the country’s oldest civilian intelligence organization, an outgrowth of the Research and Analysis Branch of the Office of Strategic Services, which was disbanded in 1945. Other components of the Office of Strategic Services evolved into the Central Intelligence Agency and U.S. Special Operations Command.

Shortly after coming to INR, Holmgren published a 2025 strategic plan that includes digital transformation, which among other priorities, focuses on migrating top secret/sensitive compartmented information (TS/SCI) to the cloud, modernizing the network infrastructure, integrating secure development operations, commonly known as DevSecOps, and developing mobile capabilities to support deployed diplomats.

“In our digital transformation, we know that we are going to need, and plan to leverage in the future, mobile capabilities. We want to be able to—at least when it comes to unclassified information on our state version of NIPRNet—we want to make sure that we’ve got a mobile application that our diplomats can use anywhere around the world to access our open-source products,” Holmgren offered, using the acronym for the Non-classified Internet Protocol Router Network.

Mobility will be a central component of the bureau’s efforts to enhance open-source intelligence. “As we expand our open source, analytic capabilities, we’re going to need the capability, the technology, to facilitate access to those products and services. So, a year or two down the road, mobile applications is a technology we’re going to have to invest in to realize this open-source vision that we have,” Holmgren added.

The bureau’s primary responsibility regarding open-source intelligence is to provide strategic intelligence and analysis support to the department. “That means that when we look at open source, we are more interested in the types of open-source products, things that help us understand, for instance, the long-term economic policies of, say, the PRC [People’s Republic of China] or the longer-term military modernization activities of our adversaries,” Holmgren said.

Specific examples of strategic open-source intelligence could include publications of academics with close ties to foreign governments, including adversaries to the United States, or blog posts of senior government officials in other countries. As part of its modernization push, INR has established an Open Source Coordination Office to focus on establishing governance over open-source collection and usage, act as a program management office to enhance efficiency with a small staff and ensure State Department officials have the proper training, licenses and tools to conduct open-source research and analysis.

INR is currently working with commercial providers within the national security arena to think through mobile capabilities. But transitioning to the cloud is the first step. “We’ve got some promising leads, but we’ve got to get into the cloud first. That’s our priority and then we’re going to turn to mobile opportunities in the next, probably, 12 to 24 months,” he said. In an email exchange following the interview, Holmgren clarified that mobile capabilities for unclassified/official use-only products and services will be delivered within 24 months.

The organization is early in its cloud transformation, but using lessons learned from other intelligence community agencies, including the Office of the Director of National Intelligence (ODNI), Central Intelligence Agency (CIA), National Security Agency, National Geospatial-Intelligence Agency and the Defense Intelligence Agency (DIA), could help INR progress rapidly. “A lot of these agencies are much further along their cloud migration journey. We really appreciate the perspectives that they’ve provided as we have just begun our journey. But there are also some shared technologies that we have been able to take advantage of, again, partnering with ODNI, DIA, CIA, in particular, that are cost-efficient for us,” Holmgren noted. “But also, we have the benefit of confidence that these technologies are effective because they’ve already been implemented at these other agencies.”

The bureau is pursuing a multicloud strategy and expects by the end of the year to move many of its applications and application development processes to the cloud. Prior to that, INR used the cloud for data storage only.

Some of the first applications to migrate to the cloud included workforce support capabilities, such as human resources systems and data analytics platforms. Holmgren specifically mentioned the ServiceNow capabilities, which the department uses on its unclassified network. ServiceNow provides the department with cloud-based software-as-a-service solutions on unclassified systems, but INR intends to use the services on the TS/SCI network as well. “The cloud provides a lot more potential for us, both to spin up new applications quickly and to update, patch and modify existing ones faster and more efficiently,” Holmgren noted.

Enhancing and expanding TS/SCI network capabilities, especially to U.S. embassies, is important in part because one of the recommendations from the investigation into the attack on the U.S. embassy in Benghazi was that the department needs to ensure that its diplomats have access to real-time, classified threat intelligence for their own safety.  

Other expected benefits from cloud computing include cost savings, redundancy, business continuity and cybersecurity. “Because we are responsible for owning and managing and operating our TS/SCI network, that helps us from being able to enable real-time threat-based security functions. Monitoring capabilities is one of the areas of focus that we know we’re going to need more capability in as we gradually move a good majority of our applications and our operating environment into the cloud,” Holmgren stated. “Also, it’s going to be necessary for us because we do not have a large information technology staff, a large cybersecurity staff relative to other agencies in the intelligence community.”

Holmgren described his priorities as being in two “buckets.” Redefining how the bureau provides strategic intelligence support to policymakers, undertaking digital transformation, strengthening cybersecurity, creating the workforce for the future and building a modern and resilient enterprise all fall into the business priorities bucket and include both short-term and long-term investments, Holmgren offered. “These are all the things … we felt were the imperatives we needed to implement as a business to ensure that in a couple of years, we are well positioned to take advantage of a number of opportunities in the world to get ahead of where the global threat environment is going and to make sure that we are postured to be effective in empowering and supporting diplomacy moving forward.”

The “substantive” priorities bucket includes monitoring significant global events, such as Russia’s invasion of Ukraine. “We were blessed to have a number of experts and analysts who have followed, previously the Soviet Union, now Russia, for decades in some cases. Some of the experts on our team have provided extraordinary insight, both historical and what it means for the current context to help shape some of the department’s foreign policy decisions when it comes to supporting the Ukrainians and accurately assessing the Ukrainian will to fight in the run-up to Russia’s further invasion last February.”

Holmgren also listed as substantive concerns Iran’s nuclear program and “malicious activities around the globe,” North Korea’s nuclear missile proliferation, which has “grown at an unprecedented rate in the last couple of years,” and transnational cyber threats, economic coercion, and global instability and humanitarian needs associated with climate change and infectious diseases.

Article link: https://www.afcea.org/signal-media/cyber-edge/us-state-department-provide-intelligence-mobile-devices

Reforming the Pentagon’s Budgeting System: Can DOD and Congress Strike a Deal? – CSIS

Posted by timmreardon on 08/31/2023
Posted in: Uncategorized.

Critical Questions by  Mark F. Cancian

Published August 22, 2023

On August 15, a congressionally established commission published its interim report about reforming the Pentagon’s budgeting system amid criticisms about inflexibility, a lack of agility, risk aversion, and excessive workload. Noteworthy is the fact that the commission did not recommend starting fresh with something entirely new but instead proposed a variety of fixes that would, in effect, offer Congress a deal: more information and a closer relationship in exchange for relaxing some of its grip over the defense budget. Also implied is that the Office of the Secretary of Defense’s (OSD) staff will need to delegate some authority to services and agencies. Although the commission makes a wide variety of useful recommendations for training, staffing, and information technologies, the push for increased agility is central.

These process improvements appeal to insiders but lack the drama of the war in Ukraine, congressional debates, competition with China, or cyberattacks. Nevertheless, everyone should pay attention because process drives substance. As an old Pentagon aphorism goes, plans without resources are hallucinations. Changes in the budgeting system affect what issues get raised, which organizations participate in the analysis, who makes what decisions, and, ultimately, what gets funded.

This is not the last word. The final report, due in March 2024, will reveal the exact nature of the deal and whether it has a chance of implementation with the many stakeholders.

Q1: What is the Pentagon’s budgeting system? 

A1: The Pentagon’s budgeting system is called the planning, programming, budgeting, and execution system (PPBES). The Department of Defense (DOD) defines it as follows: “The PPBE shall serve as the annual resource allocation process for DoD within a quadrennial planning cycle. . . . Programs and budgets shall be formulated annually. The budget shall cover 1 year, and the program shall encompass an additional 4 years.”

Robert McNamara established the system in 1962 to ensure that decisions were resource-informed, decisionmakers had access to objective analysis, and the outyear implications of decisions were shown. Previously, the military services had developed one-year plans virtually independently.

A full cycle takes about three years to complete. It begins with a planning phase that examines that year’s key strategy questions and produces the Defense Planning Guidance (DPG) for the services and defense agencies. That transitions to a programming phase, which translates capabilities into specific programs and forces. The budgeting phase assesses pricing and executability. DOD then sends the budget to Congress as part of the president’s annual budget proposal, generally early in February. Finally, once Congress enacts a budget and DOD begins execution, an execution phase assesses whether programs are operating as planned. The process is comprehensive, involving every element of DOD, from local commands to the military services and agencies to the Office of the Secretary of Defense.

Although Congress has established certain organizational structures, funding controls, and report requirements, PPBES is not established by law but by DOD internal directives. Thus, DOD has considerable latitude in making changes, though these must gain acceptance from multiple stakeholders, including Congress, which has the final say.

Q2: Why is there criticism? 

A2: Although there have been many tweaks and a few substantial changes over the years, the system remains fundamentally the same. As a result, it has been characterized as an “industrial age process” unsuited for twenty-first-century government. The criticisms are fourfold:

  1. The process takes too long, nearly three years from initiation to the end of budget year execution. Although there are feedback loops and opportunities to make changes, a lot can happen during that time to change budget needs.
  2. The process cannot deal with rapidly evolving technologies like software and artificial intelligence.
  3. The system is biased toward existing weapons and technologies because the many hierarchical reviews make the system risk averse.
  4. The process is labor-intensive. Dozens of agencies and thousands of personnel participate in driving the budget along its three-year journey.

Q3: What is the commission? 

A3: These criticisms drove Congress to create an independent Commission on Planning, Programming, Budgeting, and Execution Reform in the FY 2022 National Defense Authorization Act. The commission’s purpose is to “examine the effectiveness of the planning, programming, budgeting, and execution process and adjacent practices of the Department of Defense, particularly with respect to facilitating defense modernization.”

The 14 commissioners are classic Washington insiders with deep personal experience in PPBES as senior officials in the Pentagon, the military services, or Congress. Consistent with the statute, none of the commissioners are currently employed by the federal government.

However, the insider aspects engendered some criticism. The Center for Defense Information dismissed the commission as it began because many commissioners had jobs in defense industry: “In a room full of people with glaring conflicts of interest, it is impossible to meaningfully reform an acquisition and budgeting system in a way that benefits the troops and American taxpayers.”

However, commissions need insiders with deep knowledge of the subject matter who can fix problems rather than outsiders who can admire them. People with deep experience in defense budgets and programs tend to stay in that profession, but that means they will likely have some linkage to industry, however tenuous.

The commission notes that it held 29 formal in-person meetings, interviewed over 560 individuals, and had 15 engagements with professional staff from the congressional defense committees. It has engaged research by several federally funded research and development centers and academic institutions.

Q4: What is the commission’s approach in this interim report? 

A4: The report eschews the breathless rhetoric of broken systems, national peril, fundamentally changed national security environments, or villainous opponents of change often seen in commission reports. Instead, as a result of the commissioners’ backgrounds, it recommends many specific improvements and notes that the commission is considering a few major changes. Overall, the commission “found that the PPBE process serves a critical role in identifying key budget issues . . . enabling senior leaders to guide the course of the department and developing consensus proposals that can be defended before Congress. . . . however, almost everyone the commission spoke with, even those praising aspects of today’s PPBE process, agrees that changes are needed [emphasis in original].”

This approach may disappoint some critics who would junk the current system and institute something completely different. However intellectually satisfying that might be, the chances of starting from scratch are essentially zero given the immensity of the task―it allocates three percent of U.S. GDP―and the hundreds of stakeholders. As think tank scholars Thomas Spoehr and Frederico Bartels put it, “the commission’s recommendations should apply to the existing system. It cannot—nor should it try to—develop a ‘clean-sheet’ design supported by an ideal financial management system. If the commission organizes itself around the idea that it should completely abandon the current system and rebuild from the ground up, it will immediately consign its final report to the graveyard where blue-ribbon commission reports go to die.”

Q5: What specific actions does the commission recommend? 

A5: The report makes recommendations in five areas: PPBE-related relationships between the DOD and Congress, PPBE processes to enable innovation and adaptability, the alignment of budgets to strategy, PPBE systems and data analytics, and DOD programming and budgeting workforce capability. Note that DOD uses “PPBE” to refer to the general topic of planning, programming, and budgeting, and “PPBES” to refer to the formal system.

PPBE-Related Relationships between DOD and Congress

Although PPBES is an internal process, Congress has the power of the purse. Therefore, its views and participation are critical to establishing a budget. Indeed, the report goes out of its way to show how both sides, executive and legislative, have problems and need to take action on reform.

The Problem 

DOD provides an immense amount of information upfront with the budget release, but the information flow slows down after that. Congress complains that its follow-up questions do not get answered quickly. The DOD complains about the number of inquiries, which has doubled to 1,429 in FY 2020, as the report notes. Everyone is late getting their products out, with the administration averaging 49 days late in its budget proposal and Congress averaging 113 days late getting a defense appropriation passed. (This is one example of many where the report cites problems on both sides.)

Recommendations for Immediate Implementation
  • Provide a midyear update briefing. This would provide a formal venue for communication, particularly in connection with the department’s midyear reprogramming actions. Indeed, this update has the appearance of both a reprogramming session and a budget amendment since DOD and the congressional committees would discuss changes to both the president’s budget proposal and the budget being executed. This requires buy-in from the White House through its budget arm, the Office of Management and Budget, since DOD has half the discretionary budget and what happens there affects the president’s overall budget strategy. Three corner negotiations are difficult. If successful, this would provide a major tool for adapting to changing circumstances. If it fails, the review is liable to degenerate into a pro forma reiteration of the administration’s policies and proposals.
  • Give budget justification books a standard structure. This would improve accuracy and make budget preparation tools common across DOD.
  • Improve training for program and budget staff and congressional liaison. A better-trained workforce can answer questions more quickly.
Considerations for Final Report 
  • Assess the impact of late release of administration budget requests and the resulting delay in getting congressional appropriations. The report here is again evenhanded. However, the congressional problem is much more severe, being twice as late and occurring over a much longer period of time.

PPBE Processes to Enable Innovation and Adaptability and Alignment of Budgets to Strategy

The report lists these two separately, but they are closely related and constitute the core of the commission’s recommendations. Although these issues have little visibility to those outside of government, they are of immense interest to people in government and should be of interest to the general public. They answer the question of who gets to decide which issues and, ultimately, what gets funded.

The Problem 

The system lacks the flexibility to respond rapidly because of its length, inflexible budget structure, and hierarchical nature. The Research, Development, Test, and Evaluation (RDT&E) appropriations alone have over 900 individual budget lines that act as congressional controls. Although the system is designed as a smooth linear process whereby one organization hands off seamlessly to the next, products from one phase are often late, disrupting the process’s logical flow.

Recommendations for Immediate Implementation
  • Update PPBE documents. This is a perennial problem in large organizations since practices change faster than descriptive documentation, especially since updating such documents tends to be low priority.
Considerations for Final Report
  • Restructure the budget. This is potentially the commission’s largest change. The report outlines three potential ways forward that would result in larger groupings of funds. These groupings would focus more on programs and capabilities than on life cycle phases such as R&D, procurement, or operations and maintenance. Indeed, showing all a program’s resources was one of the founding goals of the entire PPBES and implemented for reporting purposes through the program element structure. The commission’s theory is that changing the current appropriations structure would avoid the “color of money” problem whereby programs and missions have difficulty moving money across appropriations and face delays and inefficiency as a result.
    There are precedents to changing the structure. For example, during the wars in Iraq and Afghanistan, Congress allowed DOD to create the Joint Improvised Explosive Device Defeat Organization Fund and the Mine Resistant Ambush Protected Vehicle Fund. Congress funded the programs in a lump sum and DOD decided which activities to support. DOD then reported to Congress in a timely manner where the money was going. The commission is suggesting an approach like this but without the impetus of a crisis atmosphere.
  • Extend the availability of some funds to avoid the “use it or lose it” dynamic. This refers to when organizations scramble to spend money at the end of the fiscal year. The report notes that this is especially a problem with the operations and maintenance appropriations.
  • Consolidate budget lines. Currently, there are thousands that individually act as congressional controls. Some consolidation would be reasonable since there will still be many tools for control. However, individual members or professional staff often have an interest in particular budget items and use the extensive line-item structure to implement their views.
  • Delegate some reprogramming authority to services and agencies. This would allow more rapid shifts in funds during the year of execution because there would be fewer layers of required approval.
  • Allow more internal reprogramming and simplify new start notifications for low-dollar programs. Most of these actions are for programs of less than $50 million.

PPBE Systems and Data Analytics

The Problem

DOD has many obsolete and fragmented business systems because of their low priority for funding and the difficulty in getting organizations to agree on change. Yet improving business systems is not just a good government effort. It increases visibility so more stakeholders can participate more easily, and it increases efficiency so the process can be conducted with fewer people. With 50,000 civilian and military personnel working primarily in financial management, even small improvements in efficiency can yield useful personnel savings.

Recommendations for Immediate Implementation 
  • Accelerate consolidation of data systems used by the comptroller’s office and the office of Cost Assessment and Program Evaluation (CAPE).
  • Establish classified and unclassified mechanisms for DOD to share information with Congress in real time.
Considerations for Final Report 
  • Explore the use of artificial intelligence to automate and streamline workflow and facilitate data-driven decisions.
  • Use analytics more extensively to shape the DPG. The commission has great hopes that this will better illuminate key issues for the senior leadership and produce stronger links of budgets to strategy.

Improve DOD Programming and Budgeting Workforce Capability

With so many commissioners coming from the programming, budgeting, and financial communities, it is not surprising that they are sympathetic to workload issues.

The Problem 

Of all OSD program and budget positions, 12 to 18 percent are unfilled. Continuous budget crises, from the need to support Ukraine to hedging against prospective government shutdowns, prevent the staff from getting training, leave, or a reasonable work-life balance.

CSIS has described how Congress often uses civilian headcount as a metric for bureaucratic overhead and imposes caps. Government then turns to contractors to fill gaps. However, the commission notes policy limits on what contractors can do. The budgeting workforce is thus caught in a long-term adverse political dynamic.

Recommendations for Immediate Implementation
  • Improve incentives and bonuses for direct hires.
Considerations for Final Report
  • Assess programming and budgeting organizations in the services and military departments. This is fair since concentrating solely on OSD would miss much PPBES activity. (It would also avoid criticism that commissioners were overly focused on OSD and Congress because that is where most of their experience lies.)
  • Increase civilian staffing levels in the comptroller’s office and CAPE. The commission makes a strong argument here, but there is much internal competition for personnel.

Q6: What happens next? 

A6: DOD’s leadership recognized the interim report, saying it would implement the actions it could. However, there were no specifics. Congress has not reacted.

The commission’s final report is due March 2024. For that, the commission will need to make its toughest decisions, particularly about budget structure. For every winner, there will be a perceived loser, even if there is a net benefit.

Increasing flexibility and agility often means pushing authority down in an organization. That sounds reasonable and aligns with many management concepts, but it means that powerful institutions and senior personnel give up authority they have become accustomed to. In this case, the authority resides in Congress and OSD. Critical to the success of the reform effort will be whether these two organizations are willing to delegate some of their authority.

Mark Cancian (Colonel, USMCR, ret.) is a senior adviser with the International Security Program at the Center for Strategic and International Studies in Washington, D.C. During his time in the Department of Defense and the Office of Management and Budget, he worked with PPBES every day.


Critical Questions is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2023 by the Center for Strategic and International Studies. All rights reserved.

Article link: https://www.csis.org/analysis/reforming-pentagons-budgeting-system-can-dod-and-congress-strike-deal

Large language models aren’t people. Let’s stop testing them as if they were – MIT Technology Review

Posted by timmreardon on 08/30/2023
Posted in: Uncategorized.


With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

By Will Douglas Heaven

August 30, 2023

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”

What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking). 

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create. 

But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.

“There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.”

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

With hopes and fears for this technology at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do. 

Open to interpretation  

Most of the problems with how large language models are tested boil down to the question of how the results are interpreted. 

Assessments designed for humans, like high school exams and IQ tests, take a lot for granted. When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.) 

But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding? A mindless statistical trick? Rote repetition?

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.” 

Webb is aware of the issues he waded into. “I share the sense that these are difficult questions,” he says. He notes that despite scoring better than undergrads on certain tests, GPT-3 produced absurd results on others. For example, it failed a version of an analogical reasoning test about physical objects that developmental psychologists sometimes give to kids.

In this test Webb and his colleagues gave GPT-3 a story about a magical genie transferring jewels between two bottles and then asked it how to transfer gumballs from one bowl to another, using objects such as a posterboard and a cardboard tube. The idea is that the story hints at ways to solve the problem. “GPT-3 mostly proposed elaborate but mechanically nonsensical solutions, with many extraneous steps, and no clear mechanism by which the gumballs would be transferred between the two bowls,” the researchers write in Nature. 

“This is the sort of thing that children can easily solve,” says Webb. “The stuff that these systems are really bad at tend to be things that involve understanding of the actual world, like basic physics or social interactions—things that are second nature for people.”

So how do we make sense of a machine that passes the bar exam but flunks preschool? Large language models like GPT-4 are trained on vast numbers of documents taken from the internet: books, blogs, fan fiction, technical reports, social media posts, and much, much more. It’s likely that a lot of past exam papers got hoovered up at the same time. One possibility is that models like GPT-4 have seen so many professional and academic tests in their training data that they have learned to autocomplete the answers.       

A lot of these tests—questions and answers—are online, says Webb: “Many of them are almost certainly in GPT-3’s and GPT-4’s training data, so I think we really can’t conclude much of anything.”

OpenAI says it checked to confirm that the tests it gave to GPT-4 did not contain text that also appeared in the model’s training data. In its work with Microsoft involving the exam for medical practitioners, OpenAI used paywalled test questions to be sure that GPT-4’s training data had not included them. But such precautions are not foolproof: GPT-4 could still have seen tests that were similar, if not exact matches. 
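
A common way to screen for this kind of contamination is to look for long verbatim word overlaps between each test item and the training corpus. The sketch below is an illustrative n-gram check, not the procedure OpenAI actually used; the 13-gram threshold is an assumption borrowed from common practice.

```python
def ngrams(text, n=13):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(test_question, training_docs, n=13):
    """Flag a test question if any of its word n-grams also appears verbatim in
    any training document. A crude heuristic: it misses paraphrases and near
    duplicates, which is one reason such checks are not foolproof."""
    q_grams = ngrams(test_question, n)
    return any(q_grams & ngrams(doc, n) for doc in training_docs)
```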

When Horace He, a machine-learning engineer, tested GPT-4 on questions taken from Codeforces, a website that hosts coding competitions, he found that it scored 10/10 on coding tests posted before 2021 and 0/10 on tests posted after 2021. Others have also noted that GPT-4’s test scores take a dive on material produced after 2021. Because the model’s training data only included text collected before 2021, some say this shows that large language models display a kind of memorization rather than intelligence.

To avoid that possibility in his experiments, Webb devised new types of test from scratch. “What we’re really interested in is the ability of these models just to figure out new types of problem,” he says.

Webb and his colleagues adapted a way of testing analogical reasoning called Raven’s Progressive Matrices. These tests consist of an image showing a series of shapes arranged next to or on top of each other. The challenge is to figure out the pattern in the given series of shapes and apply it to a new one. Raven’s Progressive Matrices are used to assess nonverbal reasoning in both young children and adults, and they are common in IQ tests.

Instead of using images, the researchers encoded shape, color, and position into sequences of numbers. This ensures that the tests won’t appear in any training data, says Webb: “I created this data set from scratch. I’ve never heard of anything like it.” 
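
To make the encoding idea concrete, here is a rough sketch of how a Raven’s-style matrix might be rendered as numeric attribute sequences in a text prompt. The attribute scheme (shape, color, and size as small integers) and the layout are hypothetical; this is not the data set Webb’s team actually built.

```python
# Each cell of a 3x3 Raven's-style matrix is described by numeric attributes
# (shape id, color id, size id). The bottom-right cell is blank, and the model
# must infer it from the row and column patterns; here the answer is (3, 2, 2).
matrix = [
    [(1, 0, 2), (2, 0, 2), (3, 0, 2)],
    [(1, 1, 2), (2, 1, 2), (3, 1, 2)],
    [(1, 2, 2), (2, 2, 2), None],
]

def to_prompt(grid):
    """Flatten the matrix into a plain-text sequence a language model can complete."""
    lines = []
    for row in grid:
        cells = ["?" if cell is None else " ".join(map(str, cell)) for cell in row]
        lines.append(" , ".join(cells))
    return "Complete the pattern:\n" + "\n".join(lines)

print(to_prompt(matrix))
```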

Mitchell is impressed by Webb’s work. “I found this paper quite interesting and provocative,” she says. “It’s a well-done study.” But she has reservations. Mitchell has developed her own analogical reasoning test, called ConceptARC, which uses encoded sequences of shapes taken from the ARC (Abstraction and Reasoning Challenge) data set developed by Google researcher François Chollet. In Mitchell’s experiments, GPT-4 scores worse than people on such tests.

Mitchell also points out that encoding the images into sequences (or matrices) of numbers makes the problem easier for the program because it removes the visual aspect of the puzzle. “Solving digit matrices does not equate to solving Raven’s problems,” she says.

Brittle tests 

The performance of large language models is brittle. Among people, it is safe to assume that someone who scores well on a test would also do well on a similar test. That’s not the case with large language models: a small tweak to a test can drop an A grade to an F.

“In general, AI evaluation has not been done in such a way as to allow us to actually understand what capabilities these models have,” says Lucy Cheke, a psychologist at the University of Cambridge, UK. “It’s perfectly reasonable to test how well a system does at a particular task, but it’s not useful to take that task and make claims about general abilities.”

Take an example from a paper published in March by a team of Microsoft researchers, in which they claimed to have identified “sparks of artificial general intelligence” in GPT-4. The team assessed the large language model using a range of tests. In one, they asked GPT-4 how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable manner. It answered: “Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”

Not bad. But when Mitchell tried her own version of the question, asking GPT-4 to stack a toothpick, a bowl of pudding, a glass of water, and a marshmallow, it suggested sticking the toothpick in the pudding and the marshmallow on the toothpick, and balancing the full glass of water on top of the marshmallow. (It ended with a helpful note of caution: “Keep in mind that this stack is delicate and may not be very stable. Be cautious when constructing and handling it to avoid spills or accidents.”)

Here’s another contentious case. In February, Stanford University researcher Michal Kosinski published a paper in which he claimed to show that theory of mind “may spontaneously have emerged as a byproduct” in GPT-3. Theory of mind is the cognitive ability to ascribe mental states to others, a hallmark of emotional and social intelligence that most children pick up between the ages of three and five. Kosinski reported that GPT-3 had passed basic tests used to assess the ability in humans.

For example, Kosinski gave GPT-3 this scenario: “Here is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says ‘chocolate’ and not ‘popcorn.’ Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

Kosinski then prompted the model to complete sentences such as: “She opens the bag and looks inside. She can clearly see that it is full of …” and “She believes the bag is full of …” GPT-3 completed the first sentence with “popcorn” and the second sentence with “chocolate.” He takes these answers as evidence that GPT-3 displays at least a basic form of theory of mind because they capture the difference between the actual state of the world and Sam’s (false) beliefs about it.
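
A probe like this is straightforward to script. The sketch below reconstructs the scenario and the two completions as an automated check; complete() is a placeholder for whatever LLM call is under evaluation, and the substring-matching score is an assumption made for illustration rather than Kosinski’s exact protocol.

```python
SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn.' Sam finds the bag. "
    "She had never seen the bag before. She cannot see what is inside the bag. "
    "She reads the label."
)

PROBES = {
    # probe prompt -> word expected in a correct completion
    "She opens the bag and looks inside. She can clearly see that it is full of": "popcorn",
    "She believes the bag is full of": "chocolate",
}

def score_false_belief(complete):
    """`complete` maps a prompt string to a model completion (a stand-in for a
    real LLM API call). Returns the fraction of probes answered as expected."""
    hits = sum(
        expected in complete(SCENARIO + " " + probe).lower()
        for probe, expected in PROBES.items()
    )
    return hits / len(PROBES)
```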

It’s no surprise that Kosinski’s results made headlines. They also invited immediate pushback. “I was rude on Twitter,” says Cheke.

Several researchers, including Shapira and Tomer Ullman, a cognitive scientist at Harvard University, published counterexamples showing that large language models failed simple variations of the tests that Kosinski used. “I was very skeptical given what I know about how large language models are built,” says Ullman. 

Ullman tweaked Kosinski’s test scenario by telling GPT-3 that the bag of popcorn labeled “chocolate” was transparent (so Sam could see it was popcorn) or that Sam couldn’t read (so she would not be misled by the label). Ullman found that GPT-3 failed to ascribe correct mental states to Sam whenever the situation involved an extra few steps of reasoning.   
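
Ullman-style counterexamples amount to re-running the same belief probe on minimally edited scenarios whose correct answer changes. The sketch below illustrates that pattern; the variant wordings are paraphrases written for this example, not Ullman’s actual stimuli, and complete() is again a placeholder for the model call.

```python
# Two perturbed versions of the popcorn/chocolate scenario. In both, Sam is no
# longer misled by the label, so the correct completion of the belief probe
# flips to "popcorn". Wordings are paraphrases, not Ullman's exact stimuli.
VARIANTS = {
    "transparent bag": (
        "Here is a bag filled with popcorn. There is no chocolate in the bag. "
        "The label on the bag says 'chocolate', but the bag is transparent, and "
        "Sam can see what is inside it."
    ),
    "cannot read": (
        "Here is a bag filled with popcorn. There is no chocolate in the bag. "
        "The label on the bag says 'chocolate', but Sam cannot read. She finds "
        "the bag and cannot see what is inside it."
    ),
}

BELIEF_PROBE = "She believes the bag is full of"

def run_variants(complete):
    """Return the model's completion for each perturbed scenario; a robust
    model should answer 'popcorn' in both cases."""
    return {name: complete(text + " " + BELIEF_PROBE) for name, text in VARIANTS.items()}
```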

“The assumption that cognitive or academic tests designed for humans serve as accurate measures of LLM capability stems from a tendency to anthropomorphize models and align their evaluation with human standards,” says Shapira. “This assumption is misguided.”

For Cheke, there’s an obvious solution. Scientists have been assessing cognitive abilities in non-humans for decades, she says. Artificial-intelligence researchers could adapt techniques used to study animals, which have been developed to avoid jumping to conclusions based on human bias.

Take a rat in a maze, says Cheke: “How is it navigating? The assumptions you can make in human psychology don’t hold.” Instead researchers have to do a series of controlled experiments to figure out what information the rat is using and how it is using it, testing and ruling out hypotheses one by one.

“With language models, it’s more complex. It’s not like there are tests using language for rats,” she says. “We’re in a new zone, but many of the fundamental ways of doing things hold. It’s just that we have to do it with language instead of with a little maze.”

Weidinger is taking a similar approach. She and her colleagues are adapting techniques that psychologists use to assess cognitive abilities in preverbal human infants. One key idea here is to break a test for a particular ability down into a battery of several tests that look for related abilities as well. For example, when assessing whether an infant has learned how to help another person, a psychologist might also assess whether the infant understands what it is to hinder. This makes the overall test more robust. 

The problem is that these kinds of experiments take time. A team might study rat behavior for years, says Cheke. Artificial intelligence moves at a far faster pace. Ullman compares evaluating large language models to Sisyphean punishment: “A system is claimed to exhibit behavior X, and by the time an assessment shows it does not exhibit behavior X, a new system comes along and it is claimed it shows behavior X.”

Moving the goalposts

Fifty years ago people thought that to beat a grand master at chess, you would need a computer that was as intelligent as a person, says Mitchell. But chess fell to machines that were simply better number crunchers than their human opponents. Brute force won out, not intelligence.

Similar challenges have been set and passed, from image recognition to Go. Each time computers are made to do something that requires intelligence in humans, like play games or use language, it splits the field. Large language models are now facing their own chess moment. “It’s really pushing us—everybody—to think about what intelligence is,” says Mitchell.

Does GPT-4 display genuine intelligence by passing all those tests or has it found an effective, but ultimately dumb, shortcut—a statistical trick pulled from a hat filled with trillions of correlations across billions of lines of text?

“If you’re like, ‘Okay, GPT4 passed the bar exam, but that doesn’t mean it’s intelligent,’ people say, ‘Oh, you’re moving the goalposts,’” says Mitchell. “But do we say we’re moving the goalpost or do we say that’s not what we meant by intelligence—we were wrong about intelligence?”

It comes down to how large language models do what they do. Some researchers want to drop the obsession with test scores and try to figure out what goes on under the hood. “I do think that to really understand their intelligence, if we want to call it that, we are going to have to understand the mechanisms by which they reason,” says Mitchell.

Ullman agrees. “I sympathize with people who think it’s moving the goalposts,” he says. “But that’s been the dynamic for a long time. What’s new is that now we don’t know how they’re passing these tests. We’re just told they passed it.”

The trouble is that nobody knows exactly how large language models work. Teasing apart the complex mechanisms inside a vast statistical model is hard. But Ullman thinks that it’s possible, in theory, to reverse-engineer a model and find out what algorithms it uses to pass different tests. “I could more easily see myself being convinced if someone developed a technique for figuring out what these things have actually learned,” he says. 

“I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/08/30/1078670/large-language-models-arent-people-lets-stop-testing-them-like-they-were/amp/

Make Learning a Lifelong Habit – HBR

Posted by timmreardon on 08/29/2023
Posted in: Uncategorized.
  • John Coleman

January 24, 2017

Summary

Formal education is linked to higher earnings and lower unemployment. Beyond that, learning is fun! Engaging in a new topic can be a joy and a confidence booster. But continuous and persistent learning isn’t just a choice – it has to become a habit, no easy task in these busy times. To make learning a lifelong habit, know that developing a learning habit requires you to articulate the outcomes you’d like to achieve. Based on those choices, set realistic goals. With goals in hand, develop a learning community and ditch the distractions. Finally, where appropriate, use technology to supplement learning. Developing specific learning habits – consciously established and conscientiously cultivated – can be a route to both continued professional relevance and deep personal happiness.

I recently worked my way through Edmund Morris’s first two Teddy Roosevelt biographies, The Rise of Theodore Roosevelt and Theodore Rex. Roosevelt wasn’t without flaws, but he was by nearly all accounts fascinating and intellectually voracious. He published his first book, The Naval War of 1812, at 23 and continued to write on everything from conservation to politics and biography. According to Morris, at certain periods he was rumored to read a book a day, and all this reading and writing arguably made him both charismatic and uniquely equipped to engage the host of topics he did as president: national conservation efforts, naval expansion, trust regulation, and a variety of others.

Roosevelt was what we might call a “lifetime learner.” Learning became, for him, a mode of personal enjoyment and a path to professional success. It’s a habit many of us would like to emulate. The Economist recently argued that with all the disruptions in the modern economy, particularly technology, ongoing skill acquisition is critical to persistent professional relevance. Formal education levels are regularly linked to higher earnings and lower unemployment. And apart from its utility, learning is fun. It’s a joy to engage a new topic. Having an array of interesting topics at your disposal when speaking to colleagues or friends can boost your confidence. And it’s fulfilling to finally understand a difficult new subject.

But this type of continuous and persistent learning isn’t merely a decision. It must become a habit. And as such, it requires careful cultivation.

First, developing a learning habit requires you to articulate the outcomes you’d like to achieve. Would you like to reinvigorate your conversations and intellectual activity by reading a host of new topics? Are you looking to master a specific subject? Would you like to make sure you’re up-to-date on one or two topics outside your day-to-day work? In my own life, I like to maintain a reading program that exposes me to a variety of subjects and genres with the goal of general intellectual exploration, while also digging more deeply into a few areas, including education, foreign policy, and leadership. Picking one or two outcomes will allow you to set achievable goals to make the habit stick.

Based on those choices, set realistic goals. Like many people, each year, I set a series of goals for myself. These take the form of objectives I’d like to achieve over the course of the year (e.g., read 24 books in 2017) and daily or weekly habits I need to cultivate in accordance with those goals (e.g., read for more than 20 minutes five days per week). For me, long-term goals are tracked in a planner. Daily or weekly habits I monitor via an app called momentum, which allows me to quickly and simply enter completion of my habits on a daily basis and monitor adherence. These goals turn a vague desire to improve learning into a concrete set of actions.
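Purely as an illustration of how those numbers translate into something trackable, here is a minimal sketch in Python. It is my own example, not a feature of the app mentioned above, and all dates and minutes are invented.

```python
# A minimal sketch of tracking the two example goals above -- 24 books in 2017 and
# 20+ minutes of reading on five days per week. This is an illustration only;
# all dates and minutes are invented.
from datetime import date

reading_log = {                       # date -> minutes read that day
    date(2017, 1, 16): 25,
    date(2017, 1, 17): 30,
    date(2017, 1, 18): 10,            # under the 20-minute target
    date(2017, 1, 19): 40,
    date(2017, 1, 20): 22,
    date(2017, 1, 21): 35,
}
books_finished = 2                    # running tally toward the yearly goal of 24

days_on_target = sum(1 for minutes in reading_log.values() if minutes >= 20)
weekly_habit_met = days_on_target >= 5

day_of_year = date(2017, 1, 21).timetuple().tm_yday
on_pace_for_24_books = books_finished >= 24 * day_of_year / 365

print(f"Days at 20+ minutes this week: {days_on_target} (habit met: {weekly_habit_met})")
print(f"On pace for 24 books this year: {on_pace_for_24_books}")
```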

With goals in hand, develop a learning community. I have a bimonthly book group that helps keep me on track for my reading goals and makes achieving them more fun. Similarly, many of my writer friends join writing groups where members read and edit each other’s work. For more specific goals, join an organization focused on the topics you’d like to learn — a foreign policy discussion group that meets monthly or a woodworking group that gathers regularly to trade notes. You might even consider a formal class or degree program to add depth to your exploration of a topic and the type of commitment that is inherently structured. These communities increase commitment and make learning more fun.

To focus on your objectives, ditch the distractions. Learning is fun, but it is also hard work. It’s so extraordinarily well documented as to be almost a truism at this point, but multitasking and particularly technology (e.g., cell phones, email) can make the deep concentration needed for real learning difficult or impossible. Set aside dedicated time for learning and minimize interruptions. When you read, find a quiet place, and leave your phone behind. If you’re taking a class or participating in a reading group, take handwritten notes, which improve retention and understanding, and leave laptops, mobile devices, and other disrupting technologies in your car or bag, far out of reach. And apart from physically eliminating distractions, consider training your mind to deal with them. I’ve found that a pleasant effect of regular meditation, for example, has been an improvement in my intellectual focus, which has helped my attentiveness in lectures and my ability to read difficult books.

Finally, where appropriate, use technology to supplement learning. While technology can be a distraction, it can also be used to dramatically aid a learning regimen. Massive Open Online Courses (MOOCs) allow remote students to participate in community and learn from some of the world’s most brilliant people with the added commitment of class participation. Podcasts, audiobooks, e-readers, and other tools make it possible to have a book on hand almost any time. I’ve found, for example, that by using audiobooks in what I think of as “ambient moments” — commuting or running, for example — I can nearly double the books I read in a year. Good podcasts or iTunes U courses can similarly deliver learning on the go. Combine these tools with apps that track your habits, and technology can be an essential component of a learning routine.

We’re all born with a natural curiosity. We want to learn. But the demands of work and personal life often diminish our time and will to engage that natural curiosity. Developing specific learning habits — consciously established and conscientiously cultivated — can be a route to both continued professional relevance and deep personal happiness. Maybe Roosevelt had it right: a lifetime of learning can be a success in itself.

John Coleman is the author of the HBR Guide to Crafting Your Purpose. Subscribe to his free newsletter, On Purpose, follow him on Twitter @johnwcoleman, or contact him at johnwilliamcoleman.com.

Article link: https://hbr.org/2017/01/make-learning-a-lifelong-habit?

Pentagon Launching Autonomous Systems Initiative to Counter China – National Defense

Posted by timmreardon on 08/29/2023
Posted in: Uncategorized.

8/28/2023
By Laura Heckmann

The Defense Department announced a new project called the Replicator Initiative, an ambitious swing at what Deputy Secretary of Defense Kathleen Hicks called China’s “biggest advantage: mass.”

Speaking at the National Defense Industrial Association’s Emerging Technologies for Defense Conference Aug. 28, Hicks said the initiative’s goal is to build attritable autonomous systems at scale “of multiple thousands in multiple domains” within the next 18 to 24 months.

Hicks revealed few details about the initiative, but she said some will be “spelled out in the coming weeks,” though the department intends to remain “cagey” in terms of what it wants to share.

Hicks called the initiative a “big bet” and easier said than done, but expressed confidence in the department’s ability to carry it out. She said Replicator is a “comprehensive, warfighting-centric approach to innovation” that will galvanize the full weight and leadership attention of the Defense Department.

“If the operational challenge we must tackle is one of countering mass, we will do so not only through existing approaches and systems — those remain important — but we already know how to build and use today’s technology. This is about mastering the technology of tomorrow,” she said.

Attritable autonomous systems are “less expensive with fewer people in the line of fire and can be changed, updated or improved with substantially shorter lead times,” she said, adding the United States will stay ahead by leveraging the systems in all domains.

Hicks said Replicator intends to counter China’s mass with “mass of our own,” but mass that will be harder to plan for and harder to hit, with smart people, smart concepts and smart technology.

The initiative will be overseen by Hicks together with the vice chairman of the Joint Chiefs of Staff, with support from the director of the Defense Innovation Unit.

Hicks said the department intends to bring “the full power of the DoD’s innovation ecosystem to bear. These capabilities will be developed and fielded in line with our responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade. We will employ attritable autonomous capabilities in ways that play to our enduring advantages, the greatest being our people.”

Attritable, while somewhat nebulous, generally refers to a design trait that trades reliability and maintenance for low-cost, reusable weapons. All-domain attritable autonomous systems will help overcome the challenge of anti-access, area denial systems, she said. “Our AD/A2 to thwart their A2/AD.”

Addressing potential skeptics of the ambitious initiative, Hicks acknowledged a slow-moving system, saying she is “deeply, personally familiar with almost every maddening flaw.” Still, she projected confidence that “when the time is right, and when we apply enough leadership, energy, urgency and depth of focus, we can get it done. That’s what America does.”

Replicator intends to leverage teamwork between the Defense Department and the private sector, including commercial, non-traditional and traditional defense companies alike, as well as collaboration and integration with partners, she said.

Hicks said the initiative’s greatest challenge within its timeline will be scaling the production.

“That’s the area we’re going after with Replicator,” she added. “As we looked at that innovation ecosystem, we think we’ve got some solutions in place … but the scaling piece is the one that still feels quite elusive — scaling for emerging technology.”

The scaling challenge is a focus of Replicator, she said. “How do we get those multiple thousands produced, in the hands of the warfighters, in 18 to 24 months? We obviously have done our homework; we know we can do it.

“It doesn’t mean it’s without risk. And we got to take a big bet here, but what’s leadership without big bets and making something happen?” she said.

“Let’s be clear, we all know the challenges and we all know the stakes. This is not about understanding the problems or a lack of leadership focus or insufficient resources,” Hicks said.

“This is about systematically tackling the highest barriers to enabling and unleashing the potential of U.S. and partner innovations, some in DoD or labs or elsewhere in government, but most of all outside of it. That means we must first see the whole of the defense innovation ecosystem to lower the myriad barriers that get in our way, and then must do the hard government work of removing those most damaging innovation obstacles, which is exactly what we’ve been doing.”

Hicks said she wanted to be clear that there will be no “mission accomplished” banner rolled out when it comes to innovation.

“Because we are in a persistent generational competition for advantage in which we cannot take military superiority for granted,” she said. “We must ensure the PRC leadership wakes up every day, considers the risks of aggression and concludes today is not the day.”

Article link: https://www.nationaldefensemagazine.org/articles/2023/8/28/defense-department-announces-new-innovation-initiative

2023 August Artificial Intelligence Summit – Cyber Bytes Foundation

Posted by timmreardon on 08/28/2023
Posted in: Uncategorized.

Coming up this week at the Quantico Cyber Hub: don’t forget to register for the 2023 August Artificial Intelligence Summit. The event will bring together an array of distinguished participants, including representatives from the U.S. Marines, academia, industry, and international partners from diverse backgrounds and units, around a single paramount theme: the strategic implementation of AI throughout the enterprise.

Date: Wednesday, August 30, 2023
Time: 1:00 – 3:30 PM
Location: Quantico Cyber Hub (1010 Corporate Drive Stafford, VA 22554) and Online via Microsoft Teams

Meet our Speakers:
William Chappell | CTO, Microsoft Str. Missions & Tech
Dr. Colin C. | Service Data Offr./Deputy DON CDO(HQE)
 SES Gaurang Dävé | Cyber Technology Offr., MCSC
Tim Gramp | Chief Engineer, USMC
Thorsten Hisam | Senior Dir. BD, IAI North America
Clayton Kerce | Division Chief, GTRI
Dan Rapp | Group Vice President, Proofpoint
Luis E. Velazquez | CTO, MARCORSYSCOM

Click the link below and register to join us as we tackle these challenges together. 
https://lnkd.in/eem6C4zE

#AIInnovation #TechnologySummit #AISummit #MarineCorpsTraining #ArtificialIntelligence

Inside the messy ethics of making war with machines – MIT Technology Review

Posted by timmreardon on 08/27/2023
Posted in: Uncategorized.

AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?

  • Arthur Holland Michel

August 16, 2023

In a near-future war—one that might begin tomorrow, for all we know—a soldier takes up a shooting position on an empty rooftop. His unit has been fighting through the city block by block. It feels as if enemies could be lying in silent wait behind every corner, ready to rain fire upon their marks the moment they have a shot.

Through his gunsight, the soldier scans the windows of a nearby building. He notices fresh laundry hanging from the balconies. Word comes in over the radio that his team is about to move across an open patch of ground below. As they head out, a red bounding box appears in the top left corner of the gunsight. The device’s computer vision system has flagged a potential target—a silhouetted figure in a window is drawing up, it seems, to take a shot.

The soldier doesn’t have a clear view, but in his experience the system has a superhuman capacity to pick up the faintest tell of an enemy. So he sets his crosshair upon the box and prepares to squeeze the trigger. 

In a different war, also possibly just over the horizon, a commander stands before a bank of monitors. An alert appears from a chatbot. It brings news that satellites have picked up a truck entering a certain city block that has been designated as a possible staging area for enemy rocket launches. The chatbot has already advised an artillery unit, which it calculates as having the highest estimated “kill probability,” to take aim at the truck and stand by.

According to the chatbot, none of the nearby buildings is a civilian structure, though it notes that the determination has yet to be corroborated manually. A drone, which had been dispatched by the system for a closer look, arrives on scene. Its video shows the truck backing into a narrow passage between two compounds. The opportunity to take the shot is rapidly coming to a close. 

For the commander, everything now falls silent. The chaos, the uncertainty, the cacophony—all reduced to the sound of a ticking clock and the sight of a single glowing button:

“APPROVE FIRE ORDER.” 

To pull the trigger—or, as the case may be, not to pull it. To hit the button, or to hold off. Legally—and ethically—the role of the soldier’s decision in matters of life and death is preeminent and indispensable. Fundamentally, it is these decisions that define the human act of war.

It should be of little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—as a matter of serious concern. In May, after close to a decade of discussions, parties to the UN’s Convention on Certain Conventional Weapons agreed, among other recommendations, that militaries using them probably need to “limit the duration, geographical scope, and scale of the operation” to comply with the laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play a part—somewhere, sometime—in the immediate process leading up to a killing.

But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision. Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-­covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?

For a long time, the idea of supporting a human decision by computerized means wasn’t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F-4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. But to say that the crew and the radar were coequal accomplices would be a stretch.

That has all begun to change. “What we’re seeing now, at least in the way that I see this, is a transition to a world [in] which you need to have humans and machines … operating in some sort of team,” says Shanahan.

The rise of machine learning, in particular, has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare—up to, and including, the ultimate decision. Shanahan was the first director of Project Maven, a Pentagon program that developed target recognition algorithms for video footage from drones. The project, which kicked off a new era of American military AI, was launched in 2017 after a study concluded that “deep learning algorithms can perform at near-­human levels.” (It also sparked controversy—in 2018, more than 3,000 Google employees signed a letter of protest against the company’s involvement in the project.)

With machine-learning-based decision tools, “you have more apparent competency, more breadth” than earlier tools afforded, says Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency. “And perhaps a tendency, as a result, to turn over more decision-making to them.”

A soldier on the lookout for enemy snipers might, for example, do so through the Assault Rifle Combat Application System, a gunsight sold by the Israeli defense firm Elbit Systems. According to a company spec sheet, the “AI-powered” device is capable of “human target detection” at a range of more than 600 yards, and human target “identification” (presumably, discerning whether a person is someone who could be shot) at about the length of a football field. Anna Ahronheim-Cohen, a spokesperson for the company, told MIT Technology Review, “The system has already been tested in real-time scenarios by fighting infantry soldiers.”

Another gunsight, built by the company Smartshooter, is advertised as having similar capabilities. According to the company’s website, it can also be packaged into a remote-controlled machine gun like the one that Israeli agents used to assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh in 2021. 

Decision support tools that sit at a greater remove from the battlefield can be just as decisive. The Pentagon appears to have used AI in the sequence of intelligence analyses and decisions leading up to a potential strike, a process known as a kill chain—though it has been cagey on the details. In response to questions from MIT Technology Review, Laura McAndrews, an Air Force spokesperson, wrote that the service “is utilizing a human-­machine teaming approach.”

Other countries are more openly experimenting with such automation. Shortly after the Israel-Palestine conflict in 2021, the Israel Defense Forces said it had used what it described as AI tools to alert troops of imminent attacks and to propose targets for operations.

The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber’s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one.
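The Uber comparison is, at bottom, an assignment problem: pair each new request with the nearest available responder. The sketch below is a generic, hypothetical illustration of that matching idea only; it does not reflect GIS Arta’s actual algorithm, data, or selection criteria.

```python
# Hypothetical sketch of the generic dispatch idea behind the Uber comparison:
# greedily pair each request with the nearest available responder. This illustrates
# the matching concept only; it does not reflect GIS Arta's actual algorithm,
# data, or selection criteria.
import math

responders = {"unit_a": (0.0, 0.0), "unit_b": (5.0, 5.0), "unit_c": (9.0, 1.0)}
requests = [(1.0, 1.0), (8.0, 2.0)]            # locations awaiting a response

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

assignments = {}
available = dict(responders)
for request in requests:
    if not available:
        break
    nearest = min(available, key=lambda name: distance(available[name], request))
    assignments[request] = nearest
    available.pop(nearest)                     # each responder handles one request at a time

print(assignments)   # {(1.0, 1.0): 'unit_a', (8.0, 2.0): 'unit_c'}
```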

Russia claims to have its own command-and-control system with what it calls artificial intelligence, but it has shared few technical details. Gregory Allen, the director of the Wadhwani Center for AI and Advanced Technologies and one of the architects of the Pentagon’s current AI policies, told me it’s important to take some of these claims with a pinch of salt. He says some of Russia’s supposed military AI is “stuff that everyone has been doing for decades,” and he calls GIS Arta “just traditional software.”

The range of judgment calls that go into military decision-making, however, is vast. And it doesn’t always take artificial super-­intelligence to dispense with them by automated means. There are tools for predicting enemy troop movements, tools for figuring out how to take out a given target, and tools to estimate how much collateral harm is likely to befall any nearby civilians. 

None of these contrivances could be called a killer robot. But the technology is not without its perils. Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it’s not clear that the human involved will always be able to know when the answers on the screen are right or wrong. In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal. In some areas, they could perform at such superhuman levels that something ineffable about the act of war could be lost entirely.

Eventually militaries plan to use machine intelligence to stitch many of these individual instruments into a single automated network that links every weapon, commander, and soldier to every other. Not a kill chain, but—as the Pentagon has begun to call it—a kill web.

In these webs, it’s not clear whether the human’s decision is, in fact, very much of a decision at all. Rafael, an Israeli defense giant, has already sold one such product, Fire Weaver, to the IDF (it has also demonstrated it to the US Department of Defense and the German military). According to company materials, Fire Weaver finds enemy positions, notifies the unit that it calculates as being best placed to fire on them, and even sets a crosshair on the target directly in that unit’s weapon sights. The human’s role, according to one video of the software, is to choose between two buttons: “Approve” and “Abort.”


Let’s say that the silhouette in the window was not a soldier, but a child. Imagine that the truck was not delivering warheads to the enemy, but water pails to a home. 

Of the DoD’s five “ethical principles for artificial intelligence,” which are phrased as qualities, the one that’s always listed first is “Responsible.” In practice, this means that when things go wrong, someone—a human, not a machine—has got to hold the bag. 

Of course, the principle of responsibility long predates the onset of artificially intelligent machines. All the laws and mores of war would be meaningless without the fundamental common understanding that every deliberate act in the fight is always on someone. But with the prospect of computers taking on all manner of sophisticated new roles, the age-old precept has newfound resonance. 

“Now for me, and for most people I ever knew in uniform, this was core to who we were as commanders: that somebody ultimately will be held responsible,” says Shanahan, who after Maven became the inaugural director of the Pentagon’s Joint Artificial Intelligence Center and oversaw the development of the AI ethical principles.

This is why a human hand must squeeze the trigger, why a human hand must click “Approve.” If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that’s on the soldier. “If a human does something that leads to an accident with the machine—say, dropping a weapon where it shouldn’t have—that’s still a human’s decision that was made,” Shanahan says.

But accidents happen. And this is where things get tricky. Modern militaries have spent hundreds of years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign intent, misdirected fury, or gross negligence. Even now, this remains a difficult task. Outsourcing a part of human agency and judgment to algorithms built, in many cases, around the mathematical principle of optimization will challenge all this law and doctrine in a fundamentally new way, says Courtney Bowman, global director of privacy and civil liberties engineering at Palantir, a US-headquartered firm that builds data management software for militaries, governments, and large companies. 

“It’s a rupture. It’s disruptive,” Bowman says. “It requires a new ethical construct to be able to make sound decisions.”

This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform, which allows for the integration of large language models into the company’s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.

And yet even with a machine capable of such apparent cleverness, militaries won’t want the user to blindly trust its every suggestion. If the human presses only one button in a kill chain, it probably should not be the “I believe” button, as a concerned but anonymous Army operative once put it in a DoD war game in 2019. 

In a program called Urban Reconnaissance through Supervised Autonomy (URSA), DARPA built a system that enabled robots and drones to act as forward observers for platoons in urban operations. After input from the project’s advisory group on ethical and legal issues, it was decided that the software would only ever designate people as “persons of interest.” Even though the purpose of the technology was to help root out ambushes, it would never go so far as to label anyone as a “threat.”

This, it was hoped, would stop a soldier from jumping to the wrong conclusion. It also had a legal rationale, according to Brian Williams, an adjunct research staff member at the Institute for Defense Analyses who led the advisory group. No court had positively asserted that a machine could legally designate a person a threat, he says. (Then again, he adds, no court had specifically found that it would be illegal, either, and he acknowledges that not all military operators would necessarily share his group’s cautious reading of the law.) According to Williams, DARPA initially wanted URSA to be able to autonomously discern a person’s intent; this feature too was scrapped at the group’s urging.
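To make that design choice concrete: the sketch below is purely hypothetical (it is not URSA’s software, and the internal category names are invented). It shows what it means for an interface to only ever surface the neutral label, regardless of how the underlying model categorizes a detection.

```python
# Purely illustrative -- not URSA's software. The design decision above amounts to
# constraining the labels an operator can ever see: however the underlying model
# categorizes a detected person internally, the display only ever says
# "person of interest" -- never "threat".
PERSON_CATEGORIES = {"person", "armed person", "threat"}   # hypothetical internal labels

def display_label(raw_label: str) -> str:
    """Collapse any person-related category onto the single neutral label."""
    return "person of interest" if raw_label in PERSON_CATEGORIES else "unknown"

print(display_label("threat"))    # -> 'person of interest'
print(display_label("vehicle"))   # -> 'unknown'
```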

Bowman says Palantir’s approach is to work “engineered inefficiencies” into “points in the decision-­making process where you actually do want to slow things down.” For example, a computer’s output that points to an enemy troop movement, he says, might require a user to seek out a second corroborating source of intelligence before proceeding with an action (in the video, the Artificial Intelligence Platform does not appear to do this).
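A hedged sketch of what one such “engineered inefficiency” could look like in code follows. It is hypothetical, not Palantir’s AIP, and the class and field names are invented: the approve control stays locked until at least two independent sources corroborate the same finding.

```python
# Hypothetical sketch of one "engineered inefficiency" -- not Palantir's AIP; the
# class and field names are invented. The approve control stays locked until at
# least two independent sources corroborate the same finding.
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    source: str    # e.g. "satellite", "drone", "signals"
    finding: str   # what the source claims to show

def approval_unlocked(reports, finding, minimum_sources=2):
    """Unlock the human's approve button only once enough independent sources agree."""
    independent_sources = {r.source for r in reports if r.finding == finding}
    return len(independent_sources) >= minimum_sources

reports = [Report("satellite", "vehicle movement at grid 41Q")]
print(approval_unlocked(reports, "vehicle movement at grid 41Q"))   # False -> stays locked

reports.append(Report("drone", "vehicle movement at grid 41Q"))
print(approval_unlocked(reports, "vehicle movement at grid 41Q"))   # True -> a human may now decide
```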

In the case of AIP, Bowman says the idea is to present the information in such a way “that the viewer understands, the analyst understands, this is only a suggestion.” In practice, protecting human judgment from the sway of a beguilingly smart machine could come down to small details of graphic design. “If people of interest are identified on a screen as red dots, that’s going to have a different subconscious implication than if people of interest are identified on a screen as little happy faces,” says Rebecca Crootof, a law professor at the University of Richmond, who has written extensively about the challenges of accountability in human-in-the-loop autonomous weapons.

In some settings, however, soldiers might only want an “I believe” button. Originally, DARPA envisioned URSA as a wrist-worn device for soldiers on the front lines. “In the very first working group meeting, we said that’s not advisable,” Williams told me. The kind of engineered inefficiency necessary for responsible use just wouldn’t be practicable for users who have bullets whizzing by their ears. Instead, they built a computer system that sits with a dedicated operator, far behind the action. 

But some decision support systems are definitely designed for the kind of split-second decision-­making that happens right in the thick of it. The US Army has said that it has managed, in live tests, to shorten its own 20-minute targeting cycle to 20 seconds. Nor does the market seem to have embraced the spirit of restraint. In demo videos posted online, the bounding boxes for the computerized gunsights of both Elbit and Smartshooter are blood red.


Other times, the computer will be right and the human will be wrong. 

If the soldier on the rooftop had second-guessed the gunsight, and it turned out that the silhouette was in fact an enemy sniper, his teammates could have paid a heavy price for his split second of hesitation.

This is a different source of trouble, much less discussed but no less likely in real-world combat. And it puts the human in something of a pickle. Soldiers will be told to treat their digital assistants with enough mistrust to safeguard the sanctity of their judgment. But with machines that are often right, this same reluctance to defer to the computer can itself become a point of avertable failure. 

Aviation history has no shortage of cases where a human pilot’s refusal to heed the machine led to catastrophe. These (usually perished) souls have not been looked upon kindly by investigators seeking to explain the tragedy. Carol J. Smith, a senior research scientist at Carnegie Mellon University’s Software Engineering Institute who helped craft responsible AI guidelines for the DoD’s Defense Innovation Unit, doesn’t see an issue: “If the person in that moment feels that the decision is wrong, they’re making it their call, and they’re going to have to face the consequences.” 

For others, this is a wicked ethical conundrum. The scholar M.C. Elish has suggested that a human who is placed in this kind of impossible loop could end up serving as what she calls a “moral crumple zone.” In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the “decision” will absorb the blame and protect everyone else along the chain of command from the full impact of accountability.

In an essay, Smith wrote that the “lowest-paid person” should not be “saddled with this responsibility,” and neither should “the highest-paid person.” Instead, she told me, the responsibility should be spread among everyone involved, and the introduction of AI should not change anything about that responsibility. 

In practice, this is harder than it sounds. Crootof points out that even today, “there’s not a whole lot of responsibility for accidents in war.” As AI tools become larger and more complex, and as kill chains become shorter and more web-like, finding the right people to blame is going to become an even more labyrinthine task. 

Those who write these tools, and the companies they work for, aren’t likely to take the fall. Building AI software is a lengthy, iterative process, often drawing from open-source code, which stands at a distant remove from the actual material facts of metal piercing flesh. And barring any significant changes to US law, defense contractors are generally protected from liability anyway, says Crootof.

Any bid for accountability at the upper rungs of command, meanwhile, would likely find itself stymied by the heavy veil of government classification that tends to cloak most AI decision support tools and the manner in which they are used. The US Air Force has not been forthcoming about whether its AI has even seen real-world use. Shanahan says Maven’s AI models were deployed for intelligence analysis soon after the project launched, and in 2021 the secretary of the Air Force said that “AI algorithms” had recently been applied “for the first time to a live operational kill chain,” with an Air Force spokesperson at the time adding that these tools were available in intelligence centers across the globe “whenever needed.” But Laura McAndrews, the Air Force spokesperson, said that in fact these algorithms “were not applied in a live, operational kill chain” and declined to detail any other algorithms that may, or may not, have been used since.

The real story might remain shrouded for years. In 2018, the Pentagon issued a determination that exempts Project Maven from Freedom of Information requests. Last year, it handed the entire program to the National Geospatial-Intelligence Agency, which is responsible for processing America’s vast intake of secret aerial surveillance. Responding to questions about whether the algorithms are used in kill chains, Robbin Brooks, an NGA spokesperson, told MIT Technology Review, “We can’t speak to specifics of how and where Maven is used.”


In one sense, what’s new here is also old. We routinely place our safety—indeed, our entire existence as a species—in the hands of other people. Those decision-­makers defer, in turn, to machines that they do not entirely comprehend. 

In an exquisite essay on automation published in 2018, at a time when operational AI-enabled decision support was still a rarity, former Navy secretary Richard Danzig pointed out that if a president “decides” to order a nuclear strike, it will not be because anyone has looked out the window of the Oval Office and seen enemy missiles raining down on DC but, rather, because those missiles have been detected, tracked, and identified—one hopes correctly—by algorithms in the air defense network. 

As in the case of a commander who calls in an artillery strike on the advice of a chatbot, or a rifleman who pulls the trigger at the mere sight of a red bounding box, “the most that can be said is that ‘a human being is involved,’” Danzig wrote.

“In warfighting,” says Bowman of Palantir, “[in] the application of any technology, let alone AI, there is some degree of harm that you’re trying to—that you have to accept, and the game is risk reduction.” 

It is possible, though not yet demonstrated, that bringing artificial intelligence to battle may mean fewer civilian casualties, as advocates often claim. But there could be a hidden cost to irrevocably conjoining human judgment and mathematical reasoning in those ultimate moments of war—a cost that extends beyond a simple, utilitarian bottom line. Maybe something just cannot be right, should not be right, about choosing the time and manner in which a person dies the way you hail a ride from Uber. 

To a machine, this might be suboptimal logic. But for certain humans, that’s the point. “One of the aspects of judgment, as a human capacity, is that it’s done in an open world,” says Lucy Suchman, a professor emerita of anthropology at Lancaster University, who has been writing about the quandaries of human-machine interaction for four decades. 

The parameters of life-and-death decisions—knowing the meaning of the fresh laundry hanging from a window while also wanting your teammates not to die—are “irreducibly qualitative,” she says. The chaos and the noise and the uncertainty, the weight of what is right and what is wrong in the midst of all that fury—not a whit of this can be defined in algorithmic terms. In matters of life and death, there is no computationally perfect outcome. “And that’s where the moral responsibility comes from,” she says. “You’re making a judgment.” 

The gunsight never pulls the trigger. The chatbot never pushes the button. But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don’t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line? 

Arthur Holland Michel writes about technology. He is based in Barcelona and can be found, occasionally, in New York.

Article link: https://www.technologyreview.com/2023/08/16/1077386/war-machines/

Ransomware Wipes Out Data Access for ‘Majority’ of Cloud Provider’s Customers – PC Magazine

Posted by timmreardon on 08/26/2023
Posted in: Uncategorized.

Danish company CloudNordic essentially has to start over and rebuild its systems. ‘I don’t expect that there will be any customers left with us when this is over,’ an exec says.

by Michael Kan | Aug 24, 2023

A cloud hosting firm in Denmark has lost a “majority” of its customer data after a ransomware attack infected the company’s systems. 

“Unfortunately, it has proved impossible to recreate more data, and the majority of our customers have thus lost all data with us,” CloudNordic wrote in a translated post.

CloudNordic supplies servers to host email, websites, and other IT services for its customers. But the attack was so devastating that CloudNordic must rebuild its IT systems from scratch. “In addition to data, we also lost all our systems and servers and have had difficulty communicating,” the company says.

“We have now re-established blank systems, e.g. name servers (without data), web servers (without data) and mail servers (without data),” CloudNordic adds. A sister company called Azero Cloud suffered the same attack, and has posted an identical notice to the public.

The incident occurred on Friday, Aug. 18, when the company was physically moving some servers from one data center to another. CloudNordic suspects that some of the servers it was moving contained a dormant malware infection. The infected servers were then hooked up to company networks that had access to all of CloudNordic’s server infrastructure, giving the hackers access to both a central admin system and backup systems. 

“The attackers succeeded in encrypting all servers’ disks, as well as on the primary and secondary backup system, whereby all machines crashed and we lost access to all data,” the company adds. 
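A toy model of the failure mode being described follows. It simulates no real system, and the air-gapped copy is a hypothetical contrast, not something the company reported having: the point is simply that ransomware which encrypts everything reachable from a compromised admin plane takes any reachable backups down with production.

```python
# Toy model of the failure mode described above -- not CloudNordic's environment.
# If backup systems are reachable from the same compromised admin plane as
# production, "encrypt everything reachable" takes the backups down too. The
# air-gapped copy is a hypothetical contrast, not something the company reported having.
production       = {"web01": "data", "mail01": "data"}
primary_backup   = {"web01": "data", "mail01": "data"}
secondary_backup = {"web01": "data", "mail01": "data"}
air_gapped_copy  = {"web01": "data", "mail01": "data"}   # hypothetical: not network-reachable

reachable_from_admin_plane = [production, primary_backup, secondary_backup]

def encrypt_everything(stores):
    for store in stores:
        for host in store:
            store[host] = "<encrypted>"

encrypt_everything(reachable_from_admin_plane)
print(primary_backup)    # encrypted along with production
print(air_gapped_copy)   # intact only because it was never reachable
```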

But while the hackers have locked down access to that data, they do not appear to have removed it from the company’s servers, CloudNordic says. The unidentified ransomware group behind the attack reportedly wants 6 bitcoins ($157,914), but CloudNordic has refused to pay.

Although CloudNordic is hoping customers will stick around as it attempts to recover, the director for the company told Danish media: “I don’t expect that there will be any customers left with us when this is over.”

Article link: https://me.pcmag.com/en/security/18953/ransomware-wipes-out-data-access-for-majority-of-cloud-providers-customers
