Tech giant IBM is launching a counterstrike in the industry’s suddenly hot AI fight with today’s announcement of Watsonx, Axios’ Ryan Heath reports.
The big picture: Business-focused IBM claims its latest AI offering, set to launch in July, provides more accurate answers and takes a more responsible approach than rivals.
Microsoft, OpenAI and Google are rushing to lock down potentially massive new consumer markets for generative AI.
IBM is instead leaning into helping other companies implement AI via a “data model factory” that offers IBM clients products tuned for their specialties in domains like language, code, chemistry and geospatial data.
Watsonx, in partnership with startup Hugging Face, incorporates open-source models; uses narrower, carefully culled datasets; and provides a “toolkit for governance.”
IBM’s top execs threw shade on rivals at a Monday Watsonx preview.
Dario Gil, IBM’s head of research, said systems like ChatGPT are “not ready for primetime” thanks to “all sorts of random and made-up facts.”
CEO Arvind Krishna, who wasn’t at last week’s White House AI meeting, seemed pleased to be out of the firing line. He told Axios it gave him more time to court clients “who care a lot about accuracy.”
Between the lines: IBM knows about failed AI hype. It grabbed headlines when its original Watson won Jeopardy in 2011, but after that the company’s revenue declined for 10 consecutive years — leaving Watsonx with a lot to prove.
Krishna addressed a range of AI topics, including…
America’s AI regulation debate: “The conversation has been lagging.” Krishna claimed credit for IBM helping to draft the EU’s upcoming rules, which focus on regulating high-risk uses of AI.
New work category: “AI ops” covering activities like coding assistance and supply chain management.
Humans aren’t replaceable: “The systems still have years to go,” when it comes to “trying to replace a human being in their completeness.”
There’s no explainable AI: “Anybody who claims that a large AI model is explainable is not being completely truthful. They are not explainable in the sense of reasoning and logic.” But AI can transparently show its source data, and third parties can measure whether its answers show “bias with respect to gender, or age or ZIP code.”
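The kind of third-party bias measurement Krishna alludes to can be sketched in a few lines. This is a toy illustration with invented records and hypothetical group labels, not any vendor’s actual audit methodology: it simply compares the rate of favorable outcomes across demographic groups, one of the simplest fairness checks.

```python
from collections import defaultdict

# Hypothetical log of model answers, each tagged with an outcome
# (e.g., a favorable recommendation) and a demographic attribute.
answers = [
    {"group": "A", "favorable": True},
    {"group": "A", "favorable": True},
    {"group": "A", "favorable": False},
    {"group": "B", "favorable": True},
    {"group": "B", "favorable": False},
    {"group": "B", "favorable": False},
]

def favorable_rates(records):
    """Compute each group's rate of favorable outcomes."""
    totals, favs = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favs[r["group"]] += r["favorable"]
    return {g: favs[g] / totals[g] for g in totals}

rates = favorable_rates(answers)
# Demographic-parity gap: a large gap flags potential bias
# that warrants deeper investigation.
gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 2))
```

A real audit would use far larger samples, multiple fairness metrics, and statistical significance tests, but the principle is the same: the model’s outputs can be measured without the model itself being explainable.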
What they’re saying: “AI may not replace managers, but managers that use AI will replace the managers that do not,” per Rob Thomas, IBM’s chief commercial officer.
The other side: IBM isn’t the only AI provider to claim the mantle of responsibility nor the only one targeting businesses, rather than consumers.
OpenAI has faced criticism for pushing GPT out to the world, but the public has embraced it. Meanwhile, even as it sets a fast industry pace, OpenAI — like many competitors — also touts its ethical scruples, pointing to its pre-deployment risk analysis and publicly grappling with the challenges of reducing harms.
Google took heat first for being too cautious in withholding the fruits of its AI research —then for an about-face that has seen it scrambling to ship generative-AI products.
Welcome to the Indian Health Service (IHS) Health Information Technology (IT) Modernization Program website. The purpose of these pages is to provide timely information about the agency’s plans and progress to modernize our health IT systems.
All IHS health care facilities, and many facilities operated by tribes and urban Indian organizations, currently use the Resource and Patient Management System (RPMS). RPMS is a comprehensive suite of applications that supports everything from patient registration to insurance billing, and includes the patient’s Electronic Health Record (EHR).
RPMS has been in use, and continuously developed by IHS, for nearly 40 years. The technology underlying RPMS is outdated, and it is very challenging for each organizational unit to maintain its own RPMS database. Fortunately, health information technology has come a long way in the past 40 years, and the IHS is in the process of modernizing how these critical systems are acquired and managed in support of health care.
In recent years the IHS studied the best health IT options to deliver consistent, integrated, high-quality care. This analysis included consultation with tribes and conferring with urban Indian organizations (UIOs). In 2021, the IHS published a decision [PDF] that announced the plan to replace RPMS with commercially available solutions, including a new EHR. That decision began the work now called the IHS Health IT Modernization Program. The Program, in collaboration with tribes and UIOs, will buy, build, deliver, and maintain a new enterprise EHR solution that will:
Minimize software development and technical support burden both at IHS Headquarters and for facilities across the country
Focus on system optimization and usability for end-users
Promote standardization and best practices nationwide
Liberate data so it is accessible across the enterprise by clinicians, patients, and partners alike to improve patient outcomes
We are just in the early stages of this exciting, transformational initiative. Please check the links on the left to learn more. You can also sign up to receive updates through the Health IT Modernization listserv.
The agency has made progress on artificial intelligence management, but still has work to do to meet governmentwide requirements.
NASA’s Inspector General found that while the agency has made progress in its artificial intelligence management, more work still needs to be done, according to a report released on Wednesday.
NASA has used AI across a wide variety of agency programs, such as storm prediction tools, the Mars Perseverance rover and elements of the Artemis missions, among other things. Given the breadth of these use cases, the OIG noted that governing AI and managing its cybersecurity risks and threats is critical. The watchdog examined NASA’s progress in developing AI governance and standards to help assess its cybersecurity controls for AI data and technology. The OIG found that, despite some effort to manage the agency’s AI, NASA fell short by not having a single definition of AI. The report also noted that the agency’s AI classification and tracking are insufficient to fully address current and future federal requirements and AI cybersecurity concerns.
Specifically, the shortcomings could impact NASA’s ability to manage its AI and adhere to several executive orders. Moreover, the lack of a central, standardized process could put the agency at an increased risk for cybersecurity threats.
The watchdog noted that NASA has made efforts to improve its AI management. For example, the agency established the NASA Framework for Ethical Use of Artificial Intelligence in April 2021, pulling from principles of leading AI organizations to guide ethical AI decisions, initial guidance for the agency, AI advice and questions for AI practitioners’ consideration.
In September 2022, NASA also developed the NASA Responsible AI Plan, which identified NASA’s Responsible AI officials and detailed how the agency would implement requirements of a 2020 executive order on trustworthy AI. Specifically, this includes capturing and reporting use case inventories, creating oversight of AI projects to ensure continuous monitoring efforts and engaging the AI community on NASA’s ethical AI standards and implementation.
However, the OIG found that, despite these planning documents, NASA has not adopted a standard AI definition. The agency is using three definitions in different overarching documents: the NASA Framework for the Ethical Use of Artificial Intelligence, NASA’s Responsible AI Plan—which uses the executive order definition—and NASA’s internal AI machine learning SharePoint collaboration website.
“While all three definitions are similar, subtleties and nuances in each can alter whether a particular technology is properly considered AI,” the report stated.
The OIG added that NASA personnel reported AI based on their own understanding of it as opposed to relying on these definitions.
This lack of a standard definition means NASA does not have a way to “accurately classify and track AI or to identify AI expenditures within the agency’s financial system, making it difficult for the agency to meet federal requirements to monitor its use of AI,” according to the report.
The OIG also found that NASA’s AI is often managed as part of a larger project instead of on its own, which means it is not separately tracked. This has affected the agency’s response to the 2020 trustworthy AI executive order—which called for agencies to create an AI inventory—as well as its response to a 2019 executive order on maintaining U.S. AI leadership—which called for the gathering of an estimated annual budget for AI expenditures. In order to create the inventory and budget, NASA utilizes a multi-faceted data call to collect individual responses from AI users—something the OIG noted “takes significant time to compile, validate and vet, and runs the risk of clerical errors that could be significantly lessened using an automated process.”
Furthermore, while NASA officials believe the agency’s processes—such as monitoring requirements and making sure it safeguards AI systems from cyber threats—should be sufficient to address AI security concerns, previous OIG audits have revealed that NASA’s fragmented IT management puts it at an increased risk from cyber threats. According to the OIG, NASA also faces more challenges to implement potential future federal AI cybersecurity controls because of the lack of an AI-specific mechanism or way to appropriately categorize and classify AI within its record systems.
The OIG recommended NASA establish a standard definition for AI that “harmonizes” the three existing definitions; make sure the standard definition is used to identify, update and maintain NASA’s AI use case inventory; identify a classification system to help quickly apply federal cybersecurity control and monitoring practice requirements; and create a way to track budgets and expenditures for AI use case inventory.
NASA agreed or partially agreed with the watchdog’s recommendations and outlined how it would address them.
Speed, agility, and a tolerance for failure are required to enable greater defense innovation adoption for the United States to be successful in this era of great power competition. With Russia’s brazen invasion of Ukraine and China’s increasingly hostile actions in the Indo-Pacific, both bending the international rules-based order to fit their malign interests, confronting innovation challenges in defense will require a fresh approach.
A key element of America’s strategy must be to counter our adversaries’ rapid acquisition and exploitation of leading technologies such as artificial intelligence, robotics, autonomy, and directed energy. To make a major step change when it comes to defense modernization, the Defense Department needs greater flexibility, tools, and a sustained political push from Capitol Hill to use them.
We propose an experiment in budget flexibility and a rethinking of Pentagon program management deep enough to transform the decades-old defense acquisition process. One approach would be to pick a handful of outstanding program executive officers, assign each a capability, and allow them to pursue it with a portfolio of efforts that, unlike today’s programs, can be modified with agility. Enable them to foster an industrial base that can furnish competition and secure supply chains and, if the new portfolio model proves successful, scale the approach across the services.
These are among the recommendations of the Commission for Defense Innovation Adoption, an Atlantic Council project that we are co-chairing. As former Pentagon leaders with experience working in Congress and industry, we assembled an array of former senior government leaders, high-tech industry executives, and national-security-focused investors who are passionate about tackling this critical challenge. We debated the issues and solutions and engaged more than 70 stakeholders from the White House, Congress, the Pentagon and the private sector. The immediate result is an interim report, unanimously supported by the Commissioners, that lays out ten recommendations to foster the rapid adoption of innovative technology by the Pentagon.
Two of our leading recommendations focus on shifting acquisition management from individual program management to a capability-portfolio model while also giving defense leaders more budget flexibility.
Today, the Pentagon acquires capabilities through more than 1,000 acquisition programs, each with its own requirements, budgets, and contracts. This drives long timelines, stove-piped solutions, and gross inefficiencies. Yet the operational environment today requires the U.S. military to adopt new technologies far more rapidly while dumping ones whose utility has been surpassed. This “need for speed” will only grow as our weapons and equipment become more software-centric.
We recommend that the Pentagon select five program executive officers, or PEOs, to operate a capability-portfolio model. This would more quickly move emerging technologies across the infamous Valley of Death. These PEOs could work with Congress to consolidate the smallest 20 percent of their budget accounts to enable greater funding flexibility within the portfolio to optimize investments. These PEOs could develop acquisition, contracting, and technical strategies for robust industry competition to produce capabilities that use common platforms and enterprise services to integrate solutions from many companies into a modular open-systems approach.
Another key set of recommendations from the Commission focuses on providing the Pentagon with additional budget flexibility in the year of execution. The sixty-year-old planning, programming, budgeting, and execution, or PPBE, system brings longer timelines and tighter constraints. This cumbersome process requires developing budgets with two- to three-year lead times across 1,700 budget accounts. But in an environment with rapidly changing operations, evolving threats, new technologies, and fresh risks and opportunities, it is impossible to effectively budget or quickly pivot to meet tomorrow’s challenges at two-year intervals.
Congress is a vital partner for the Pentagon to enable greater speed and agility in acquisitions and technology development. The Commission recommends that the Pentagon and Congressional appropriators negotiate to consolidate hundreds of the smallest budget accounts, over 700 of which are under $20 million. Congress should change how it deals with Pentagon requests to move money to or from these smaller accounts. Currently, such moves must win approval from four committees. Instead, a request should start a countdown: if no lawmaker objects within 30 days, DOD should be able to move ahead. Lawmakers could also make it easier for the Pentagon to launch new efforts by raising the funding threshold that requires Congressional approval. Currently, such approval is required for new starts that will cost up to $20 million “for the entire effort.” Changing this language to “for the fiscal year” would allow PEOs to move ahead, while allowing lawmakers to veto efforts they disagree with.
As the defense industry, government labs, and new commercial industry partners identify emerging technology solutions, we cannot afford to wait for funding to build out critical defense capabilities. We urge Congress and the Pentagon to act on these recommendations to accelerate innovation adoption by the Defense Department and put the U.S. in a far better position to face off against China and Russia in the years to come.
Dr. Mark Esper was the 27th Secretary of Defense. Deborah Lee James was the 23rd Secretary of the Air Force. They are board directors of the Atlantic Council and co-chairs of its Commission on Defense Innovation Adoption.
Holly Joers, the program executive officer for the Program Executive Office, Defense Healthcare Management Systems (PEO-DHMS), said MHS Genesis, DoD’s Cerner-based EHR, is now live at 75% of DoD’s clinics and hospitals, with 160,000 users and 6.1 million beneficiaries in the system so far.
Joers said DoD’s experience has been that the deployment process works much, much better once it’s moved beyond the first few sites. After that, a lot of lessons have been learned, and the institution can start to converge around change management and IT deployment practices that make sense for the whole enterprise.
“I can’t comment specifically on VA, but when I look at where they are now, I’m taken back to where DoD was in the 2017-2018 timeframe,” she said. “There were challenges with the network, and so we made rules about what infrastructure had to be in place before a go-live, and how long it needed to be stable before we went live. We looked at our governance and management process to hear different inputs. When you’re only dealing with four sites, everyone wants to make it work for what their workflow was before. So you really have to have the fortitude to look at making an enterprise standard, knowing that it might not match what they’re currently doing today. And we had to go through those growing pains.”
In the interview Joers also discussed:
How PEO-DHMS is avoiding technical debt as it continues to develop MHS Genesis
The integration of anonymized data from Genesis with other “secondary” data sources, including, for instance, Census data, to make it easier to answer bigger public health questions
Workflow and process standardization and modernization
How feedback from users and patients is being used to improve MHS Genesis
Today, the Biden-Harris Administration released the United States Government’s National Standards Strategy for Critical and Emerging Technology (Strategy), which will strengthen the United States’ foundation for safeguarding the technology American consumers rely on, as well as U.S. leadership and competitiveness in international standards development.
Standards are the guidelines used to ensure the technology Americans routinely rely on is universally safe and interoperable. This Strategy will renew the United States’ rules-based approach to standards development. It also will emphasize the Federal Government’s support for international standards for critical and emerging technologies (CETs), which will help accelerate standards efforts led by the private sector to facilitate global markets, contribute to interoperability, and promote U.S. competitiveness and innovation.
The Strategy focuses on four key objectives that will prioritize CET standards development:
Investment: Technological contributions that flow from research and development are the driving force behind new standards. The Strategy will bolster investment in pre-standardization research to promote innovation, cutting-edge science, and translational research to drive U.S. leadership in international standards development. The Administration is also calling on the private sector, universities, and research institutions to make long-term investments in standards development.
Participation: Private sector and academic innovation fuels effective standards development, which is why it’s imperative that the United States work closely with industry and the research community to remain ahead of the curve. The U.S. Government will engage with a broad range of private sector, academic, and other key stakeholders, including foreign partners, to address gaps and bolster U.S. participation in CET standards development activities.
Workforce: The number of standards organizations has grown rapidly over the past decade, particularly with respect to CETs, but the U.S. standards workforce has not kept pace. The U.S. Government will invest in educating and training stakeholders — including academia, industry, small- and medium-sized companies, and members of civil society — to more effectively contribute to technical standards development.
Integrity and Inclusivity: It is essential for the United States to ensure the standards development process is technically sound, independent, and responsive to broadly shared market and societal needs. The U.S. Government will harness the support of like-minded allies and partners around the world to promote the integrity of the international standards system, ensuring that international standards are established on the basis of technical merit through fair processes that encourage broad participation from countries across the world and build inclusive growth for all.
Putting the Strategy into Practice
The U.S. private sector leads standards activities globally, through standard development organizations (SDOs), to respond to market demand, with substantial contributions from the U.S. Government, academia, and civil society groups. The American National Standards Institute (ANSI) coordinates the U.S. private sector standards activities, while the National Institute of Standards and Technology (NIST) coordinates Federal Government engagement in standards activities. Industry associations, consortia, and other private sector groups work together within this system to develop standards to solve specific challenges. To date, this approach has fostered an effective and innovative standards system that has supercharged economic growth and worked for people of all nations.
The CHIPS and Science Act of 2022 (Pub. L. 117–167) provided $52.7 billion for American semiconductor research, development, manufacturing, and workforce development. The legislation also codifies NIST’s role in leading information exchange and coordination among Federal agencies and communication from the Federal Government to the U.S. private sector. This engagement, coupled with the CHIPS and Science Act’s investments in pre-standardization research, will drive U.S. influence and leadership in international standards development. NIST provides a portal with resources and standards information to government, academia, and the public; updates on the U.S. Government’s implementation efforts for the Strategy will also be posted to that portal.
The United States Government has already made significant commitments to leading and coordinating international efforts outlined in the Strategy. The United States has joined like-minded partners in the International Standards Cooperation Network, which serves as a mechanism to connect government stakeholders with international counterparts for inter-governmental cooperation. Additionally, the U.S.-EU Trade and Technology Council launched a Strategic Standardization Information mechanism to enable transatlantic information sharing.
Many U.S. Government agencies have already demonstrated their commitment to the Strategy through their actions and partnerships. Examples include:
The National Science Foundation has updated its proposal and award policies and procedures to incentivize participation in standards development activities.
The Department of State, NIST, the Department of Commerce, the Federal Communications Commission (FCC), the National Security Agency (NSA), the Office of the U.S. Trade Representative, USAID and other agencies engage in multilateral fora, such as the International Telecommunication Union, the Quad, the U.S.-EU Trade and Technology Council, the G7, and the Asia-Pacific Economic Cooperation, to share information on standards and CETs.
The National Telecommunications and Information Administration (NTIA) administers the Public Wireless Supply Chain Innovation Fund, a $1.5 billion grant program funded by the CHIPS and Science Act of 2022 that aims to catalyze the research, development, and adoption of open, interoperable, and standards-based networks.
The Department of Defense engages with ANSI and the private sector in collaborative standards activities such as Global Supply Chain Security for Microelectronics and the Additive Manufacturing Standards Roadmap, as well as with the Alliance for Telecommunications Industry Solutions and the 3rd Generation Partnership Project (3GPP).
The United States Agency for International Development and ANSI work together through a public-private partnership to support the capacity of developing countries in areas of standards development, conformity assessment, and private sector engagement.
The Environmental Protection Agency SmartWay program works closely with the International Organization for Standardization (ISO) to standardize greenhouse gas accounting for freight and passenger transportation, providing a global framework for credible, accurate calculation and evaluation of transportation-related climate pollutants.
NTIA, NIST, and the FCC coordinate U.S. Government participation in 3GPP and work with the Alliance for Telecommunications Industry Solutions to ensure participation by international standards delegates at North American-hosted 3GPP meetings.
The FCC’s newly established Office of International Affairs is managing efforts across the FCC to ensure expert participation in international standards activities, such as 3GPP and the Internet Engineering Task Force, in order to promote U.S. leadership in 5G and other next-generation technologies.
The Department of Transportation supports development of voluntary consensus technical standards via multiple cooperative efforts with U.S.-domiciled and international SDOs.
The U.S. Department of Energy (DOE), through partnerships with the private sector and the contributions of technical experts at DOE and its 17 National Laboratories, contributes to standards efforts in multiple areas ranging from hydrogen and energy storage to biotechnology and high-performance computing.
The Department of the Treasury’s Office of Financial Research leads and contributes to financial data standards development work for digital identity, digital assets, and distributed ledger technology in ISO and ANSI.
Assistant Professor, Journalism, Toronto Metropolitan University
Web3 is envisioned to be a “decentralized web ecosystem,” in which users can retain ownership of their data.
Its blockchain-based infrastructure would usher in the era of the “token economy,” say experts.
Web3 and Web 3.0 – in which a collection of websites would be linked together at the data level – are often mixed up, explains The Conversation.
The rapid growth of cryptocurrencies and virtual non-fungible tokens has dominated news headlines in recent years. But not many may see how these modish applications fit together in a wider idea being touted by some as the next iteration of the internet — Web3.
There are many misconceptions surrounding this buzzy (and, frankly, fuzzy) term, including the conflation of Web3 with Web 3.0. Here’s what you need to know about these terms.
What is Web3?
Since Web3 is still a developing movement, there’s no universal agreement among experts about its definition. Simply put, Web3 is envisioned to be a “decentralized web ecosystem,” empowering users to bypass internet gatekeepers and retain ownership of their data.
This would be done through blockchain: rather than relying on single servers and centralized databases, Web3 would run on public ledgers, where data is stored in blocks that are cryptographically chained together and replicated across networks of computers.
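The chaining idea can be illustrated with a minimal sketch in Python. This is a toy, not a real blockchain (no consensus protocol, no signatures, no network): the point is simply that each block commits to a hash of the previous one, so tampering with any earlier entry breaks every later link and is detectable.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})
    return chain

def verify_chain(chain):
    """Check every link; tampering upstream invalidates the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "alice pays bob 5 tokens")
append_block(chain, "bob pays carol 2 tokens")
print(verify_chain(chain))   # True: the chain is intact
chain[0]["data"] = "alice pays bob 500 tokens"
print(verify_chain(chain))   # False: the edit broke the hash link
```

Real systems add the hard parts this sketch omits, chiefly a consensus mechanism so that many mutually distrusting computers agree on a single valid chain.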
A decentralized Web3 would fundamentally change how the internet operates — financial institutions and tech companies would no longer need to be intermediaries of our online experiences.
“In a Web3 world, people control their own data and bounce around from social media to email to shopping using a single personalized account, creating a public record on the blockchain of all of that activity.”
Web3’s blockchain-based infrastructure would open up intriguing possibilities by ushering in the era of the “token economy.” The token economy would allow users to monetize their data by providing them with tokens for their online interactions. These tokens could offer users perks or benefits, including ownership stakes in content platforms or voting rights in online communities.
To better understand Web3, it helps to step back and see how the internet developed into what it is now.
Web 1.0: The ‘read-only’ web
Computer scientist Tim Berners-Lee is credited with inventing the world wide web in 1989, which allowed people to hyperlink static pages of information on websites accessible through internet browsers.
Berners-Lee was exploring more efficient ways for researchers at different institutions to share information. In 1991, he launched the world’s first website, which provided instructions on using the internet.
These basic “read-only” websites were managed by webmasters responsible for updating them and managing the information. In 1992, there were 10 websites. By 1994, after the web entered the public domain, there were 3,000.
When Google arrived in 1996, there were two million. Last year, there were approximately 1.2 billion websites, although it is estimated only 17 per cent are still active.
Web 2.0: The social web
The next major shift for the internet saw it develop from a “read-only web” to where we are currently — a “read-write web.” Websites became more dynamic and interactive. People became mass participants in generating content through hosted services like Wikipedia, Blogger, Flickr and Tumblr.
Later on, social media platforms like Facebook, YouTube, Twitter and Instagram and the growth of mobile apps led to unparalleled connectivity, albeit through distinct platforms. These platforms are known as walled gardens because their parent companies heavily regulate what users are able to do and there is no information exchange between competing services.
Tech companies like Amazon, Google and Apple are deeply embedded into every facet of our lives, from how we store and pay for our content to the personal data we offer (sometimes without our knowledge) to use their wares.
Web3 vs. Web 3.0
This brings us to the next phase of the internet, in which many wish to wrest back control from the entities that have come to hegemonize it.
The terms Web3 and Web 3.0 are often used interchangeably, but they are different concepts.
Web3 is the move towards a decentralized internet built on blockchain. Web 3.0, on the other hand, traces back to Berners-Lee’s original vision for the internet as a collection of websites linking everything together at the data level.
Our current internet can be thought of as a gigantic document depot. Computers are capable of retrieving information for us when we ask them to, but they aren’t capable of understanding the deeper meaning behind our requests.
Information is also siloed into separate servers. Advances in programming, natural language processing, machine learning and artificial intelligence would allow computers to discern and process information in a more “human” way, leading to more efficient and effective content discovery, data sharing and analysis. This is known as the “semantic web” or the “read-write-execute” web.
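The difference can be sketched with a toy “triple store,” the data model behind the semantic web: facts are stored as subject–predicate–object triples that a machine can query by structure and meaning rather than by keyword matching over pages. The facts and the `query` helper below are illustrative inventions, not a real semantic web API.

```python
# A tiny knowledge base of subject-predicate-object triples.
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Pierre Curie", "won", "Nobel Prize in Physics"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Who won the Nobel Prize in Physics?" becomes a structured
# query the machine can answer directly, rather than a keyword
# search over unstructured documents.
winners = [s for (s, p, o) in query(triples, predicate="won",
                                    obj="Nobel Prize in Physics")]
print(winners)  # ['Marie Curie', 'Pierre Curie']
```

Real semantic web systems use standardized versions of this idea (RDF for the triples, SPARQL for the queries), but the shift is the same: from retrieving documents to answering questions over linked data.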
In Berners-Lee’s Web 3.0 world, information would be stored in databases called Solid Pods, which would be owned by individual users. While this is a more centralized approach than Web3’s use of blockchain, it would allow data to be changed more quickly because it wouldn’t be distributed over multiple places.
Web3 and Web 3.0 are often mixed up because the next era of the internet will likely feature elements of both movements — semantic web applications, linked data and a blockchain economy. It’s not hard to see why there is significant investment happening in this space.
There are also critics who argue that Web3, in particular, is merely a contradictory rebranding of cryptocurrency that will not democratize the internet. While it’s clear we’ve arrived at the doorstep of a new internet era, it’s really anyone’s guess as to what happens when we walk through that door.
The Defense Department (DoD) today issued a proposed revision to the eligibility criteria for its voluntary Defense Industrial Base (DIB) Cybersecurity Program. If enacted, the change would greatly expand the number of DIB companies that can participate in the program, which shares cybersecurity threat intelligence and provides other security assistance to private sector firms that do business with DoD.
“These revisions will allow a broader community of defense contractors to benefit from bilateral information sharing as when this proposed rule is finalized all defense contractors who are subject to mandatory cyber incident reporting will be able to participate,” DoD said in a Federal Register notice filed today.
The Pentagon is seeking public comment on the proposed revision by June 20. “DoD is also proposing changes to definitions and some technical corrections for readability,” the agency said.
The DIB Cybersecurity Program aims to improve the ability of companies to safeguard DoD information that resides on, or transits, DIB unclassified information systems. “The program encourages greater threat information sharing to complement mandatory aspects of DoD’s DIB cybersecurity activities which are contractually mandated” through Defense Federal Acquisition Regulation Supplement (DFARS) rules, according to DoD.
The program is part of a larger DoD effort to protect information handled by DIB companies “by understanding and sharing information, building security partnerships, implementing long-term risk management programs, and maximizing efficient use of resources,” DoD said in the Federal Register notice.
Speaking today at the AFCEA TechNet Cyber conference in Baltimore, Diedra Padgett, deputy director, Defense Industrial Base (DIB) Operations Directorate within the DoD CIO office, said the program now has more than 1,300 companies participating, and is continuing to grow.
The proposed revisions could attract thousands more participants.
In announcing the proposed revision to the program, Padgett said, “this is exciting, it’s out there for public review.”
“We do this to continue to move forward to reduce cyber risk and to bolster cybersecurity,” she said.
“This has been a long-fought battle for years in the making, and I’m glad to say that we’re getting there,” Padgett said.
DoD Mum on CMMC Status
Padgett said today she could not discuss any aspect of upcoming rules related to the agency’s Cybersecurity Maturity Model Certification (CMMC) requirements for DIB companies.
“DoD is unable to address any substantive aspects of the forthcoming CMMC 32 CFR rule or rule documents to include the potential policy and implementation related topics under the rulemaking process until it’s complete,” she said. “The Office of Management and Budget and the Information and Regulatory Affairs Office … is the authority on the timeline in the status of that rulemaking.”
The White House announced it would invest $140 million to create seven artificial intelligence research hubs and released new guidance on AI.
The developments come ahead of Vice President Kamala Harris’ meeting on Thursday with executives from Google’s parent company, Alphabet; Anthropic; Microsoft; and OpenAI.
It’s part of the Biden administration’s aim to curtail security risks associated with AI as the technology rapidly develops and to impress on pioneering companies that they can help reduce harm early on. OpenAI is the creator of the widely used AI tool ChatGPT — bolstered by an investment from Microsoft. Anthropic is another leading startup.
In a statement after the meeting, Harris stressed the importance of this partnership moving forward.
“As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said. “And every company must comply with existing laws to protect the American people.”
As AI becomes more ubiquitous, the White House on Thursday promised it would release guidelines for use by government agencies. AI developers are also expected to agree to have their products reviewed at the upcoming DEF CON cybersecurity conference in August.
Funding for the proposed research hubs will come from the National Science Foundation and will bring the total number of AI research institutes to 25 across the country.
Artificial intelligence has already begun to disrupt everyday life with a deluge of fake images and videos and robot-penned text, prompting concerns ranging from national security to misinformation. The influence is being felt in American politics, as well: Republicans last week released an AI-generated video in response to President Joe Biden’s reelection bid.
Biden himself has said “it remains to be seen” if AI is dangerous, adding last month “it could be.”
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” the president said in April ahead of a meeting with his Council of Advisors on Science and Technology.
Speaking to reporters at the daily press briefing after the meeting, White House press secretary Karine Jean-Pierre confirmed that the president has been “extensively briefed” on ChatGPT and said he sees both opportunity and risk in AI.
She said it was a “frank” discussion that included topics such as “the need for companies to be more transparent with policymakers, the public and others about their AI systems, in particular the importance of being able to evaluate, verify and validate the safety, security and the efficacy system; and the need to ensure AI systems are secure from malicious actors and attacks.”
“You have four CEOs here meeting with the president and the vice president, that shows how seriously we take it,” she said.
The White House has made addressing AI a priority. Last year the administration released a “Blueprint for an AI Bill of Rights” and later outlined the creation of a National AI Research Resource.
In February, Biden signed an executive order aimed at preventing bias and discrimination in these technologies from their inception.