While the defense industrial base is healthy, there are single points of failure and dependencies on overseas suppliers that must be addressed, the undersecretary of defense for acquisition and sustainment said.
“Over a period of years, we have offshored many, many sources of supply,” Ellen M. Lord said during an online discussion Thursday with the Hudson Institute. “It’s not for one reason; it’s for a variety of reasons, whether it be regulations, whether it be labor costs, whether it be government support of different industries.”
The deindustrialization of the U.S. over the last 50 years; the end of the Cold War and the focus it gave the U.S. on defeating the Soviet Union; digital technology; and the rise of China have all created challenges to national defense.
In the newly released Fiscal Year 2020 Industrial Capabilities Report to Congress, Lord said the department looked into those challenges and their effects on the defense industrial base and proposed key actions to address them.
“What we did in this report was try to really capture those risks, look at the opportunities and come up with some specific steps that we can really take to reform how we go about looking at that supply chain and, in the endgame, really get capability downrange to the warfighter as quickly and cost-effectively as possible,” she said.
First, Lord said, the U.S. must re-shore more of its industrial base — bring it back to the U.S. and U.S. allies.
“There are a couple [of] key areas there with shipbuilding, as well as microelectronics — fundamental to our capability,” she said.
Development of a modern manufacturing and engineering workforce along with a more robust research and development base is also critical. Declines in U.S. science, technology, engineering and mathematics education and industrial jobs hurt the ability of the defense industrial base to innovate, Lord said.
“We want to make sure that we have modern manufacturing and engineering expertise,” she said. “We do not have nearly the number of scientists and engineers as China has. We need to make sure that we develop our talent to be able to leverage on these critical areas.”
The department must also reform and modernize the defense acquisition process to better meet the realities of the 21st century, Lord said.
“We’ve started with a number of those, but there’s much further to go,” she said. “We want to make sure that our traditional defense industrial base is widened to get all of those creative, innovative companies. We know the small companies are where most of our innovation comes from, and the barriers to entry — sometimes to getting into the Department of Defense — are rather onerous.”
Lord said part of modernizing and reforming defense acquisition is the recently announced Trusted Capital Marketplace, which will match potential defense suppliers — many of them small companies that have never done business with DOD — with the investors they need to keep operating and innovating. The Trusted Capital Marketplace will vet investors to ensure foreign ownership, control and influence are nonexistent.
Finally, Lord said, the department must find new ways to partner private sector innovation with public sector resources and demand.
“We, as the government, I believe, need to work with industry to make sure that we diversify that industrial base and, also, that we much more quickly translate technological capability into features of current platforms and weapon systems, as well as incorporate it in new ones,” Lord said.
After Amazon’s three-week re:Invent conference, companies building AI applications may have the impression that AWS is the only game in town. Amazon announced improvements to SageMaker, its machine learning (ML) workflow service, and to Edge Manager — improving AWS’ ML capabilities on the edge at a time when serving the edge is considered increasingly critical for enterprises. Moreover, the company touted big customers like Lyft and Intuit.
But Mohammed Farooq believes there is a better alternative to the Amazon hegemon: an open AI platform that doesn’t have any hooks back to the Amazon cloud. Until earlier this year, Farooq led IBM’s hybrid multi-cloud strategy, but he recently left to join the enterprise AI company Hypergiant.
Here is our Q&A with Farooq, who is Hypergiant’s chair, global chief technology officer, and general manager of products. He has skin in the game and makes an interesting argument for open AI.
VentureBeat: With Amazon’s momentum, isn’t it game over for any other company hoping to be a significant service provider of AI services, or at the least for any competitor not named Google or Microsoft?
Mohammed Farooq: On the one hand, for the last three to five-plus years, AWS has delivered outstanding capabilities with SageMaker (Autopilot, Data Wrangler) to enable accessible analytics and ML pipelines for technical and nontechnical users. Enterprises have built strong-performing AI models with these AWS capabilities.
On the other hand, the enterprise production throughput of these well-performing AI models is very low. The low throughput is a result of the complexity of deploying and managing AI models within the consuming production applications that run on AWS and other cloud, datacenter, and software platforms.
Enterprises have not established an operations management system for AI — something referred to within the industry as ModelOps. ModelOps is required and should include lifecycle processes, best practices, and business management controls. These are necessary to evolve AI models as the data changes, in the context of the underlying heterogeneous software and infrastructure stacks currently in operation.
AWS does a solid job of automating an AI ModelOps process within the AWS ecosystem. However, enterprise ModelOps — like DevOps and DataOps — must run not only on AWS but across multiple other cloud, network, and edge architectures. AWS is great as far as it goes, but what is required is seamless integration with the enterprise’s ModelOps process, hybrid/multi-cloud infrastructure architecture, and IT operations management systems.
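As a rough illustration of the kind of lifecycle record and business controls such a ModelOps system of record implies, the sketch below tracks a model version through hypothetical stages with an audit trail; the stage names, fields, and approval flow are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Stage(Enum):
    # Illustrative lifecycle stages; real ModelOps processes vary by organization.
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    PRODUCTION = "production"
    RETIRED = "retired"


@dataclass
class ModelRecord:
    """System-of-record entry for one model version, with an audit trail."""
    name: str
    version: str
    stage: Stage = Stage.DEVELOPMENT
    history: list = field(default_factory=list)

    def promote(self, new_stage: Stage, approver: str, note: str = "") -> None:
        # Business controls live here: who approved the transition, when, and why.
        self.history.append({
            "from": self.stage.value,
            "to": new_stage.value,
            "approver": approver,
            "note": note,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.stage = new_stage


if __name__ == "__main__":
    record = ModelRecord(name="churn-classifier", version="1.4.0")
    record.promote(Stage.VALIDATION, approver="ml-lead", note="passed offline eval")
    record.promote(Stage.PRODUCTION, approver="risk-officer", note="KPI review complete")
    print(record.stage.value, len(record.history))
```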
Failures in experimentation are largely a result of the average time needed to create a model. Today, successful AI models that deliver value and that business leaders trust take 6-12 months to build. According to the Deloitte MLOps Industrialized AI Report (released in December 2020), an average AI team can build and deploy, at best, two AI models in a year. At this rate, industrializing and scaling AI in the enterprise will be a challenge. An enterprise ModelOps process integrated with the rest of enterprise IT is required to speed up and scale AI solutions in the enterprise.
I would argue that we are on the precipice of a new era in artificial intelligence — one where AI will not only predict but recommend and take autonomous actions. But machines are still taking actions based on AI models that are poorly experimented with and fail to meet defined business goals (key performance indicators).
VentureBeat: So what is it that holds the industry back? Or, asked a different way, what is it that holds Amazon back from doing this?
Farooq: To improve development and performance of AI models, I believe we must address three challenges that are slowing down the AI model development, deployment, and production management in the enterprise. Amazon and other big players haven’t been able to address these challenges yet. They are:
AI data: This is where everything starts and ends for performant AI models. Microsoft [Azure] Purview is a direct attempt to solve the enterprise’s data problems under a data governance umbrella. This will provide AI solution teams (consumers) with valuable and trustworthy data.
AI operations processes: Today these are enabled for development and deployment in the cloud (AWS) but do not extend or connect to enterprise DevOps, DataOps, and ITOps processes. AIOps processes to deploy, operate, manage, and govern AI need to be automated and integrated into enterprise IT processes. This will industrialize AI in the enterprise. It took DevOps 10 years to establish CI/CD processes and automation platforms. AI needs to leverage the assets in CI/CD and overlay AI model lifecycle management on top of them.
AI architecture: Enterprises adopting cloud-native architectures and containers are accelerating on the path to hybrid and multi-cloud architectures. With edge adoption, we are moving to a purely distributed architecture, which will connect the cloud and edge ecosystems. AI architecture will have to operate on distributed architectures across hybrid and multi-cloud infrastructure and data environments. AWS, Azure, Google, and VMware are effectively moving towards that paradigm.
To develop the next phase of AI, which I am calling “industrialized AI in the enterprise,” we need to address all of these. They can only be met with an open AI platform that has an integrated operations management system.
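Farooq's second challenge — overlaying AI model lifecycle management on top of existing CI/CD — can be made concrete as a promotion gate that runs as an ordinary pipeline step. The sketch below is hypothetical: the metric names, thresholds, and file names are assumptions, not part of any particular platform.

```python
import json
import sys


def passes_gate(candidate: dict, production: dict, max_regression: float = 0.01) -> bool:
    """Return True if the candidate model may be promoted.

    `candidate` and `production` are metric dicts produced by earlier
    pipeline steps, e.g. {"auc": 0.91, "latency_ms": 38}.
    """
    if candidate["auc"] + max_regression < production["auc"]:
        return False                      # accuracy regressed beyond tolerance
    if candidate["latency_ms"] > 1.2 * production["latency_ms"]:
        return False                      # serving latency budget exceeded
    return True


if __name__ == "__main__":
    # In a CI job this step runs after training/evaluation and before deployment;
    # a nonzero exit code fails the pipeline, just like a failed unit test.
    with open("candidate_metrics.json") as f:
        candidate = json.load(f)
    with open("production_metrics.json") as f:
        production = json.load(f)
    sys.exit(0 if passes_gate(candidate, production) else 1)
```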
VentureBeat: Explain what you mean by an “open” AI platform.
Farooq: An open AI platform for ModelOps lets enterprise AI teams mix and match required AI stacks, data services, AI tools, and domain AI models from different providers. Doing so will result in powerful business solutions at speed and scale.
AWS, with all of its powerful cloud, AI, and edge offerings, has still not stitched together a ModelOps capability that can industrialize AI and cloud. Enterprises today are using a combination of ServiceNow, legacy systems management, DevOps tooling, and containers to bring this together. AI operations adds another layer of complexity to an already complex environment.
An enterprise AI operations management system should be the master control point and system of record, intelligence, and security for all AI solutions in a federated model (AI models and data catalogs). AWS, Azure, or Google can provide data, process, and tech platforms and services to be consumed by enterprises.
But lock-in models, like those currently being offered, harm enterprises’ ability to develop core AI capabilities. Companies like Microsoft, Amazon, and Google are hampering our ability to build high-caliber solutions by constructing moats around their products and services. The path to the best technology solutions, in the service of both AI providers and consumers, is one where choice and openness are prized as a pathway to innovation.
You have seen companies articulate a prominent vision for the future of AI. But I believe they are limited because they are not going far enough to democratize AI access and usage with the current enterprise IT Ops and governance process. To move forward, we need an enterprise ModelOps process and an open AI services integration platform that industrializes AI development, deployment, operations, and governance.
Without these, enterprises will be forced to choose vertical solutions that fail to integrate with enterprise data technology architectures and IT operations management systems.
VentureBeat: Has anyone tried to build this open AI platform?
Farooq: Not really. To manage AI ModelOps, we need a more open and connected AI services ecosystem, and to get there, we need an AI services integration platform. This essentially means that we need cloud provider operations management integrated with enterprise AI operations processes and a reference architecture framework (led by CTO and IT operations).
There are two options for enterprise CIOs, CTOs, CEOs, and architects. One is vertical, and the other one is horizontal.
Dataiku, Databricks, Snowflake, C3.AI, Palantir, and many others are building these horizontal AI stack options for the enterprise. Their solutions operate on top of AWS, Google, and Azure AI. It’s a great start. However, C3.AI and Palantir are also moving towards lock-in options by using model-driven architectures.
VentureBeat: So how is the vision of what you’re building at Hypergiant different from these efforts?
Farooq: The choice is clear: We have to deliver an enterprise AI stack, ModelOps tooling, and governance capabilities enabled by an open AI services integration platform. Such a platform will integrate and operate the customer’s ModelOps and governance processes internally, in a way that works for each business unit and AI project.
What we need is not another AI company, but rather an AI services integrator and operator layer that improves how these companies work together for enterprise business goals.
A customer should be able to use Azure solutions, MongoDB, and Amazon Aurora, depending on what best suits their needs, price points, and future agenda. What this requires is a mesh layer for AI solution providers.
VentureBeat: Can you further define this “mesh layer”? You describe it as a horizontal layer, but how does it work in practice? Is it as simple as plugging in your AI solution on top, and then having access to any cloud data source underneath? And does it have to be owned by a single company? Can it be open-sourced, or somehow shared, or at least competitive?
Farooq: The data mesh layer is the core component not only for executing ModelOps processes across cloud, edge, and 5G, but also for building, operating, and managing autonomous distributed applications.
Currently we have cloud data lakes and data pipelines (batch or streaming) as an input to build and train AI models. However, in production, data needs to be dynamically orchestrated across datacenters, cloud, 5G, and edge endpoints. This will ensure that the AI models and the consuming apps at all times have the required data feeds in production to execute.
AI/cloud developers and ModelOps teams should have access to data orchestration rules and policy APIs as a single interface to design, build, and operate AI solutions across distributed environments. This API should hide the complexity of the underlying distributed environments (i.e., cloud, 5G, or edge).
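A rough sketch of what such a single orchestration-and-policy interface could look like follows; the class and method names are hypothetical, and a real backend would wrap each provider's own SDK rather than the stubs shown here.

```python
from abc import ABC, abstractmethod
from typing import Iterable


class DataMeshClient(ABC):
    """Single interface a ModelOps team codes against; backends hide where data lives."""

    @abstractmethod
    def read(self, dataset: str) -> Iterable[dict]:
        ...

    @abstractmethod
    def allowed(self, dataset: str, purpose: str) -> bool:
        """Policy check: may this dataset be used for this purpose (e.g. 'training')?"""
        ...


class CloudWarehouseBackend(DataMeshClient):
    # Hypothetical backend for a cloud data warehouse; a real one would call the vendor SDK.
    def read(self, dataset):
        yield from ({"source": "warehouse", "dataset": dataset, "row": i} for i in range(3))

    def allowed(self, dataset, purpose):
        return purpose in {"training", "evaluation"}


class EdgeGatewayBackend(DataMeshClient):
    # Hypothetical backend for data held at an edge/5G site.
    def read(self, dataset):
        yield from ({"source": "edge", "dataset": dataset, "row": i} for i in range(3))

    def allowed(self, dataset, purpose):
        return purpose == "inference"     # e.g. policy forbids moving raw edge data for training


def load_training_data(client: DataMeshClient, dataset: str):
    # The caller never sees which environment the data came from; only the policy answer matters.
    if not client.allowed(dataset, "training"):
        raise PermissionError(f"policy forbids training on {dataset}")
    return list(client.read(dataset))


if __name__ == "__main__":
    print(load_training_data(CloudWarehouseBackend(), "telemetry"))
```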
In addition, we need packaging and container specs that will help DevOps and ModelOps professionals use the portability of Kubernetes to quickly deploy and operate AI solutions at scale.
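As a sketch of that packaging idea, the snippet below emits a plain Kubernetes Deployment manifest for a hypothetical model-serving container; the image name, labels, and port are placeholders, and the manifest uses only standard Kubernetes fields.

```python
import yaml  # requires the PyYAML package


def model_deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment for a model-serving container."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name, "managed-by": "modelops"}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                    }],
                },
            },
        },
    }


if __name__ == "__main__":
    # The same manifest can be applied to any conformant cluster: cloud, datacenter, or edge.
    print(yaml.safe_dump(model_deployment("churn-classifier", "registry.example.com/churn:1.4.0")))
```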
These data mesh APIs and packaging technologies need to be open sourced to ensure that we establish an open AI and cloud stack architecture for enterprises and not walled gardens from big providers.
By analogy, look at what Twilio has done for communications: Twilio strengthened customer relationships across businesses by integrating many technologies in one easy-to-manage interface. Examples in other industries include HubSpot in marketing and Squarespace for website development. These companies work by providing infrastructure that simplifies the experience of the user across the tools of many different companies.
VentureBeat: When are you launching this?
Farooq: We are planning to launch a beta version of a first step of that roadmap early next year [Q1/2021].
VentureBeat: AWS has a reseller policy. Could it crack down on any mesh layer if it wanted to?
Farooq: AWS could build and offer their own mesh layer that is tied to its cloud and that interfaces with 5G and edge platforms of its partners. But this will not help its enterprise customers accelerate the development, deployment, and management of AI and hybrid/multi-cloud solutions at speed and scale. However, collaborating with the other cloud and ISV providers, as it has done with Kubernetes (CNCF-led open source project), will benefit AWS significantly.
As further innovation on centralized cloud computing models has stalled (based on current functionality and incremental releases across AWS, Azure, and Google), data mesh and edge-native architectures are where innovation will need to happen, and a distributed (declarative and runtime) data mesh architecture is a great place for AWS to contribute and lead the industry.
The digital enterprise will be the biggest beneficiary of a distributed data mesh architecture, and this will help industrialize AI and digital platforms faster — thereby creating new economic opportunities and in return more spend on AWS and other cloud provider technologies.
VentureBeat: What impact would such a mesh-layer solution have on the leading cloud companies? I imagine it could influence user decisions on what underlying services to use. Could that middle mesh player reduce pricing for certain bundles, undercutting marketing efforts by the cloud players themselves?
Farooq: The data mesh layer will trigger massive innovation on the edge and 5G native (not cloud native) applications, middleware, and infra-architectures. This will drive the large providers to rethink their product roadmaps, architecture patterns, go-to-market offerings, partnerships, and investments.
VentureBeat: If the cloud companies see this coming, do you think they’ll be more inclined to move toward an open ecosystem more rapidly and squelch you?
Farooq: The big providers in a first or second cycle of evolution of a technology or business model will always want to build a moat and lock in enterprise clients. For example, AWS never accepted that hybrid or multi-cloud was needed. But in the second cycle of cloud adoption by VMware clients, VMware started to preach an enterprise-outward hybrid cloud strategy connecting to AWS, Azure, and Google.
This led AWS to launch a private cloud offering (called Outposts), which is a replica of the AWS footprint on a dedicated hardware stack with the same offerings. AWS executes its APIs across the AWS public cloud and Outposts. In short, they came around.
The same will happen to edge, 5G, and distributed computing. Right now, AWS, Google, and Azure are building their distributed computing platforms. However, the power of the open source community and the speed of its innovation are so great that the distributed computing architecture in the next cycle and beyond will have to move to an open ecosystem.
VentureBeat: What about lock-in at the mesh-layer level? If I choose to go with Hypergiant so I can access services across clouds, and then a competing mesh player emerges that offers better prices, how easy is it to move?
Farooq: We at Hypergiant believe in an open ecosystem, and our go-to-market business model depends on being at the intersection of enterprise consumption and provider offerings. We drive consumption economics, not provider economics. This will require us to support multiple data mesh technologies and create a fabric for interoperation with a single interface to our clients. The final goal is to ensure an open ecosystem, ease for developers and operators, and value for enterprise clients, so that they can accelerate their business and revenue strategies by leveraging the best value and best-of-breed technologies. We are looking at this from the point of view of the benefits to the enterprise, not the provider.
Maj. Chuck Suslowicz, Jan Kallberg, and LTC Todd Arnold
The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience.
The breach is the subject of an ongoing investigation, and we will not comment on that investigation. Still, in general terms, we want to point out the exploitable weaknesses in creating two silos — offensive cyber operations (OCO) and defensive cyber operations (DCO).
The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.
It requires that leaders understand the capabilities their organization can bring to bear in order to achieve the maximum effect from the available resources. Cyber leaders must have exposure to both the depth and the breadth of their chosen domain to contribute to MDO.
Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either OCO or DCO specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers.
The Army’s planning and its ability to contribute operationally to MDO will suffer from a siloed officer corps unexposed to the domain’s inherent flexibility.
We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations.
The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.
For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The tool could also be effective for an attacker to maintain situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility.
In other words, a tool that does a good job of visualizing the key terrain to defend also creates a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
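A toy sketch of that dual-use point: the same few lines of service-discovery code give a defender an exposure inventory and an attacker a target list. The hosts and ports below are placeholders from a documentation address range, and this kind of scan should only ever be run against networks one is authorized to assess.

```python
import socket


def reachable_services(hosts, ports, timeout=0.5):
    """Map which common services answer on which hosts.

    For a defender this output is an asset/exposure inventory; for an
    attacker the identical output is a target list. The code does not
    change -- only the intent behind running it does.
    """
    topology = {}
    for host in hosts:
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                    open_ports.append(port)
        topology[host] = open_ports
    return topology


if __name__ == "__main__":
    # Placeholder addresses (TEST-NET range); substitute hosts you are authorized to scan.
    print(reachable_services(["192.0.2.10", "192.0.2.11"], ports=[22, 80, 443]))
```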
An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officer development have shown the value of diverse exposure in officer assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.
Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less common. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.
An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.
Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.
In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improve its effectiveness. Single tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain due to the multifaceted cyber environment in MDO.
Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and an assistant professor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.