Why It Matters: Artificial intelligence is everywhere. But it’s humble leadership that leads our list.
What ideas and insights were people drawn to this year? The following list outlines the most-read articles from MIT Sloan’s Ideas Made to Matter team to date in 2023.
MIT Sloan professor emeritus Edgar Schein, a social psychologist who practiced his own tenets of humble leadership and humble inquiry, died at age 94. In his memory, we shared five of his most enduring management ideas.
In a new book, MIT professor Yossi Sheffi examines supply chain complexity, artificial intelligence, and the future of work. One case study? The incredible journey of the ordinary banana.
Data-literate leaders understand data well enough to make their best decisions, drive literacy throughout their organizations, and create a culture of trust in data.
Generative AI can boost worker productivity, but organizations must first establish a culture of accountability, reward peer training, and encourage role reconfiguration.
A new book from MIT Sloan’s Steven Spear provides leaders with a blueprint for designing, sustaining, and improving their organization’s sociotechnical systems.
If leaders want to better understand their organization’s performance, they should look to their employees and how they do their work.
Organizations succeed when they design their processes, routines, and procedures to encourage employees to problem-solve and contribute to a common purpose, write MIT Sloan senior lecturer Steven Spear and Gene Kim in their new book, “Wiring the Winning Organization.”
“When people have difficulty doing their work easily and well, despite investing their best time and energy to support the larger effort, we shouldn’t expect the enterprise as a whole to perform well either,” Spear and Kim write. “This is an organization that has not been wired to win.”
In this excerpt from their book, they outline the three collaborative layers of an organization and suggest three mechanisms leaders can engage to hone employees’ problem-solving skills.
The excerpt has been edited and condensed for clarity and length.
All organizations are sociotechnical systems — people working with other people, engaging (sometimes complex) technology to accomplish what they are collaborating on. Regardless of domain, collaborative problem-solving occurs on three distinct layers, where people focus their attention and express their experience, training, and creativity:
Layer 1 contains the technical, scientific, and engineered objects that people are trying to study, create, or manipulate. These may be molecules in drug development, code in software development, physical parts in manufacturing, or patient injuries or illnesses in medical care. For people in Layer 1, their expertise is around these technical objects (i.e., their structure and behavior), and their work is expressed through designing, analyzing, fabricating, fixing, repairing, transforming, creating, and so forth.
Layer 2 contains the scientific, technical, or engineered tools and instrumentation through which people work on Layer 1 objects. These may be the devices that synthesize medicinal compounds in drug development, the development tools and operational platforms in software development, technologies that transform materials in manufacturing, or the technologies to diagnose and treat patients’ illnesses and injuries. Layer 2 capabilities include the operation, maintenance, and improvement of these tools and instruments. These first two layers are the “technical” part of a sociotechnical system.
Layer 3 contains the social circuitry. This is the overlay of processes, procedures, norms, and routines — the means by which individual efforts are expressed and integrated through collaboration toward a common purpose. This is the “socio” part of a sociotechnical system.
Danger zones and winning zones for solving really difficult problems
Leaders manage the social circuitry (Layer 3) that determines whether their organizations get dismal or great outcomes. How this circuitry is designed and operated dictates the conditions in which people can solve difficult problems, continually generate great and new ideas, and put them into impactful practice. Certain conditions make it more difficult to solve problems or generate new and useful ideas. We call that the danger zone. Other conditions make getting good answers easier. We call that the winning zone.
In the danger zone, problems are complex, with many factors affecting the system at once, and their relationships are highly intertwined. Hazards are many and severe, risks of failure are high, and costs of failure can be catastrophic. Systems in the danger zone are difficult to control, and there are limited, if any, opportunities to repeat experiences, so feedback-based learning is difficult if not outright impossible.
In contrast, leaders enable much more advantageous conditions in the winning zone. Problems have been reframed so they are simpler to address. The hazards and risks have been reduced so failures are less costly, especially during design, development, testing, and practice. Problem-solving has been shifted into slower-moving situations, where the pace of experiences can be better controlled. Opportunities to learn by experience or experimentation are increased to allow more iteration. And finally, there is much more clarity about where and when to focus problem-solving efforts, because it is obvious when problems are occurring, so attention is given to containing and solving them.
When we leave ourselves and our colleagues in the danger zone, it becomes extremely difficult to develop and design products and services and to develop and operate systems through which we collaborate and by which we coordinate. In fact, in such conditions, given the complexity and pace of the environment, it’s often difficult to even recognize that significant problems are occurring and that they must be addressed to avert disaster.
In contrast, when we change our experiences so they happen in the winning zone, generating good answers to difficult problems is much easier, because people are better able to put their capabilities to best use. We can move ourselves from the danger zone to the winning zone using the three mechanisms of slowification, simplification, and amplification.
Let’s take a closer look at each of these mechanisms:
Slowification makes it easier to solve problems by pulling problem-solving out of the fast-paced and often unforgiving realm of performance (i.e., operations or execution). Shifting Layer 3 problem-solving into planning and practice allows people to engage in deliberative, reflective reasoning informed by experience and experiment, rather than having to constantly react with whatever habits, routines, and legacy approaches have already been ingrained.
Simplification makes the problems themselves easier to solve by reshaping them. Large problems are deliberately broken down into smaller, simpler ones through a combination of three techniques: incrementalization, modularization, and linearization. By doing so, we partition complex problems with many interacting factors into many smaller problems. These problems have fewer interacting factors, making them easier to solve. Furthermore, Layer 1 (technical object) problem-solving can be done in parallel, with less need for Layer 3 coordination, increasing independence of action.
Amplification makes it obvious there are problems and makes it clear whether those problems have been seen and solved. Mechanisms are built into Layer 3 (social circuitry) to amplify that little things are amiss, drawing attention to them early and often. This focuses attention on containing and resolving small and local glitches before they have a chance to become large and systemically disruptive.
Ideally, an organization will have the latitude to do all three: slow things down to make problem-solving easier, partition big problems into smaller ones that are simpler to solve, and amplify problems so they’re addressed sooner and more often. Even if we cannot do all three, doing two or even one still brings us closer to the winning zone, making it easier for us to take situations about which we know too little and can do too little and convert them into situations in which we know enough and can do enough.
DoDIIS 2023 — Modernizing the Joint Worldwide Intelligence Communication System, the government’s network for hosting top secret and sensitive compartmented information, is leading the priorities of the Defense Intelligence Agency’s chief information officer over the next year.
Speaking at the Department of Defense Intelligence Information System, or DoDIIS, conference, DIA CIO Doug Cossa said that JWICS, a system that dates back to the Gulf War and is used by the agency to store confidential intelligence, was developed at a time when DIA was challenged to figure out a way to transmit secure voice and video to the Pentagon.
“Flash forward to where we are today and how that system has evolved, we have over a million users that depend on that for transmitting top secret information,” Cossa said.
Cossa has been vocal about the need for JWICS modernization for years. In 2021, he said DIA was investing significantly in the system and was focused on updating equipment, building out cyber security tools and optimizing use cases for JWICS.
Most recently, the spotlight was put on JWICS when 21-year-old Massachusetts Air National Guard cyber transport systems apprentice Jack Teixeira allegedly leaked hundreds of classified documents on social media platform Discord. On Monday, the Air Force released findings of its investigation into Teixeira’s unauthorized disclosure of the documents and found, in part, that his access to JWICS “enabled him to view intelligence content and analysis that reside on” classified systems.
IT Modernization And Connectivity
Following JWICS modernization, Cossa said his other priorities included improving DIA’s information technology workforce and modernizing DoD intelligence information systems, an area that DIA “divested significantly from” in 2013.
He said that DIA’s shared desktop environment, which the agency has worked on with the National Geospatial-Intelligence Agency, now has more than 70,000 users, which “aids in the integration in how we share intelligence.” Next on his priority list is increasing international connectivity.
“If there’s anything that the recent crises that I’ve seen as CIO during my tenure here — whether it be the Afghanistan retrograde, Russia-Ukraine crisis, and now the Israel-Hamas conflict — all depended and still depend on our interactions and our intelligence sharing with international partners,” Cossa said. “COVID obviously was a significant shock to our system. … That really knocked us off course and forced us to recalibrate how we work day in and day out.”
Part of that adjustment included transitioning DIA’s software development and IT capabilities to the “unclassified fabric,” he added, which he referred to as the capability delivery pipeline, another priority area for his office in the coming year. Cossa described the unclassified pipeline as a place where DIA can deliver software onto secret and top secret networks.
“As we think about where we are in the world today… we’re wondering, how are we going to compete with countries like China that are now fighting for world dominance?” he said. “We are living in more chaotic times than ever with crises around the world and we realize that everything is at stake to include our national security. As we think about the future, we are, as technologists, on the frontline to respond.”
After a year of massive cuts, the tech job market is so unstable that the US government has come to be seen as an appealing, innovative employer.
Tech companies have laid off some 400,000 people worldwide in 2022 and 2023, according to Layoffs.fyi, a site that tracks tech industry job losses. With the market yet to right itself, and some people reexamining the role big tech firms play in society, public sector roles, complete with perks like pensions and a warm, fuzzy do-good feeling, are suddenly proving popular.
“This is a great nexus point where the need and capacity is out there,” says Keith Wilson, the talent engagement manager with US Digital Response, a nonprofit that helps governments with digital expertise. “We’re trying to help these state and local governments learn how to hire better for technical roles.”
Case in point: the US Department of Veterans Affairs hired 1,068 people into tech jobs over the past year, meeting its hiring goal, says Nathan Tierney, chief people officer for the department. To do so, the agency adjusted pay to narrow the gap between government and private sector roles, resulting in an average salary increase of $18,000—and nearly all workers across the department got raises.
It also reworked its application and recruiting strategies; rather than wait for workers to come to the hiring website, it went to find them at LinkedIn Live events and conferences. The department also advertises remote roles, and it is setting up hubs for workers in cities where tech workers congregate, like Seattle, Austin, and Charlotte. “I want to hire highly skilled folks,” Tierney says. “We have an opportunity to capitalize on that.”
There’s a lot of work to do. Red tape and slow processes shroud government work. And keeping pace with the private sector, where hiring strategies and salaries move fast, has traditionally been hard for governments. Then, once hired, those employees may face similar roadblocks when it comes to innovating in their jobs. Still, there’s movement by local and federal US government branches to bring in new talent.
In 2021, US president Joe Biden signed a $1 trillion infrastructure law. It included $1 billion in cybersecurity grants for state and local governments, along with additional money for federal agencies to spend on cybersecurity. This influx of cash comes as the tech sector slumps.
And interest in government jobs among tech workers remains strong. In late October, more than 3,000 people registered for a Tech to Gov career event, held by the Tech Talent Project, a nonprofit that helps the US government recruit for tech roles. One thousand more had signed up for a waiting list.
“It’s not just layoffs—what I have definitely seen is folks pausing in the tech sector,” says Jennifer Anastasoff, executive director at Tech Talent Project. “This has been a moment where folks have started pausing and started thinking about where they can make the most difference.”
A federal tech job portal had 107 openings as of mid-November. The salaries range from around $40,000 to nearly $240,000. The Office of Personnel Management, the human resources arm of the federal government, made a pitch to laid-off tech workers earlier this year, hoping to scoop up some 22,000 people into public sector tech roles. That office did not respond to emails seeking updates on the hiring process for tech jobs. But smaller government agencies around the country have made strides in luring high-profile private sector workers.
New York recently hired a former high-ranking employee from Blue Cross Blue Shield of Massachusetts to serve as the state’s first chief customer experience officer. Shelby Switzer took a job as the director of Baltimore’s new Digital Services Team earlier this year. Three new employees were hired underneath Switzer—all from the private sector. The group’s first project was to modernize permitting; instead of going to several offices in person to obtain permits for events and street closures, people can now apply online. It seems simple, but for the local government, that’s a huge deal.
One of those benefits came in hiring a UX designer, says Switzer. “Having somebody who is the expert in thinking about the usability of services in technology is just totally new.” But working in government can mean one tech team is trying to innovate while stuck in a bigger, slow-moving pool. “There is a ton of organizational inertia,” Switzer says. “Government wasn’t really designed to be efficient.”
These kinds of small changes are hard to come by in government, but more cities and states are making investments in tech infrastructure. In early November, in Pennsylvania, the Commonwealth Office of Digital Experience, or CODE PA, launched a system that lets residents, businesses, charities, and schools look up whether they are eligible for a refund after paying for a permit, license, or certification, and then request a refund.
Pennsylvania is investing big in tech and AI under Josh Shapiro, its new governor. It hired Amaya Capellán, who moved from Comcast to the Pennsylvania government this year, trading corporate life for the role of Pennsylvania’s chief information officer. Some initial priorities for Capellán include finding ways for governments to use generative AI and updating permitting and licensing.
Capellán says people may be realizing that tech companies are treating them as replaceable, pushing them to reconsider roles in tech. “It’s really inspiring to think about the kind of ways you can affect people’s lives for good.”
For the first time, a team of Princeton physicists has been able to link together individual molecules into special states that are quantum mechanically “entangled.” In these bizarre states, the molecules remain correlated with each other—and can interact simultaneously—even if they are miles apart, or indeed, even if they occupy opposite ends of the universe. This research was recently published in the journal Science.
“This is a breakthrough in the world of molecules because of the fundamental importance of quantum entanglement,” said Lawrence Cheuk, assistant professor of physics at Princeton University and the senior author of the paper. “But it is also a breakthrough for practical applications because entangled molecules can be the building blocks for many future applications.”
These include, for example, quantum computers that can solve certain problems much faster than conventional computers, quantum simulators that can model complex materials whose behaviors are difficult to model, and quantum sensors that can measure faster than their traditional counterparts.
“One of the motivations in doing quantum science is that in the practical world, it turns out that if you harness the laws of quantum mechanics, you can do a lot better in many areas,” said Connor Holland, a graduate student in the physics department and a co-author on the work.
The ability of quantum devices to outperform classical ones is known as “quantum advantage.” And at the core of quantum advantage are the principles of superposition and quantum entanglement. While a classical computer bit can assume the value of either 0 or 1, quantum bits, called qubits, can simultaneously be in a superposition of 0 and 1.
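The superposition idea can be made concrete with a short sketch (ours, not from the article): a qubit is described by two complex amplitudes, and the squared magnitudes of those amplitudes give the probabilities of measuring 0 or 1.

```python
import math

def measurement_probabilities(a, b):
    """Return (P(0), P(1)) for a qubit in the state a|0> + b|1>.

    The amplitudes must satisfy |a|^2 + |b|^2 = 1, so the two
    probabilities always sum to one.
    """
    return abs(a) ** 2, abs(b) ** 2

# A classical bit corresponds to one of the two basis states:
print(measurement_probabilities(1, 0))  # definitely measures 0

# An equal superposition: both outcomes are equally likely.
h = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(h, h)
print(p0, p1)  # each ~0.5
```

This is only the bookkeeping of a single qubit's state, of course; the physics of preparing and measuring such states is what the experiments below are about.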
The latter concept, entanglement, is a major cornerstone of quantum mechanics and occurs when two particles become inextricably linked with each other so that this link persists, even if one particle is light years away from the other particle. It is the phenomenon that Albert Einstein, who at first questioned its validity, described as “spooky action at a distance.”
Since then, physicists have demonstrated that entanglement is, in fact, an accurate description of the physical world and how reality is structured.
“Quantum entanglement is a fundamental concept,” said Cheuk, “but it is also the key ingredient that bestows quantum advantage.”
But achieving quantum advantage and controllable quantum entanglement remains a challenge, not least because engineers and scientists are still unclear about which physical platform is best for creating qubits.
In the past decades, many different technologies—such as trapped ions, photons, and superconducting circuits, to name only a few—have been explored as candidates for quantum computers and devices. The optimal quantum system or qubit platform could very well depend on the specific application.
Until this experiment, molecules had long defied controllable quantum entanglement. But Cheuk and his colleagues found a way, through careful manipulation in the laboratory, to control individual molecules and coax them into these interlocking quantum states.
They also believed that molecules have certain advantages—over atoms, for example—that made them especially well-suited for certain applications in quantum information processing and quantum simulation of complex materials. Compared to atoms, for example, molecules have more quantum degrees of freedom and can interact in new ways.
“What this means, in practical terms, is that there are new ways of storing and processing quantum information,” said Yukai Lu, a graduate student in electrical and computer engineering and a co-author of the paper. “For example, a molecule can vibrate and rotate in multiple modes. So, you can use two of these modes to encode a qubit. If the molecular species is polar, two molecules can interact even when spatially separated.”
Nonetheless, molecules have proven notoriously difficult to control in the laboratory because of their complexity. The very degrees of freedom that make them attractive also make them hard to control or corral in laboratory settings.
Cheuk and his team addressed many of these challenges through a carefully thought-out experiment. They first picked a molecular species that is both polar and can be cooled with lasers. They then laser-cooled the molecules to ultracold temperatures, where quantum mechanics takes center stage.
Individual molecules were then picked up by a complex system of tightly focused laser beams, so-called “optical tweezers.” By engineering the positions of the tweezers, they were able to create large arrays of single molecules and individually position them into any desired one-dimensional configuration. For example, they created isolated pairs of molecules and defect-free strings of molecules.
Next, they encoded a qubit into a non-rotating and rotating state of the molecule. They were able to show that this molecular qubit remained coherent; that is, it remembered its superposition. In short, the researchers demonstrated the ability to create well-controlled and coherent qubits out of individually controlled molecules.
To entangle the molecules, the researchers had to make them interact. By using a series of microwave pulses, they were able to make individual molecules interact with one another in a coherent fashion.
By allowing the interaction to proceed for a precise amount of time, they were able to implement a two-qubit gate that entangled two molecules. This is significant because such an entangling two-qubit gate is a building block for both universal digital quantum computing and for simulation of complex materials.
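The effect of such an entangling two-qubit gate can be sketched in pure Python (an illustrative textbook example, not the paper's actual microwave-pulse mechanism): starting from |00>, a Hadamard on the first qubit followed by a CNOT produces the Bell state (|00> + |11>)/sqrt(2), in which the two qubits' measurement outcomes are perfectly correlated.

```python
import math

def apply(gate, state):
    """Multiply a 4x4 gate matrix by a 4-component two-qubit state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / math.sqrt(2)

# Hadamard on qubit 1, identity on qubit 2 (basis order |00>,|01>,|10>,|11>).
H_I = [[h, 0,  h,  0],
       [0, h,  0,  h],
       [h, 0, -h,  0],
       [0, h,  0, -h]]

# CNOT: flips qubit 2 exactly when qubit 1 is 1.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]                      # start in |00>
state = apply(CNOT, apply(H_I, state))    # Bell state (|00> + |11>)/sqrt(2)
probs = [abs(a) ** 2 for a in state]
print(probs)  # weight ~0.5 on |00> and |11>, zero on |01> and |10>
```

Measuring either qubit alone gives a random result, but the two results always agree, which is the correlation that no classical assignment of independent bit values can reproduce.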
The potential of this research for investigating different areas of quantum science is large, given the innovative features offered by this new platform of molecular tweezer arrays. In particular, the Princeton team is interested in exploring the physics of many interacting molecules, which can be used to simulate quantum many-body systems where interesting emergent behavior, such as novel forms of magnetism, can appear.
“Using molecules for quantum science is a new frontier, and our demonstration of on-demand entanglement is a key step in demonstrating that molecules can be used as a viable platform for quantum science,” said Cheuk.
In a separate article published in the same issue of Science, an independent research group led by John Doyle and Kang-Kuen Ni at Harvard University and Wolfgang Ketterle at the Massachusetts Institute of Technology achieved similar results.
“The fact that they got the same results verifies the reliability of our results,” Cheuk said. “They also show that molecular tweezer arrays are becoming an exciting new platform for quantum science.”
The Federal Electronic Health Record Modernization Office, along with ISO – International Organization for Standardization and Centers for Disease Control and Prevention Division of Readiness and Response Science, is proud to publish a new standard that helps countries better prepare for national and international public #health emergencies. The standard helps collect, manage and predict public health emergency preparedness and response. It’s a global game-changer for managing pandemics, outbreaks, toxic exposure and hazardous events. Read more about it at https://lnkd.in/e7qBWWwP. #healthinteroperability #healthdatastandards #ISO #GlobalHealthEngagement
ANSI SERVES AS ISO TC 215 SECRETARIAT
In an effort to assure better preparedness for national and international public health emergencies, the International Organization for Standardization (ISO) Technical Committee (TC) 215, Health Informatics, has developed a newly released standard that provides business requirements, terminology, and vocabulary for public health emergency preparedness and response (PH EPR) information systems. The standard is applicable to emergencies that encompass emerging pathogens, including COVID-19, chemical and nuclear accidents, environmental disasters, criminal acts, and bioterrorism.
The international standard, ISO 5477:2023, is relevant to policy makers, regulators, project planners, and management of PH EPR information systems, PH EPR data analysts, and informaticians, and may also be of interest to stakeholders including incident managers, PH educators, standards developers, and academia.
Information that drives a decision-making process is the most critical asset during all phases of PH emergencies. To that end, PH EPR information systems play a critical role in fulfilling major PH emergency response functions, including plans and procedures; physical infrastructure; information and communication technology (ICT) infrastructure; information systems and standards; and human resources.
The standard sets forth business rules for PH EPR information systems, and includes an informative framework for mapping existing semantic interoperability standards for emergency preparedness and response to PH EPR information systems. The document, which included input from 34 nations, was developed based on concepts and methodology described in:
The World Health Organization (WHO) Framework for a Public Health Operations Centre and Supporting WHO Handbooks A and C
ISO 30401, Knowledge management systems requirements
ISO 13054, Knowledge management of health information standards
ISO 22300, Security and resilience vocabulary
ISO 22320, Security and resilience emergency management guidelines for incident management
ISO 1087, Terminology work and terminology science
“This standard is designed to engage all global stakeholders involved in responding to public health emergencies. It fosters collaboration among participants committed to advancing the Global Health Security Agenda through enhanced information exchange,” said Dr. Nikolay Lipskiy, health scientist, Centers for Disease Control and Prevention (CDC), and project leader. “Our primary objective is to reduce barriers to information interoperability, thus improving critical data timeliness and usability. We anticipate that this pioneering standard will pave the way for additional documents focusing on more specific aspects in the near future.”
“It is crucial to acknowledge the invaluable contributions of the participants in our standard development team, with special recognition to the Federal Electronic Health Record Modernization (FEHRM) office,” added Dr. Lipskiy. “Their dedicated efforts have significantly elevated the quality and impact of this standard, demonstrating a collective commitment to advancing global public health infrastructure.”
About the U.S. TAG to ISO/TC 215, Health Informatics
The U.S. TAG to ISO TC 215, Health Informatics, represents national interests on health information technology (HIT) and health informatics standards at ISO. ANSI administers the U.S. TAG to ISO TC 215 to coordinate national standards activities for existing and emerging health sectors. The U.S. TAG is guided by the ANSI cardinal principles of consensus, due process, and openness.
The scope of ISO TC 215, and consequently of the U.S. TAG, is standardization in the field of health informatics, to facilitate capture, interchange, and use of health-related data, information, and knowledge to support and enable all aspects of the health system.
Sometimes the things we do with technology are like smoking on an airplane. ‘It seemed like a good idea at the time.’
Take, for example, the use of AI in predicting criminal behavior. Guilty as charged! I used to do that for the Dutch police, with great success, and everybody involved thought it was a good idea and a fair compromise between fundamental rights and societal benefits. We were right then, but we would be wrong now! The AI Act forbids it. Similarly, we once heralded data as the “new oil,” eagerly linking personal data across marketing databases to forecast human behavior. Only later did we realize the potential harm, prompting the introduction of GDPR to protect personal data.
The just-passed(!) European AI Act represents a similar milestone. It reflects a growing understanding that we must more cautiously regulate certain technologies. I must say it scares me a little bit to think of future AI initiatives that later ‘seemed like a good idea at the time’. Autonomous weapons? Multi-purpose household robots? Computers doing all the programming? Fingers crossed for our world. May it all work out great!
About the AI Act: as with every step of this work, it is unfortunately a puzzle to find out exactly what was agreed upon, and we’ll need to wait a few more months for the full text.
Since my focus is on engineering and cyber security: the Act requires high-risk systems to ensure cyber security and also to undergo adversarial testing, with some exceptions. This will require security standards to be ready for use. I happen to be working in CEN/CENELEC on this, supported by the OWASP AI Exchange at owaspai.org. Wish us luck, or better yet, join the AI Exchange or join CEN/CENELEC through your national Standardization organization.
European Union officials have reached a provisional deal on the world’s first comprehensive laws to regulate the use of artificial intelligence.
After 36 hours of talks, negotiators agreed rules around AI in systems like ChatGPT and facial recognition.
The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.
The US, UK and China are all rushing to publish their own guidelines.
The proposals include safeguards on the use of AI within the EU as well as limitations on its adoption by law enforcement agencies.
Consumers would have the right to launch complaints and fines could be imposed for violations.
EU Commissioner Thierry Breton described the plans as “historic”, saying it set “clear rules for the use of AI”.
He added it was “much more than a rulebook – it’s a launch pad for EU start-ups and researchers to lead the global AI race”.
European Commission President Ursula von der Leyen said the AI Act would help the development of technology that does not threaten people’s safety and rights.
In a social media post, she said it was a “unique legal framework for the development of AI you can trust”.
The European Parliament defines AI as software that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”.
ChatGPT and DALL-E are examples of what is called “generative” AI. These programs learn from vast quantities of data, such as online text and images, to generate new content which feels like it has been made by a human.
So-called “chatbots” – like ChatGPT – can have text conversations. Other AI programs like DALL-E can create images from simple text instructions.
Modern enterprises are powered by distributed software applications that need always-on, secured, responsive, globally optimized access. A secured hybrid cloud strategy is essential to deliver this application experience for internal and external users. Our vision for hybrid cloud is clear: to help clients accelerate positive business outcomes by building, deploying and managing applications and services anytime, anywhere.
Traditional CloudOps and DevOps models that involve manual workflows may not deliver the required application experience. IBM strongly believes it’s time for a new approach, one driven by the applications themselves. The new paradigm is to simplify hybrid and multicloud application delivery with secured, performant application-centric networking, to help increase application velocity and improve collaboration between IT teams.
Just as cloud provides a virtual platform for consuming underlying resources like compute and storage, app-centric connectivity offers a new network overlay focused on connectivity between application and service endpoints. Because it is fully abstracted from the underlying networks that provide physical connectivity, it is dramatically simpler to work with.
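The overlay idea can be sketched in a few lines: services register by name, connectivity is declared between names, and the physical networks underneath never appear in the model. All names below (`ServiceEndpoint`, `Overlay`) are illustrative, not part of the product's API.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEndpoint:
    name: str
    location: str  # e.g. "aws:us-east-1" or "on-prem:dc1"

@dataclass
class Overlay:
    endpoints: dict = field(default_factory=dict)
    links: set = field(default_factory=set)

    def register(self, ep: ServiceEndpoint):
        self.endpoints[ep.name] = ep

    def connect(self, a: str, b: str):
        # Connectivity is declared by service name only; the physical
        # path across clouds is the overlay's problem, not the user's.
        self.links.add(frozenset((a, b)))

    def can_reach(self, a: str, b: str) -> bool:
        return frozenset((a, b)) in self.links

mesh = Overlay()
mesh.register(ServiceEndpoint("frontend", "aws:us-east-1"))
mesh.register(ServiceEndpoint("orders-db", "on-prem:dc1"))
mesh.connect("frontend", "orders-db")
print(mesh.can_reach("frontend", "orders-db"))  # True
```

The point of the sketch is the separation of concerns: the caller reasons only about service names, never about the clouds or networks in the `location` field.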
How can application-centric connectivity help IT teams? For CloudOps teams, this approach helps achieve visibility and optimization. For DevOps, it helps achieve business agility. Both teams can benefit from better team collaboration with a common user experience (UX), custom topology views and the ability to manage and view SLOs and resource status.
The new network paradigm in action: IBM Hybrid Cloud Mesh
IBM Hybrid Cloud Mesh, a multicloud networking solution announced earlier this year, is now available. This new SaaS product is designed to allow organizations to establish simple, scalable secured application-centric connectivity. The product is also designed to be predictable with respect to latency, bandwidth and cost. It is engineered for both CloudOps and DevOps teams to seamlessly manage and scale network applications, including cloud-native ones running on Red Hat OpenShift.
You’ll find a seamless on-ramp for applications and services across heterogeneous environments; for example, when combining Hybrid Cloud Mesh with the DNS traffic-steering capabilities of IBM NS1 Connect, a SaaS solution for content, services and application delivery to millions of users.
Architecture of IBM Hybrid Cloud Mesh:
Two main architecture components are key to how the product is designed to work:
The Mesh Manager provides the centralized management and policy plane with observability.
Gateways implement the data plane of Hybrid Cloud Mesh and act as virtual routers and connectors. These are centrally managed through Mesh Manager and deployed both in the cloud and on customer premises. There are two types of gateways: 1) Edge Gateway, deployed near workloads for forwarding, security enforcement, load balancing and telemetry data collection; and 2) Waypoint, deployed at Points of Presence (POPs) close to internet exchanges and colocation points for path, cost and topology optimization.
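As a rough illustration of why routing through a waypoint can beat a direct edge-to-edge path, consider scoring candidate paths on latency and cost. The weights, numbers, and function names here are hypothetical, not product behavior.

```python
def path_score(latency_ms: float, cost_per_gb: float,
               latency_weight: float = 0.7) -> float:
    # Lower is better; the weight trades latency against egress cost.
    # Cost is scaled by 100 to bring $/GB into a comparable magnitude.
    return latency_weight * latency_ms + (1 - latency_weight) * cost_per_gb * 100

def choose_path(paths: dict) -> str:
    return min(paths, key=lambda name: path_score(*paths[name]))

paths = {
    "direct":       (80.0, 0.09),   # (latency ms, $/GB)
    "via-waypoint": (60.0, 0.05),   # PoP near an internet exchange
}
print(choose_path(paths))  # "via-waypoint"
```

A real traffic-engineering control plane would of course measure these inputs continuously and re-evaluate as conditions change; the sketch only shows the shape of the optimization.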
Key features of IBM Hybrid Cloud Mesh:
Continuous infrastructure and application discovery: Mesh Manager continuously discovers and updates multicloud deployment infrastructure, making the discovery of deployed applications and services an automated experience. Continuous discovery allows Mesh Manager to maintain awareness of changes in the cloud assets.
Seamless connectivity: DevOps or CloudOps can express their connectivity intent through the UI or CLI, and Mesh connects the specified workloads, regardless of their location.
Security: Built on zero-trust principles, Mesh allows communication based on user intent only. All gateways are signed, and the attack surface is reduced because they can be configured only through Mesh Manager.
Observability: Mesh provides comprehensive monitoring through the Mesh Manager day0/day1 UI, offering details on deployment environments, gateways, services and connectivity metrics.
Traffic engineering capabilities: Leveraging waypoints, Hybrid Cloud Mesh is designed to optimize paths for cost, latency and bandwidth, enhancing application performance and security.
Integrated workflows: DevOps, NetOps, SecOps and FinOps workflows come together, providing end-to-end application connectivity through a single pane of glass.
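The zero-trust, intent-only model described in the security feature above reduces to a default-deny sketch: traffic passes only if a matching intent was declared. The intent pairs and function below are illustrative, not the Mesh Manager API.

```python
# Declared connectivity intents (source service, destination service).
intents = {("payments", "ledger-db"), ("frontend", "payments")}

def allowed(src: str, dst: str) -> bool:
    # Zero trust: no implicit reachability; only declared intents pass.
    return (src, dst) in intents

print(allowed("payments", "ledger-db"))  # True
print(allowed("frontend", "ledger-db"))  # False: never declared
```

Default-deny is the key design choice: adding a workload grants it no reachability until someone expresses an intent for it.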
Take the next step with Hybrid Cloud Mesh
We are excited to showcase a tech preview of Hybrid Cloud Mesh supporting Red Hat Service Interconnect gateways, simplifying application connectivity and security across platforms, clusters and clouds. Red Hat Service Interconnect, announced 23 May 2023 at Red Hat Summit, creates connections between services, applications and workloads across hybrid environments.
We’re just getting started on our journey building comprehensive hybrid multicloud automation solutions for the enterprise. Hybrid Cloud Mesh is not just a network solution; it’s engineered to be a transformative force that empowers businesses to derive maximum value from modern application architecture, enabling hybrid cloud adoption and revolutionizing how multicloud environments are utilized. We hope you join us on the journey.
I’m COO at Unstoppable Domains, a Web3 digital identity platform
I was recently engaged in a conversation about my work on Watson back in 2015 when a comment I received confused me: “AI just started! Was ChatGPT created in 2015?” I realized then that many people don’t know the depth of AI’s history.
First of all, what is AI? Artificial Intelligence, a specialty within computer science, focuses on creating systems that replicate human intelligence and problem-solving abilities. These systems learn from data, process information, and refine their performance over time, distinguishing them from conventional computer programs that require human intervention for improvement.
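That distinction can be made concrete with a toy example: a model that starts out wrong and improves itself from data, rather than following a fixed, hand-coded rule.

```python
# Toy "learning from data": fit y = w*x by gradient descent and watch
# the error shrink as the system refines its own parameter.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w, lr = 0.0, 0.05  # start with a wrong guess and a small learning rate

def mse(w):
    # Mean squared error of the current guess over the data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

before = mse(w)
for _ in range(100):
    # Gradient of the error with respect to w, averaged over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w downhill
after = mse(w)

print(before > after, round(w, 2))  # True 2.0
```

No human told the program that the answer was 2; it converged there by repeatedly measuring and reducing its own error, which is the essence of the "refine their performance over time" distinction.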
The landscape of artificial intelligence (AI) is a testament to the relentless pursuit of innovation by the technologists who have shaped its trajectory. In this journey, key players have emerged, each contributing to the evolution of AI in unique and transformative ways.
The Way Back When
The history of AI can be traced back to the 1950s, when Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing Test as a measure of machine intelligence. Around the same time, John McCarthy, a “founding father” of AI, created LISP, the first programming language for AI research, which is still used today.
A period of low interest and funding in AI followed, driven by the collapse of the specialized AI hardware (LISP machine) market and the shortcomings of expert systems. Despite decreased funding, the 1990s and early 2000s brought AI into everyday life with innovations like speech recognition software and, in 2002, the Roomba.
The resurgence of interest was followed by renewed funding for research, which allowed even more progress to be made.
IBM’s Impact (1997-2011)
IBM started its AI adventure when Deep Blue beat the world chess champion, Garry Kasparov, in a highly publicized match in 1997, becoming the first program to beat a reigning world chess champion. In 2011, IBM then created Watson, a question-answering (QA) system, which went on to win Jeopardy! against two former champions in a televised game. Recognized globally for that groundbreaking performance, Watson showcased the transformative potential of cognitive computing.
Innovation in this period revolutionized industries such as healthcare, finance, customer service, and research. In the healthcare sector, Watson’s cognitive capabilities proved instrumental for medical professionals, offering support in diagnosing and treating complex diseases. By analyzing medical records, research papers, and patient data, it facilitated precision medicine practices, empowering healthcare practitioners with valuable insights into potential treatment options. Watson’s transformative impact extended to customer service, where it reshaped interactions through the provision of intelligent virtual assistants.
Watson’s influence also reached the realm of research and development, empowering researchers to analyze vast amounts of scientific literature. This catalyzed the discovery of new insights and potential breakthroughs by uncovering patterns, correlations, and solutions hidden within extensive datasets.
“Watson was one of the first usable AI engines for the Enterprise,” said Arvind Krishna, CEO of IBM. “IBM continues to drive innovation in AI and Generative AI to help our customers move forward.”
Watson’s legacy is profound, showcasing the formidable power of AI in understanding human language, processing vast datasets, and delivering valuable insights across multiple industries. Its pioneering work in natural language processing and cognitive computing set the stage for subsequent innovations like ChatGPT, marking a transformative era in the evolution of artificial intelligence. IBM continues to innovate in AI today.
The Assistants and Beyond: Amazon AMZN and Apple AAPL (2011-2014)
Amazon’s Alexa and Apple’s Siri both marked significant leaps in human-computer interaction. These voice-controlled virtual assistants transformed how users access information, control their environments, and shop online, showcasing AI’s potential to improve daily life.
Beyond voice assistance, Amazon leverages AI for personalized recommendations on its e-commerce platform, enhancing the customer shopping experience. Additionally, the company’s robust cloud computing service, Amazon Web Services (AWS), provides scalable and efficient infrastructure for AI development, enabling businesses and developers to leverage cutting-edge machine learning capabilities. Amazon’s commitment to advancing AI technologies aligns with its vision of making intelligent and intuitive computing accessible to users in various aspects of their daily lives, from the living room to the online marketplace.
Google GOOG: Deep Learning Breakthroughs (2012-2019)
Google, synonymous with innovation, has been a driving force in AI research. DeepMind, acquired by Google in 2014, marked a turning point with groundbreaking achievements in deep learning and reinforcement learning. In another milestone, two Google researchers (Jeff Dean and Andrew Ng) trained a neural network to recognize cats by showing it unlabeled images, with no background information provided.
Google’s commitment to democratizing AI is evident through TensorFlow, an open-source machine learning library empowering developers worldwide to create and deploy AI applications efficiently. Additionally, Google’s advancements in natural language processing, image recognition, and predictive algorithms have shaped the landscape of AI applications across diverse domains with Google Search, Photos, and Assistant demonstrating a commitment to enhancing user experiences and making AI an integral part of daily life.
OpenAI: Expanding the Horizons of Natural Language Processing (2020-present)
Generative AI (GEN AI) refers to a category of artificial intelligence systems designed to generate content, often in the form of text, images, or other media, that is contextually relevant and resembles content created by humans. Unlike traditional AI models that may follow pre-programmed rules or make predictions based on existing data, generative AI can produce original and diverse outputs.
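The “learn a distribution from data, then sample novel output” idea can be shown in miniature with a bigram chain. Real generative models like GPT use transformers at enormous scale, but the underlying principle is the same.

```python
import random
from collections import defaultdict

# Learn word-to-word statistics from a tiny corpus.
corpus = "the cat sat on the mat the cat saw the dog".split()
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)  # record which words follow which

def generate(start: str, length: int = 5, seed: int = 0) -> str:
    # Sample a new sequence: each next word is drawn from the learned
    # distribution of followers, so the output is statistically like
    # the training text without being a verbatim copy.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break  # dead end: this word never had a successor
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))
```

Every adjacent word pair in the output was seen in training, yet the full sentence may never have appeared, which is exactly the "original but data-shaped" property the definition above describes.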
One prominent example of generative AI is OpenAI’s GPT (Generative Pre-trained Transformer) series, including models like GPT-3. In late 2022, ChatGPT made headlines by attracting 1 million users within a week of its launch. By November 2023, the platform had amassed roughly 100 million weekly users, showcasing the significant impact of OpenAI’s innovations on the AI landscape.
This success story underscores OpenAI’s commitment to advancing NLP and its ability to deliver platforms that resonate with a vast user base. The influence extends beyond user metrics: ChatGPT’s success paved the way for subsequent innovations like GPT-4, reinforcing OpenAI’s position as a trailblazer in conversational AI. Microsoft’s MSFT strategic investment further validates OpenAI’s crucial role in the evolution of AI and NLP, signifying industry-wide recognition of its contributions.
On November 6th, OpenAI introduced GPTs, custom versions of ChatGPT that combine instructions, extra knowledge, and actions. The launch of the Assistants API facilitates the seamless integration of assistant experiences into individual applications. These advancements are viewed as foundational steps toward AI agents, with OpenAI committed to enhancing their capabilities over time. The new GPT-4 Turbo model brings improvements in function calling, knowledge incorporation, pricing, support for new modalities, and more. Additionally, OpenAI now provides a copyright shield for enterprise clients, exemplifying its ongoing commitment to innovation and client support in the evolving landscape of generative AI.
Built for safety: Anthropic (2021-present)
Anthropic is an AI safety startup founded in 2021 that leverages constitutional AI, an approach to training models to be helpful, harmless, and honest. The research team developed Claude, a large language model trained on that constitutional basis. Anthropic’s research-driven approach and focus on AI safety, along with techniques like data filtering and controlled training environments to avoid biases and errors, place the company at the forefront of developing responsible and beneficial AI systems. The company also emphasizes common sense: the AI’s grasp of intuitive physics, psychology, and social norms allows it to give reasonable answers. Both Google and Amazon have invested in Anthropic.
Intel INTC and NVIDIA NVDA: Powering the AI Revolution Through Hardware
Companies like Intel and NVIDIA have played a crucial and often underestimated role in the AI landscape by providing the hardware infrastructure that underpins remarkable advancements. Their development of powerful processors and graphics processing units (GPUs) optimized for machine learning tasks has been pivotal, facilitating not only the training but also the deployment of complex AI models and accelerating the pace of innovation in the field.
What’s Next? Responsible AI
Responsible AI refers to the design, development, and deployment of AI systems in an ethical and socially-aware manner. As AI becomes more powerful and ubiquitous, practitioners must consider the impacts these systems have on individuals and society. Responsible AI encompasses principles such as transparency, explainability, robustness, fairness, accountability, privacy, and human oversight.
Developing responsible AI systems requires proactive consideration of ethical issues during all stages of the AI lifecycle. Organizations should conduct impact assessments to identify potential risks and harms, particularly for marginalized groups. Teams should represent diverse perspectives when designing, building, and testing systems to reduce harmful bias, which is why companies like Credo AI have jumped in to focus on a responsible-AI framework.
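As one concrete example of the kind of check an impact assessment might include, here is a minimal demographic-parity test: it measures how much positive-outcome rates differ across two groups. The data and the 0.1 threshold are illustrative only.

```python
def positive_rate(outcomes):
    # Fraction of positive decisions (1 = approved, 0 = denied).
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    # Absolute difference in approval rates between the two groups.
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Per-applicant decisions for two demographic groups (invented data).
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% approved

gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 3), gap > 0.1)  # 0.375 True: flag for human review
```

Demographic parity is only one of several fairness definitions (others weigh error rates or calibration), and which one applies is itself an ethical judgment the assessment has to make.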
Credo AI is an AI governance platform that streamlines responsible AI adoption by automating AI oversight, risk mitigation, and regulatory compliance. Its founder and CEO, Navrina Singh, commented: “The next frontier for AI is responsible AI. We must remain steadfast in mitigating the risks associated with artificial intelligence.”
What’s Next? The Physical World
Spatial AI and robotic vision represent an evolution in how artificial intelligence systems perceive and interact with the physical world. Integrating spatial data like maps and floorplans with computer vision allows robots and drones to navigate and operate safely. Robotic vision systems can now identify objects, read text, and interpret scenes in 3D space, giving robots unprecedented awareness of their surroundings and the mobility to take on more complex real-world tasks.
New spatial capabilities are unlocking tremendous economic potential beyond the $17B already raised for AI vision startups. Warehouse automation, last-mile delivery, autonomous vehicles, and advanced manufacturing are all powered by spatial AI and computer vision, and over time they will offer more dynamic and versatile interactions with physical environments. This could enable revolutionary applications, from robot-assisted surgery to fully-autonomous transportation.
What’s Next? Customer-Centric AI Apps
There is a shift happening right now in AI, from technology-centric solutions that solve infrastructure problems to customer-centric applications that comprehensively solve real-world human problems. This evolution signals a move beyond the initial excitement over AI’s capabilities to a phase where the technology meets the diverse and complex needs of end users.
While the concept of a data flywheel remains relevant (a reminder of the flywheel: more usage → more data → better model → more usage), the sustainability of data moats is on shaky ground. The real moats, it seems, are the customers themselves. Engagement depth, productivity benefits, and monetization strategies are emerging as more durable sources of competitive advantage.
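The flywheel loop is easy to simulate: more usage adds data, data improves model quality with diminishing returns, and quality feeds back into usage. The coefficients below are invented purely for illustration.

```python
def spin_flywheel(steps: int = 5):
    # usage -> data -> quality -> usage, turn after turn.
    usage, data = 100.0, 0.0
    history = []
    for _ in range(steps):
        data += usage                        # more usage -> more data
        quality = 1 - 1 / (1 + data / 500)   # better model, saturating
        usage *= 1 + 0.5 * quality           # better model -> more usage
        history.append(round(quality, 3))
    return history

print(spin_flywheel())  # quality rises each turn, but ever more slowly
```

The saturating quality curve is the reason data moats erode: each extra unit of data buys less model improvement, so the compounding advantage flattens out even as usage keeps growing.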
Hassan Sawaf, CEO and founder of AIXplain, highlights the power of a customer-focused engagement approach. “We believe in AI agents that can help swiftly craft personalized solutions by leveraging cutting-edge technology from leading AI providers in real time. That’s what we’ve created with Bel Esprit and 40,000 state-of-the-art models, with the capability to onboard hundreds of thousands more from platforms with proprietary sources in minutes. One click is all it takes for deployment, making Bel Esprit a game-changer in the AI landscape.”
Reflection
As we reflect on the success of what is now recognized as GEN AI, it is imperative to acknowledge the collective contributions of the tech titans that I’ve mentioned. Their advancements in deep learning, natural language processing, accessibility, and hardware infrastructure have not only shaped the trajectory of AI, but have also ushered in an era where intelligent technologies play an integral role in shaping our digital future.
Celebrating the pioneers in AI emphasizes their innovative spirit and unwavering commitment to advancing the boundaries of technology, paving the way for the sophisticated AI models that define our current era. A round of applause for the diligent technologists working at these companies.
I’m COO at Unstoppable Domains and an alumna of AWS and IBM. I’m also chairwoman of the board of the nonprofit Girls in Tech, a former member of the Diversity Committee at the World Economic Forum, and currently a founding member of the Blockchain Friends Forever social movement for women in Web3. I hold and trade modest amounts of ETH and BTC. These days I’m passionate about enterprise use cases for decentralized technologies. My latest book, “The Tiger and the Rabbit,” is shipping on 8/30! It’s a business fable about AI, Web3 and the Metaverse!