January 7, 2022 | Podcast
The Fourth Industrial Revolution will be people powered
Companies at the forefront of the technology frontier are empowering their workers with digital technologies—and the skills they need to use them.
For many members of the world’s workforces, change can sometimes be seen as a threat, particularly when it comes to technology. This is often coupled with fears that automation will replace people. But a look beyond the headlines shows that the reverse is proving to be true, with Fourth Industrial Revolution (4IR) technologies driving productivity and growth across manufacturing and production at brownfield and greenfield sites. These technologies are creating more and different jobs that are transforming manufacturing and helping to build fulfilling, rewarding, and sustainable careers. What’s more, with 4IR technologies in the hands of a workforce empowered with the skills needed to use them, an organization’s digital transformation journey can move from aspiration to reality.
In this special edition of the McKinsey Talks Operations podcast, host Daphne Luchtenberg brings you highlights from a panel discussion on the importance of building workforce capabilities and shifting mindsets for successful digital transformation. The discussion took place recently as part of Lighthouses Live, the flagship event of the Global Lighthouse Network—a World Economic Forum initiative in collaboration with McKinsey & Company.
The conversation was led by Francisco Betti, head of advanced manufacturing and value chains and member of the Executive Committee at the World Economic Forum. It also featured Revathi Advaithi, CEO of Flex; Robert Bodor, president and CEO of Protolabs; and David Goeckeler, CEO of Western Digital. The following is an edited version of their conversation.
Daphne Luchtenberg: In this new world of work, the impact of technology means new skills and new roles are emerging as fast as other roles change.
David Goeckeler: You know, change can be an opportunity for everybody. So I think we look at it through that lens. Change doesn’t have to be a threat; it’s just the opposite.
Daphne Luchtenberg: I’m Daphne Luchtenberg, one of your hosts for McKinsey Talks Operations, and that was David Goeckeler, CEO of Western Digital.
His comments were part of a conversation about the use of digital technologies in manufacturing and production, and how there is a need for training and development programs to teach workers the skills to use [these technologies].
So while there is a common perception that digitization and automation are a threat to the world’s workers, companies at the forefront of the technology frontier have actually created jobs—different, new roles that are much more high tech than the roles of the past.
And with the current labor mismatch being felt in many countries, the time is now to further engage workers for a digitally enabled future.
This focus is backed by growing research proving that workforce engagement is key. Over the last several years, the World Economic Forum, in collaboration with McKinsey, has surveyed thousands of manufacturing sites on their way to digitizing operations and has identified about 90 leaders. These are the lighthouses—sites and supply chains chosen by an independent panel of experts for leadership in creating dramatic improvements with technology. Together they create the Global Lighthouse Network, committed to sharing what they’ve learned along the way. A common theme among these sites is their worker centricity—they are supporting the frontline workforce, upskilling, and making jobs easier and more interesting.
In this special edition of McKinsey Talks Operations, we’ll hear from the CEOs of a few of these leading companies about how they are engaging their people and putting technology in the hands of the workforce. The conversation originally took place during Lighthouses Live, a recent event of the Global Lighthouse Network. The discussion is led by Francisco Betti of the World Economic Forum.
Let’s listen in.
Francisco Betti: I am delighted to be joined by an impressive group of leaders from our Global Lighthouse Network: Revathi Advaithi, chief executive officer of Flex; Robert Bodor, president and CEO of Protolabs; and David Goeckeler, chief executive officer of Western Digital.
Revathi, Robert, David—a very warm welcome, and thank you for joining us today. We have an exciting conversation ahead of us. We will discuss how you are shaping the future direction of your companies by leveraging Fourth Industrial Revolution technologies and empowering and engaging your people.
Revathi Advaithi: The most important thing is that we’re a company of people. We’re 165,000 people in 30 countries. And I’m a big believer that culture is at the forefront of everything we do. And great manufacturing comes because you have a great culture.
My belief is that the recognition of [the Flex factory in Althofen, Austria] as a lighthouse site is because they have a fantastic culture—a culture that’s focused on innovation, that is very ready to embrace change, is willing to learn from other companies across the world. So it’s such an amazing recognition for that particular site. And it really opens up the avenue for every Flex manufacturing site to really strive to be at the level that Althofen is and to be at the level of the other 90 manufacturing sites that are lighthouse-recognized.
So we are very, very excited about it. We think that this is the start of using the Fourth Industrial Revolution to really build on the capability of our sites, and just build a sustainable manufacturing legacy for Flex.
Francisco Betti: Western Digital has also joined the Global Lighthouse Network with two sites this year—one in Penang, Malaysia, and the other in Prachinburi, Thailand.
In your lighthouses, we have seen success driven by a combination of technology and people. Can you share how Western Digital has been keeping people at the center of its digital transformation journey to realize its full potential?
David Goeckeler: Keeping people at the center is actually pretty straightforward because people are the number-one priority in our operations. We work in a very dynamic market, and we know that our teams, and the skill of our teams, is really what’s going to define our success in the future. So keeping them at the center is critical. And it’s not just the operations team; it’s everybody in the company. We have over 60,000 employees—from the people in operations all the way to the executive team—and everybody is involved and behind this exciting effort. So keeping our people, reskilling our people, building that future-ready workforce, is what’s critical for us, but also for our employees.
Any time in life when you learn new skills, when you educate yourself, I think you have the opportunity to live a better life. It’s not just about our company being better and us being prepared for the future; it’s about all of our employees being ready for that future—keeping them at the center, having them highly engaged, all of the reskilling, getting them excited about what the future holds.
This isn’t some kind of executive mandate; it’s the employees leading it, pulling the company to it. Keeping them all deeply engaged keeps them directly at the center of what we’re doing. And, as I said, having our employees fully engaged, really building that future-ready workforce, is going to be what defines the success of Western Digital.
Francisco Betti: Thank you very much, David. It’s great to hear about the importance of culture and people from both you and Revathi.
Let me ask you a follow-up question. What advice would you give to those companies that are still stuck in pilot purgatory and are trying to scale digital transformations?
David Goeckeler: First of all, what we just talked about is workforce engagement. It’s got to be a pull: the workforce has to be fully engaged, and you have to take the time to train and explain what success is going to mean for everybody. And you have to get that alignment from the shop floor all the way to the executive team on what going to a new model is going to deliver. And, as I said, not just for the business, but for all the individuals.
Then I would point people to infrastructure readiness. This is a new world. In manufacturing, there’s going to be a lot of fast and big data. Make sure you have a scalable industrial IoT [Internet of Things] stack that’s going to be able to handle that and be ready.
So first make sure the workforce is engaged. Make sure the infrastructure is ready so that you don’t run into roadblocks. And then really prioritize. Pick use cases that are going to have a big impact. As the team says, “Think big, start small, and then scale fast.”
We’ve had a lot of success doing that—picking use cases that are going to have big business impacts. People see the value. You start to build momentum. And once you get some momentum going, it’s easier to keep it going and build faster and more of it. So, again, workforce engagement, infrastructure readiness, and then start with some prioritized use cases. Start small but think big. And then scale as fast as you can.
Francisco Betti: That is great advice, David. Thank you.
Revathi, let me come back to you now. Flex’s lighthouse in Austria was facing tough competition from lower-cost regions. However, your teams were able to leverage technology to build a more attractive product lineup. What are the key lessons your company learned from this? How does it inform your future strategy?
Revathi Advaithi: When you walk into our Althofen site, the first thing you notice is the “can do” culture. As the world went through labor arbitrage and manufacturing moving to more competitive regions of the world, Althofen has been a thriving site that has focused on using technology as a competitive advantage.
We have a site that is very well trained in terms of skilling. They’re able to skill and reskill, like David talked about, at an amazing pace, and they manage change really well. And the second thing is tremendous resiliency. They’re able to bring up new products at a faster pace than any other site that I’m aware of, just because they have that spirit of innovation and the focus on technology.
Pretty much any complexity of product, they’re able to bring into their facility and scale up for a customer, and really respond to any of the market dynamics present. All of this has resulted in a site with tremendous operational rigor and lots of agility in terms of how they operate.
The results have been incredible for that site. They’ve had tremendous revenue growth while improving margins. But most importantly, they’ve made some sustainable change, which I really love. CO2 emissions have improved significantly for that site. And we have driven reductions, in terms of our travel costs and those things in that site, just by use of technology—whether you’re thinking about simulation or any of those other technologies that have been used.
Francisco Betti: Thank you, Revathi. Amazing achievements.
Robert, this seems like the perfect opportunity to bring you in. Firstly, many congratulations for the recognition of your Plymouth site as a lighthouse—Protolabs’ first lighthouse in our global network.
As a medium-size enterprise, you embarked on an amazing journey to transition from providing prototypes to becoming an at-scale production supplier—and you did that by incrementally developing new digital capabilities.
What did you do to further accelerate your 4IR journey, considering your company was already a digital native?
Robert Bodor: As you alluded to, Protolabs was founded over two decades ago with a digital mindset from the start. We began as an injection-molding company looking to transform the traditional manufacturing process. Our mission was to automate traditional manufacturing in order to provide molded parts in days at a fraction of the price of traditional molders.
Over time, we extended this digitalization approach to other services, including CNC [computerized numerical control] machining, sheet metal fabrication, and 3-D printing. So, Revathi, you’re right, we love additive manufacturing at Protolabs.
As our name implies, we targeted engineers, who had needs for prototypes to begin with. But over time, we found that our customers were using us for production-part needs and that they valued us for our quality, our reliability, and our willingness to make parts on demand with no minimum-order quantities, so that they could virtualize their inventory and reduce their supply-chain risks, especially in times when demand was volatile.
So that realization was really key for us. And that launched the 4IR journey that you mentioned, Francisco, from being a prototype provider to, now, also a production provider. To do that, we had to extend our digital thread, which connects our online quoting platform to the shop floor and to the customer.
We already had end-to-end automation in place that allowed us to make a mold from scratch and shoot molded parts in one to 15 days. But now, we needed to extend that for these production applications. So we adopted 4IR technologies to expand that system. And it included things like process automation, digital-part inspection and validation, and process control, which included implementing an industrial IoT stack that allows us to conduct real-time monitoring of our mold presses and associated equipment. And then close the loop in all of that.
All of this expanded the digital thread and the digital twin of key elements of our production processes so that we truly had this end-to-end connection from the online quote all the way through the production process and, ultimately, to the customer.
Lastly, we also implemented a scaled agile development framework, because software is at the core of our business and what we do. And this framework allowed several hundred software developers who are serving our injection-molding business to be agile and coordinated at that scale and to respond to the needs of the plant and the customers as they evolved.
Francisco Betti: Excellent. Thank you for sharing that, Robert. It sounds like an amazing journey. David, coming back to you now, and I’d like to focus once again on the importance of people.
Your lighthouses in Thailand and Malaysia have several thousand workers, and you’ve focused heavily on upskilling and reskilling. In fact, in Thailand, 60 percent of your workforce was reskilled to support and accelerate technology adoption. And that resulted in zero job losses, which is just fantastic.
How are you turning this approach of reskilling at scale into a competitive advantage for your company?
David Goeckeler: Our successes depended on our people. And let me give a little bit of background on what these people are building. Western Digital is a diversified storage company. An easy way to think about us is, 40 percent of the data that’s stored in the world is stored on a device that our team built.
That’s kind of an amazing stat: 40 percent of the data in the world that’s stored is stored on a device that these teams built. And the demand for that storage is increasing at a 35 percent compound annual growth rate. So there are plenty of things to do, and the technology allows us to build that.
And it’s our responsibility to equip and empower that team for our short-term and our long-term success. This is a very large imperative that we have a workforce that’s ready for the future that we’re building. We have thousands of engineers who are designing the products of the future that are going to enable the digital economy we all live in. Making sure we have a workforce that’s ready to build that technology is critically important to us.
So it’s really about making Western Digital the employer of choice in the regions that you saw. And that’s about that stronger workforce engagement—training them, letting people know that when you come to Western Digital, you’re not just going to do the job you have today, but you’re going to learn new skills.
We’re able to take our very experienced employees and our workforce that really knows how our business works and bring them into the future, and at the same time attract new people into the business. So I think it’s a win for everybody, and it’s been a great journey and a tremendous success.
Francisco Betti: Thank you, David. Robert, can I ask you what your thoughts are here?
Robert Bodor: I would agree with David’s comments. And furthermore, I would add that the manufacturing industry today, particularly the American manufacturing industry, is experiencing a severe labor shortage. And this has potential long-term implications.
A National Association of Manufacturers study indicated that over two million manufacturing jobs could go unfilled by 2030. As a digital manufacturer, we’ve worked to automate a great deal of our manufacturing process, which allows us to be more efficient with our workforce. And that’s one of the competitive advantages that’s coming to us from our 4IR initiatives.
However, our employees are absolutely critical to our success. So the challenge is real. And at Protolabs, we’re dedicated to creating what we hope are long-term career opportunities for our employees on the shop floor. And that requires considerable investment in creating learning opportunities that will help them grow.
We’ve put a really concerted focus on upskilling our employees to ensure that they’re able to grow in their careers and develop the skills that are vital in this Fourth Industrial Revolution. But for us, that includes things like in-house training and certification programs for key roles, like our mold technicians, for example.
Our online learning portal offers hundreds of courses that can help our employees to grow. [We provide] tuition reimbursement for continued learning opportunities at universities and trade schools. Further, we really work to incorporate technology on the job so that we can improve the employee experience on the manufacturing floor and support their on-the-job training through technology.
Ultimately, our goal is to ensure that our employees have the path to become experts in the modern best-practice methods that we’re using, such as scientific molding in the case of Plymouth, and also to grow other skills, like A3 problem solving, change management, leadership development.
Francisco Betti: Excellent, Robert. Thank you. Revathi, one final question to you. At Flex, we have seen your incredible efforts to reskill almost the entire IT team and your shop floor operators. They are all smart manufacturing experts by now.
How are you thinking about Fourth Industrial Revolution upskilling programs as part of your future strategy?
Revathi Advaithi: Francisco, just like Robert and David talked about, I think it’s core for the survival of companies, and, more importantly, it’s core for our people strategy, because the best way to keep our employees, our colleagues, excited about what they do is to make sure that they are at the forefront of every technology they use.
I’ll give you an example. The facility here in Austin typically makes a lot of technology products, whether it is storage or security products, things like that. But recently, we had to start moving a lot of medical products into Austin.
One reason for this is that it’s a fantastic location. But two is because we also have a great team there. But the team had to really change their entire mindset. They had to learn a fully automated, highly sophisticated set of equipment and how to run it, and really pick up new skills that they didn’t have before, including FDA [US Food and Drug Administration] compliance for a lot of regulatory issues.
But we were able to train the team based on other sites, learn from them, and really change the competency of this site in the last couple of years. Althofen, the site that is recognized as a lighthouse today, has done that time and time again, many times over.
We have a system called Pulse that we deploy across the organization. Pulse, truly, is the heartbeat of the organization. Althofen was one of the first sites that deployed Pulse. They know in real time exactly where all the product is—what is coming in, what is leaving, how much inventory is in the system—so they can give real-time updates to the customer to provide them a seamless transition.
The idea of all those sites was “unless we learn first and we get to the table first, it is survival of the fittest and the best team wins,” right? So we are able to have sites that have the culture of “we want to be the best.” And what has been amazing about [the Global Lighthouse Network] is we get the ability to benchmark and learn from other sites, then bring it in, and then really reskill our workforce.
Francisco Betti: There are millions of facilities and companies around the world that we want to reach and engage in the unique learning opportunity the Global Lighthouse Network provides. Our network will continue to grow, and we invite you all to reach out to us to be able to experience the journey toward becoming a lighthouse.
Daphne Luchtenberg: That was a great discussion, and thank you again to our panelists and our colleagues at the World Economic Forum for an insightful event. Once again, organizations are selected to be part of the Global Lighthouse Network based on their leadership and willingness to share their insights. If you are inspired to begin your own Lighthouse Learning Journey, we invite you to learn more on McKinsey.com/GLN, or on the World Economic Forum website.
This program is just one in a series that considers the challenges that companies and economies are facing, as well as the opportunities that leaders can seize for competitive advantage. We will explore other important topics, such as how to connect boardroom strategy to the front lines, where and when to infuse operations with technology, and why empowering the workforce with skills and capabilities is key to success.
ABOUT THE AUTHOR(S)
Revathi Advaithi is the CEO of Flex. Francisco Betti is the head of advanced manufacturing and value chains and a member of the Executive Committee at the World Economic Forum. Robert Bodor is the president and CEO of Protolabs. David Goeckeler is the CEO of Western Digital. Daphne Luchtenberg is a director of reach and relevance in McKinsey’s London office.
January 5, 2022 by TIFFANY YEUNG
Public cloud computing platforms allow enterprises to supplement their private data centers with global servers that extend their infrastructure to any location and allow them to scale computational resources up and down as needed. These hybrid public-private clouds offer unprecedented flexibility, value and security for enterprise computing applications.
However, AI applications running in real time throughout the world can require significant local processing power, often in remote locations too far from centralized cloud servers. And some workloads need to remain on premises or in a specific location due to low latency or data-residency requirements.
This is why many enterprises deploy their AI applications using edge computing, which refers to processing that happens where data is produced. Instead of cloud processing doing the work in a distant, centralized data center, edge computing handles and stores data locally in an edge device. And instead of being dependent on an internet connection, the device can operate as a standalone network node.
Cloud and edge computing have a variety of benefits and use cases, and can work together.
What Is Cloud Computing?
According to research firm Gartner, “cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies.”
There are many benefits when it comes to cloud computing. According to Harvard Business Review’s “The State of Cloud-Driven Transformation” report, 83 percent of respondents say that the cloud is very or extremely important to their organization’s future strategy and growth.
Cloud computing adoption is only increasing. Here’s why enterprises have implemented cloud infrastructure and will continue to do so:
- Lower upfront cost – The capital expense of buying hardware, software, IT management and round-the-clock electricity for power and cooling is eliminated. Cloud computing allows organizations to get applications to market quickly, with a low financial barrier to entry.
- Flexible pricing – Enterprises only pay for computing resources used, allowing for more control over costs and fewer surprises.
- Limitless compute on demand – Cloud services can react and adapt to changing demands instantly by automatically provisioning and deprovisioning resources. This can lower costs and increase the overall efficiency of organizations.
- Simplified IT management – Cloud providers give their customers access to IT management experts, allowing employees to focus on their business’s core needs.
- Easy updates – The latest hardware, software and services can be accessed with one click.
- Reliability – Data backup, disaster recovery and business continuity are easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.
- Save time – Enterprises can lose time configuring private servers and networks. With cloud infrastructure on demand, they can deploy applications in a fraction of the time and get to market sooner.
What Is Edge Computing?
Edge computing is the practice of moving compute power physically closer to where data is generated, usually an Internet of Things device or sensor. Named for the way compute power is brought to the edge of the network or device, edge computing allows for faster data processing, increased bandwidth and ensured data sovereignty.
By processing data at a network’s edge, edge computing reduces the need for large amounts of data to travel among servers, the cloud and devices or edge locations to get processed. This is particularly important for modern applications such as data science and AI.
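To make that pattern concrete, here is a minimal sketch in Python. It assumes a hypothetical cloud endpoint, and the sensor and model functions are placeholders rather than any specific product’s API: inference runs on the device, and only small result summaries ever leave it.

```python
# Illustrative edge-inference loop: raw data stays local; only compact
# results are sent upstream. The endpoint and helper functions below
# are hypothetical placeholders.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/api/detections"  # hypothetical URL

def read_frame() -> bytes:
    """Placeholder: read one frame from a local camera or sensor."""
    return b""

def run_model(frame: bytes) -> dict:
    """Placeholder: run a locally deployed model and return detections."""
    return {"objects": [], "timestamp": time.time()}

while True:
    frame = read_frame()          # raw data is produced and kept on-device
    result = run_model(frame)     # inference happens at the edge
    if result["objects"]:         # only a small JSON summary travels upstream
        payload = json.dumps(result).encode()
        request = urllib.request.Request(
            CLOUD_ENDPOINT, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
    time.sleep(0.1)               # pace the loop for this sketch
```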
What Are the Benefits of Edge Computing?
According to Gartner, “Enterprises that have deployed edge use cases in production will grow from about 5 percent in 2019 to about 40 percent in 2024.” Many high-compute applications, such as deep learning and inference, data processing and analysis, simulation and video streaming, have become pillars of modern life. As enterprises increasingly realize that these applications are powered by edge computing, the number of edge use cases in production should increase.
Enterprises are investing in edge technologies to reap the following benefits:
- Lower latency: Data processing at the edge results in eliminated or reduced data travel. This can accelerate insights for use cases with complex AI models that require low latency, such as fully autonomous vehicles and augmented reality.
- Reduced cost: Using the local area network for data processing grants organizations higher bandwidth and storage at lower costs compared to cloud computing. Additionally, because processing happens at the edge, less data needs to be sent to the cloud or data center for further processing. This results in a decrease in the amount of data that needs to travel, and in the cost as well.
- Model accuracy: AI relies on high-accuracy models, especially for edge use cases that require real-time response. When a network’s bandwidth is too low, it is typically alleviated by lowering the size of data fed into a model. This results in reduced image sizes, skipped frames in video and reduced sample rates in audio. When deployed at the edge, data feedback loops can be used to improve AI model accuracy and multiple models can be run simultaneously.
- Wider reach: Internet access is a must for traditional cloud computing. But edge computing can process data locally, without the need for internet access. This extends the range of computing to previously inaccessible or remote locations.
- Data sovereignty: When data is processed at the location it is collected, edge computing allows organizations to keep all of their sensitive data and compute inside the local area network and company firewall. This results in reduced exposure to cybersecurity attacks in the cloud, and better compliance with strict and ever-changing data laws.
What Role Does Cloud Computing Play in Edge AI?
Both edge and cloud computing can take advantage of containerized applications. Containers are easy-to-deploy software packages that can run applications on any operating system. The software packages are abstracted from the host operating system so they can be run across any platform or cloud.
The main difference between cloud and edge containers is the location. Edge containers are located at the edge of a network, closer to the data source, while cloud containers operate in a data center.
Organizations that have already implemented containerized cloud solutions can easily deploy them at the edge.
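As a rough illustration of that portability, assuming the Docker SDK for Python and a running local Docker daemon, the same few lines can launch an identical containerized service on a cloud instance or on an edge gateway:

```python
# Sketch: launch the same container image wherever this host happens to be.
# Assumes `pip install docker` and a running Docker daemon.
import docker

client = docker.from_env()  # connect to the local daemon

# The image and port mapping are illustrative; any containerized
# workload follows the same pattern on cloud and edge hosts alike.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
)
print(container.name)
```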
Often, organizations turn to cloud-native technology to manage their edge AI data centers. This is because edge AI data centers frequently have servers in 10,000 locations where there is no physical security or trained staff. Consequently, edge AI servers must be secure, resilient and easy to manage at scale.
Learn more about the difference between developing AI on premises rather than the cloud.
When to Use Edge Computing vs Cloud Computing?
Edge and cloud computing have distinct features, and most organizations will end up using both. Here are some considerations when looking at where to deploy different workloads.

| Cloud Computing | Edge Computing |
| --- | --- |
| Non-time-sensitive data processing | Real-time data processing |
| Reliable internet connection | Remote locations with limited or no internet connectivity |
| Dynamic workloads | Large datasets that are too costly to send to the cloud |
| Data in cloud storage | Highly sensitive data and strict data laws |
An example of a situation where edge computing is preferable over cloud computing is medical robotics, where surgeons need access to real-time data. These systems incorporate a great deal of software that could be executed in the cloud, but the smart analytics and robotic controls increasingly found in operating rooms cannot tolerate latency, network reliability issues or bandwidth constraints. In this example, edge computing offers life-or-death benefits to the patient.
Discover more about what to consider when deploying AI at the edge.
The Best of Both Worlds: A Hybrid Cloud Architecture
For many organizations, the convergence of the cloud and edge is necessary. Organizations centralize when they can and distribute when they have to. A hybrid cloud architecture allows enterprises to take advantage of the security and manageability of on-premises systems while also leveraging public cloud resources from a service provider.
A hybrid cloud solution means different things for different organizations. It can mean training in the cloud and deploying at the edge, training in the data center and using cloud management tools at the edge, or training at the edge and using the cloud to centralize models for federated learning. There are limitless opportunities to bring the cloud and edge together.
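As one hedged example of the “train in the cloud, deploy at the edge” pattern, the sketch below assumes PyTorch: a model trained on cloud resources is exported to a portable format such as ONNX so that an edge runtime can serve it locally. The tiny model is purely illustrative.

```python
# Sketch: train centrally, then export a portable artifact for the edge.
# The model below is a stand-in; real training would happen on cloud GPUs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
# ... cloud-side training loop would run here ...

model.eval()
dummy_input = torch.randn(1, 16)  # example input defining the export shape
# Export to ONNX so an edge runtime (e.g., ONNX Runtime) can load it locally.
torch.onnx.export(model, dummy_input, "model.onnx")
```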
Learn more about NVIDIA’s accelerated compute platform, which is built to run irrespective of where an application is — in the cloud, at the edge and everywhere in between.
by Nick Tasler, September 21, 2016
Change is an unavoidable constant in our work lives. Sometimes it’s within our control, but most often it’s not. Our jobs or roles change — and not always for the better. Our organizations undergo reorgs and revamp their strategies, and we need to adjust.
Fortunately, there are ways to adapt to change, and even to take advantage of it.
Find the humor in the situation. Trying to find a funny moment during an otherwise unfunny situation can be a fantastic way to create the levity needed to see a vexing problem from a new perspective. It can help others feel better as well.
Pioneering humor researcher Rod A. Martin, who has studied the effects of different styles of humor, has found that witty banter, or “affiliative humor,” can lighten the mood and improve social interaction. Just make sure it’s inclusive and respectful. A good rule of thumb is that other people’s strife is no laughing matter, but your own struggles can be a source of comedic gold.
Talk about problems more than feelings. One of the most common myths of coping with unwanted changes is the idea that we can “work through” our anger, fears, and frustrations by talking about them a lot. This isn’t always the case. In fact, research shows that actively and repeatedly broadcasting negative emotions hinders our natural adaptation processes.
That’s not to say you should just “suck it up” or ignore your troubles. Instead, call out your anxiety or your anger at the outset of a disorienting change so that you are aware of how it might distort your thinking or disrupt your relationships. Then look for practical advice about what to do next. By doing so, you’ll zero in on the problems you can solve, instead of lamenting the ones you can’t.
Don’t stress out about stressing out. Our beliefs about stress matter. As Stanford psychologist Kelly McGonigal argues in The Upside of Stress, your reaction to stress has a greater impact on your health and success than the stress itself. If you believe stress kills you, it will. If you believe stress is trying to carry you over a big obstacle or through a challenging situation, you’ll become more resilient and may even live longer.
When you start to feel stressed, ask yourself what your stress is trying to help you accomplish. Is stress trying to help you excel at an important task, like a sales presentation or a big interview? Is it trying to help you endure a period of tough market conditions or a temporary shift in your organizational structure? Is it trying to help you empathize with a colleague or a customer? Or is stress trying to help you successfully exit a toxic situation?
Stress can be a good thing — if you choose to see it that way.
Focus on your values instead of your fears. Reminding ourselves of what’s important to us — family, friends, religious convictions, scientific achievement, great music, creative expression, and so on — can create a surprisingly powerful buffer against whatever troubles may be ailing us.
In a series of studies spanning more than a decade, researchers led by Geoffrey Cohen and David Sherman have shown how people of all ages in a range of circumstances, from new schools and new relationships to new jobs, can strengthen their minds with a simple exercise: spending 10 minutes writing about a time when a particular value they hold positively affected them.
The technique works because reflecting on a personal value helps us rise above the immediate threat, and makes us realize that our personal identity can’t be compromised by one challenging situation.
Accept the past, but fight for the future. Even though we are never free from change, we are always free to decide how we respond to it.
Viktor Frankl championed this idea after returning home from three horrific years in Nazi death camps. He discovered that his mother, brother, wife, and unborn child were all dead. Everything in his life had changed. All that he loved was lost. But as fall became winter and winter gave way to spring, Frankl began to discover that even though he could never go back to the life he once had, he was still free to meet new friends, find new love, become a father again, work with new patients, enjoy music, and read books. Frankl called his hope in the face of despair “tragic optimism.”
Frankl’s story is an extreme example, of course, but that’s all the more reason why we should find inspiration from it. If we fixate on the limitations of a specific change, we inevitably succumb to worry, bitterness, and despair.
Instead, we should choose to accept the fact that change happens, and employ our freedom to decide what to do next.
Don’t expect stability. In the late 1970s a researcher at the University of Chicago named Salvatore Maddi began studying employees at Illinois Bell. Soon after, the phone industry was deregulated, and the company had to undergo a lot of changes. Some managers had trouble coping. Others thrived. What separated the two groups?
The adaptive leaders chose to view all changes, whether wanted or unwanted, as an expected part of the human experience, rather than as a tragic anomaly that victimizes unlucky people. Instead of feeling personally attacked by ignorant leaders, evil lawmakers, or an unfair universe, they remained engaged in their work and spotted opportunities to fix long-standing problems with customer service and to tweak antiquated pricing structures.
In contrast, Maddi found that the struggling leaders were consumed by thoughts of “the good old days.” They spent their energy trying to figure out why their luck had suddenly turned sour. They tried to bounce back to a time and a place that no longer existed.
Although each of these six techniques requires different skills to pull off — and you’ll probably gravitate toward some more than others — there’s one thing that you must do if you want to be more successful at dealing with change: accept it.
After having spent several years tinkering with the Defense Department’s acquisition rules, Congress is turning its attention to one of the other main factors that bogs down the DoD procurement system: The byzantine apparatus the Pentagon and lawmakers use to actually fund each military program.
In the crosshairs is what’s known as the Planning, Programming, Budgeting and Execution (PPBE) process, an early Cold War-era construct that, translated to the modern era, means Defense officials usually wait at least two years after they realize they need a new technology before money arrives to start solving the problem.
The 2022 Defense authorization bill President Joe Biden signed this week contains two separate provisions aimed at tackling the PPBE predicament. One sets up an expert commission to take the current process apart and come up with alternatives. A second orders DoD itself to create a plan to consolidate all of the IT systems it uses to plan and execute its budget.
The current stringent, timeline-focused process has its roots in the late 1950s and early 1960s, when Defense reformers, including former Secretary Robert McNamara, were centralizing the Pentagon’s control over the military services and applying then-modern management techniques to run the world’s biggest bureaucracy.
“And one of the relics of those days gone by is the current DoD budget process,” Sen. Jack Reed, the chairman of the Senate Armed Services Committee, said at a hearing earlier this year. “It was a product of McNamara, the Whiz Kids, and I can assure you those Whiz Kids are not kids anymore. It is 70 years.”
Under the new law, the new commission will start to take shape in February, and will have until September 2023 to deliver a final report to Congress and the Defense secretary. Its 14 members will be chosen by Defense Secretary Lloyd Austin, top House and Senate leaders, and the chairmen and ranking members of each of the congressional defense committees.
The long time horizon may have something to do with the fact that the commission’s task is massive. Its members, each of whom must be experts on budgeting or management, are being told to examine DoD’s current budgeting processes from top to bottom, and then compare them against other federal agencies, private sector companies, and other countries.
And it’s not just the process of drawing up budgets the commission will concern itself with. After all, that’s only the “B” in PPBE. It’ll also be tasked with scrutinizing the laborious, bureaucratic steps that start long before each year’s budget proposal, including the defense planning guidance and program objective memoranda (POMs) that eventually work their way into dollar figures for DoD planners and lawmakers to make decisions about how much to spend on each program.
Defense experts both in and outside of the Pentagon tend to believe the rule-intensive complexity involved in planning and requesting funding for DoD programs is one of the biggest impediments to speedy procurements, perhaps even more than the government’s acquisition rules themselves.
“Let’s say you find a great prototype someplace and you want to buy it. Well, did you have the foresight two years ago to plan it into your POM? If you didn’t, guess what? You have no authority to buy it,” Heidi Shyu, the undersecretary of Defense for Research and Engineering, said in a recent interview with Federal News Network. “And let’s say you’re going to plan it into your POM. Well, in two years’ time, maybe you’ll get the money, but the technology is already several years old. The PPBE process is too sequential, too linear, too old-fashioned. It works really well if you’re moving at a very slow, very methodical, very risk-averse pace. But in today’s world, when competition against your adversaries is key, it’s got to change.”
And by some estimates, getting a new technology or product funded under the current system within two years is actually pretty optimistic.
In a February research paper, co-authors Bill Greenwalt and Dan Patt concluded the “best case” scenario under the PPBE process is actually more like seven years. That “best case” assumes DoD is using new, faster acquisition techniques like other transaction authority or middle-tier acquisition, getting money on contract almost as soon as it receives it, and getting Congressional appropriations on time instead of operating under a continuing resolution.
Under those circumstances, PPBE processes — not acquisition rules — are actually the “pacing element(s)” in DoD technology development, they wrote.
“The PPBE and accompanying appropriations process is the glue that holds the other elements of the process triad together and must be a priority for reevaluation,” according to the paper, published by the Hudson Institute. “These elements require that the DoD document any planned new capabilities, forecast milestones, and system performance years in advance. U.S. innovation time is lengthening in large part because of delays before production starts: In conceptualization, requirements, planning, and acquisition processes, and are driven in large part by the structure of the U.S. resource allocation process. Understanding these processes provides the motivation for reform.”
Officially, the new panel will be called the Commission on Planning, Programming, Budgeting, and Execution, but it’s customary to name congressionally chartered expert groups after the section of the NDAA that created them, which would make this one the “Section 1004 Panel.”
A similarly structured commission, the Section 809 Panel, which Congress created in 2015 to examine DoD’s acquisition regulations, also concluded that lawmakers wouldn’t achieve the streamlining they were looking for until they reformed the PPBE process. They reached that conclusion even after making 98 recommendations to Congress that focused mainly on procurement policies, dozens of which have been implemented.
One of the panel’s key diagnoses was that the three pieces of what’s often called the “Big A” acquisition system, meaning DoD’s requirements process, its budgeting mechanisms and its procurement bureaucracy, are actually three separate systems with completely different chains of command.
“Vague lines of authority and accountability result in a lack of transparency and access to accurate data. Stovepiped objectives fragment the system. Strategic objectives and investment decisions are misaligned, with execution driven by impractical timelines, strained personnel resources, and inadequate time for planning and debate,” the commission wrote in the second volume of its 2,000-page report to Congress. “To achieve its tactical and strategic goals, DoD needs to get the PPBE system right.”
Dec 16, 2021
It’s about APIs; we’ll get to that shortly.
First there were containers
Now, Docker is about containers: running complex software with a simple `docker run postgres` command was a revelation to software developers in 2013, unlocking agile infrastructure that they’d never known. And happily, as developers adopted containers as a standard build and run target, the industry realized that the same encapsulation fits nicely for workloads to be scheduled in compute clusters by orchestrators like Kubernetes and Apache Mesos. Containers have become the most important workload type managed by these schedulers, but as the title says, that’s not what’s most valuable about Kubernetes.
Kubernetes is not about more general workload scheduling either (sorry Krustlet fans). While scheduling various workloads efficiently is an important value Kubernetes provides, it’s not the reason for its success.
Then there were APIs
Rather, the attribute of Kubernetes that’s made it so successful and valuable is that it provides a set of standard programming interfaces for writing and using software-defined infrastructure services. Kubernetes provides specifications and implementations – a complete framework – for designing, implementing, operating and using infrastructure services of all shapes and sizes based on the same core structures and semantics: typed resources watched and reconciled by controllers.
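A small sketch can make “typed resources watched and reconciled by controllers” concrete. Using the official Kubernetes Python client, a minimal watch loop over one typed resource (pods) looks roughly like this; a real controller would compare desired and observed state inside the loop and act on the difference:

```python
# Minimal watch loop over a typed resource, using the official Python
# client (`pip install kubernetes`). Real controllers reconcile here.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default"):
    pod = event["object"]
    # Reconciliation point: drive observed state toward the declared spec.
    print(event["type"], pod.metadata.name, pod.status.phase)
```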
To elaborate, consider what preceded Kubernetes: a hodge-podge of hosted “cloud” services with different APIs, descriptor formats, and semantic patterns. We’d piece together compute instances, block storage, virtual networks and object stores in one cloud; and in another we’d create the same using entirely different structures and APIs. Tools like Terraform came along and offered a common format across providers, but the original structures and semantics remained as variegated as ever – a Terraform descriptor targeting AWS stands no chance in Azure!
Now consider what Kubernetes provided from its earliest releases: standard APIs for describing compute requirements as pods and containers; virtual networking as services and eventually ingresses; persistent storage as volumes; and even workload identities as attestable service accounts. These formats and APIs work smoothly within Kubernetes distributions running everywhere, from public clouds to private datacenters. Internally, each provider maps the Kubernetes structures and semantics to that hodge-podge of native APIs mentioned in the previous paragraph.
Kubernetes offers a standard interface for managing software-defined infrastructure – cloud, in other words. Kubernetes is a standard API framework for cloud services.
And then there were more APIs
Providing a fixed set of standard structures and semantics is the foundation of Kubernetes’ success. Following on this, its next act is to extend that structure to any and all infrastructure resources. Custom Resource Definitions (CRDs) were introduced in version 1.7 to allow other types of services to reuse Kubernetes’ programming framework. CRDs make it possible to request not only predefined compute, storage and network services from the Kubernetes API, but also databases, task runners, message buses, digital certificates, and whatever else a provider can imagine!
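To illustrate with an invented example (not a real CRD): once a provider registers such a definition, clients request instances of the new type through the same API machinery used for built-in resources. Here the `Database` kind and `example.com` group are hypothetical.

```python
# Sketch: creating an instance of a hypothetical custom resource through
# the standard Kubernetes API, via the official Python client.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

database = {
    "apiVersion": "example.com/v1",   # invented group/version
    "kind": "Database",               # invented kind
    "metadata": {"name": "orders-db"},
    "spec": {"engine": "postgres", "storageGB": 20},
}

api.create_namespaced_custom_object(
    group="example.com",
    version="v1",
    namespace="default",
    plural="databases",
    body=database,
)
```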
As providers have sought to offer their services via the Kubernetes API as custom resources, the Operator Framework and related projects from SIG API Machinery have emerged to provide tools and guidance that minimize work required and maximize standardization across all these shiny new resource types. Projects like Crossplane have formed to map other provider resources like RDS databases and SQS queues into the Kubernetes API just like network interfaces and disks are handled by core Kubernetes controllers today. And Kubernetes distributors like Google and Red Hat are providing more and more custom resource types in their base Kubernetes distributions.
All of this isn’t to say that the Kubernetes API framework is perfect. Rather, it’s to say that it doesn’t matter (much), because the Kubernetes model has become a de facto standard. Many developers understand it, many tools speak it, and many providers use it. Even with warts, Kubernetes’ broad adoption, user awareness and interoperability mostly outweigh other considerations.
With the spread of the Kubernetes resource model, it’s already possible to describe an entire software-defined computing environment as a collection of Kubernetes resources. Like running a single artifact with `docker run ...`, distributed applications can be deployed and run with a simple `kubectl apply -f ...`. And unlike the custom formats and tools offered by individual cloud service providers, Kubernetes descriptors are much more likely to run in many different provider and datacenter environments, because they all implement the same APIs.
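For completeness, a rough Python equivalent of that `kubectl apply -f ...` workflow uses the utilities bundled with the official client; the manifest filename is illustrative:

```python
# Sketch: create a file of Kubernetes resources programmatically,
# roughly mirroring `kubectl apply -f`. The filename is illustrative.
from kubernetes import client, config, utils

config.load_kube_config()
k8s_client = client.ApiClient()
utils.create_from_yaml(k8s_client, "app-manifests.yaml")
```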
Kubernetes isn’t about containers after all. It’s about APIs.
Quantum effects in superconductors could give semiconductor technology a new twist. Researchers at the Paul Scherrer Institute PSI and Cornell University in New York State have identified a composite material that could integrate quantum devices into semiconductor technology, making electronic components significantly more powerful. They publish their findings today in the journal Science Advances.
Our current electronic infrastructure is based primarily on semiconductors. This class of materials emerged around the middle of the 20th century and has been improving ever since. Currently, the most important challenges in semiconductor electronics include further improvements that would increase the bandwidth of data transmission, energy efficiency and information security. Exploiting quantum effects is likely to be a breakthrough.
Quantum effects that can occur in superconducting materials are particularly worthy of consideration. Superconductors are materials in which the electrical resistance disappears when they are cooled below a certain temperature. The fact that quantum effects in superconductors can be utilized has already been demonstrated in the first quantum computers.
To find possible successors for today’s semiconductor electronics, some researchers—including a group at Cornell University—are investigating so-called heterojunctions, i.e. structures made of two different types of materials. More specifically, they are looking at layered systems of superconducting and semiconducting materials. “It has been known for some time that you have to select materials with very similar crystal structures for this, so that there is no tension in the crystal lattice at the contact surface,” explains John Wright, who produced the heterojunctions for the new study at Cornell University.
Two suitable materials in this respect are the superconductor niobium nitride (NbN) and the semiconductor gallium nitride (GaN). The latter already plays an important role in semiconductor electronics and is therefore well researched. Until now, however, it was unclear exactly how the electrons behave at the contact interface of these two materials—and whether it is possible that the electrons from the semiconductor interfere with the superconductivity and thus obliterate the quantum effects.
“When I came across the research of the group at Cornell, I knew: here at PSI we can find the answer to this fundamental question with our spectroscopic methods at the ADRESS beamline,” explains Vladimir Strocov, researcher at the Synchrotron Light Source SLS at PSI.
This is how the two groups came to collaborate. In their experiments, they eventually found that the electrons in both materials “keep to themselves.” No unwanted interaction that could potentially spoil the quantum effects takes place.
Synchrotron light reveals the electronic structures
The PSI researchers used a method well-established at the ADRESS beamline of the SLS: angle-resolved photoelectron spectroscopy using soft X-rays—or SX-ARPES for short. “With this method, we can visualize the collective motion of the electrons in the material,” explains Tianlun Yu, a postdoctoral researcher in Vladimir Strocov’s team, who carried out the measurements on the NbN/GaN heterostructure. Together with Wright, Yu is the first author of the new publication.
The SX-ARPES method provides a kind of map whose spatial coordinates show the energy of the electrons in one direction and something like their velocity in the other; more precisely, their momentum. “In this representation, the electronic states show up as bright bands in the map,” Yu explains. The crucial research result: at the material boundary between the niobium nitride NbN and the gallium nitride GaN, the respective “bands” are clearly separated from each other. This tells the researchers that the electrons remain in their original material and do not interact with the electrons in the neighboring material.
“The most important conclusion for us is that the superconductivity in the niobium nitride remains undisturbed, even if this is placed atom by atom to match a layer of gallium nitride,” says Vladimir Strocov. “With this, we were able to provide another piece of the puzzle that confirms: This layer system could actually lend itself to a new form of semiconductor electronics that embeds and exploits the quantum effects that happen in superconductors.”
More information: Tianlun Yu et al, Momentum-resolved electronic band structure and offsets in an epitaxial NbN/GaN superconductor/semiconductor heterojunction, Science Advances (2021). DOI: 10.1126/sciadv.abi5833. www.science.org/doi/10.1126/sciadv.abi5833
Journal information: Science Advances
Provided by Paul Scherrer Institute