healthcarereimagined

Envisioning healthcare for the 21st century

What It Takes to Lead Through an Era of Exponential Change – HBR

Posted by timmreardon on 11/01/2020
Posted in: Uncategorized.

by Aneel Chima and Ron Gutman
October 29, 2020

To say that 2020 is a year of disruption and change is to understate the obvious. Our daily lives, from educating our kids, managing our health, and working from home, to simple social rituals like dinner with friends, underwent rapid multi-dimensional change. Nascent trends — virtualization of the workspace, online learning, virtual health, and e-commerce — accelerated exponentially. Changes anticipated to take years occurred in months and, in some cases, weeks and even days. Understandably, leaders have struggled mightily to address these overlapping changes simultaneously, dealing with economic, health, and logistical crises that have unfolded at top speed.

Much as we might like to think of 2020 as an anomaly, it may not be. Conditions for accelerating change have been building for years. Advancements in information technology, automation, human interconnectivity, Artificial Intelligence, and the network effects among them created a new reality where change is much more rapid, continual, and ubiquitous. Covid-19 and its cascading effects laid bare a “new normal” of change, marked by three dimensions:

  • It’s perpetual — occurring all the time in an ongoing way.
  • It’s pervasive — unfolding in multiple areas of life at once.
  • It’s exponential — accelerating at an increasingly rapid rate.

This three-dimensional (3-D) change is defining our emerging future and, as a consequence, effective leadership will be defined by the ability to navigate this new reality.

The problem is, our models for leadership weren’t built for this kind of 3-D change. Human minds evolved for thinking linearly and locally in the face of challenge, not exponentially and systemically. Noted futurist Ray Kurzweil asserted, “The future is widely misunderstood. Our forebears expected it to be pretty much like their present, which had been pretty much like their past.” But projecting our pasts onto our futures exposes a fundamental error: Linear thinking can never catch up with and adapt to the perpetual, pervasive, and exponential change occurring around us — it’s simply too fast and too complex.

We need a new form of leadership, better equipped to navigate this unprecedented kind of change. For this purpose, we gathered, under the Stanford University umbrella, world-class luminaries — leaders who generate impact and change at a global scale — for conversations on the future of leadership and change-making. What emerged was a new vision of leadership, which we call Sapient Leadership. A Sapient Leader is characterized by being wise, sagacious, and discerning in navigating change while also being humane in the face of change that can often feel alien. This kind of leadership emphasizes — counterintuitively — an anti-heroic leader. Sapient Leaders exhibit authenticity, humility, and vulnerability, inspiring the necessary trust and psychological safety that drives shared learning and intelligence, resulting in enhanced collective performance and leading to a better future for all.

Limits of Linear Thinking in an Era of 3-D Change

In a world that’s relatively stable and mostly predictable, where change is incremental, punctuated by relatively few bursts of large change — what’s often called “disruption” — a model of leadership that relies on linear, local thinking can be useful. Much of the leadership literature focuses on the qualities, skills, abilities of the leader as an individual, and the linear and local maps they use to navigate the world. However, 3-D change presents a “high seas” environment where the leader navigates multiple domains — the waves and ever-evolving weather — of change simultaneously. In this environment, linear and local thinking can never adapt fast enough, leaving us increasingly ill-equipped to manage our rapidly changing business and work environments, our physical and mental health and well-being, and the major trends that shape our societies and cultures.

Change, by its nature, leaves people and organizations feeling confused, vulnerable, and fractured at a time when resilience, cohesion, and collaboration are necessary to perform at the highest levels. An emerging body of literature points to psychological safety, shared purpose, and distributed cognition as powerful drivers of leadership, team, and organizational performance, particularly in rapidly changing environments. The days of “leader as hero” — the solo, individualistic leader who inspires certainty in a deterministic way forward — are over. This evolution in how we think about change and leadership has only accelerated in the past year.

Fortuitously, our spring course at Stanford University, LEAD 111 “Luminaries: Life Lessons from Leaders and Change-makers,” became a study of how top-tier leaders embodied this emerging approach to leadership. Finalized one week before the Covid-19 pandemic struck the West Coast, the course was originally planned to create a new framework of leadership suitable for a time of disruption, accelerating change, and a highly polarized political and social environment; we designed it to engage leaders and change-makers in conversations across sectors, generations, and the political spectrum. We wanted to know how change-oriented leaders operate. As the pandemic unfolded, however, we expanded the course to create a new model of leadership. And recognizing that these questions were of immediate and broad interest, we invited more leaders within and beyond the Stanford community to weigh in on how they were navigating this 3-D change.

We engaged leaders across sectors to analyze — in real time — how they adapted: captains of industry, such as Doug McMillon, president and CEO of Walmart and chairman of the Business Roundtable; innovators in health care such as Toby Cosgrove, former CEO of the Cleveland Clinic, heart surgeon, and White House advisor; global social change-makers such as Halla Tómasdóttir, CEO of The B Team, investor, co-founder of Reykjavik University, and runner-up in Iceland’s 2016 presidential elections; leading-edge technologists and innovators such as Bret Taylor, president and COO of Salesforce, co-creator of Google Maps and the “Like” button, and board member of Twitter.

The essential question we had was this: If leadership is significantly defined by the ability to skillfully navigate 3-D change, what type of leadership is most effective for our emerging future, one defined by perpetual, pervasive, and exponential change? The answers that emerged formed the basis for Sapient Leadership.

How to Practice Sapient Leadership

The four pillars of Sapient Leadership emerged out of the discussions with our luminaries as they were navigating 3-D change in real time — each leader, in some capacity, articulated a version of these ideas. Leader humility, authenticity, and openness instill trust and psychological safety. In turn, trust and psychological safety empower individuals and teams to perform at their highest capabilities. Additionally, continuously learning teams are essential for keeping pace with and effectively navigating 3-D change. Finally, shared purpose and common values enhance focus, cohesion, and resilience in the midst of 3-D change.

1. Leader humility, authenticity, and openness instill trust and psychological safety.

In times of uncertainty, leaders often posture themselves, maximizing perception of power and control. In contrast, Halla Tómasdóttir modeled authenticity and humility when she reflected on her challenges as a candidate during the Icelandic presidential election. She, along with many of our luminaries, openly questioned the traditional paradigm of a leader as an individualistic hero. Instead, she highlighted the need to build trust through openness, saying, “what this crisis has shown us is that the leadership style of ‘I know it all’ is not a good leadership style for this moment or any other challenge we are going to continue to face and need to deal with collectively, collaboratively, with compassion, and with care.”

In a world of 3-D change, leaders need to continuously evolve themselves in order for their organization to evolve and grow. Rather than bending the organization to the will of the leader, a leader must be willing to instead exhibit humility and flexibility and change according to what the organization and circumstances require. Tómasdóttir exemplified this notion in her personal philosophy: “leadership is not given to the few — it’s inside of all of us, and life is all about unleashing that leadership.” This leadership style, which engenders trust and psychological safety within teams and organizations, animates much of her work with the B Team members that she’s leading — Sir Richard Branson, Arianna Huffington, Ajay Banga, Mary Robinson, and Marc Benioff, among others.

Our other luminaries echoed Tómasdóttir’s message about Sapient Leadership in the context of 3-D change. Doug McMillon said, “I don’t run Walmart, I help lead Walmart,” asserting that leadership of this sort needs to go beyond words. Leaders, he said, “have to live it. It has to be authentic. It has to be habitual.”

2. Trust and psychological safety empower individuals and teams.

3-D change amplifies our innate and evolved human tendencies to skew towards threat perception, anxiety, and divisiveness when experiencing stress and encountering ambiguity. As such, psychological safety is even more important during times of change. Individuals and the teams they comprise thrive in environments where trust and psychological safety are present. In a recent extensive study at Google, code-named Project Aristotle — for the maxim frequently attributed to Aristotle, “The whole is greater than the sum of the parts” — researchers found that the most important factor associated with the highest performing teams was psychological safety. When team members feel safe to be vulnerable in front of one another and to take risks, they perform at their best.

A consistent theme running throughout conversations with all of our luminaries was the essential nature of empowering teams and individuals to perform at their highest capabilities, especially now. “Change is not a solo sport,” said Bret Taylor, president and COO of Salesforce. “All great change has been done by great teams, great communities, and great networks.” When recalling times of rapid change throughout his career — from the creation of Google Maps to inventing the “like” button to scaling rapidly worldwide during the early days of Facebook — Bret asserted the importance of leadership that motivates strong relationships, fluid communication, and a foundation of trust to driving exceptional team performance.

3. Continuously learning teams enable effective navigation of 3-D change.

In a world where change is perpetual, pervasive, and exponential, Sapient Leaders, their teams, and their organizations must continually learn, update mental maps, deploy new tools, and course-correct based on the best ideas and practices. “If you want to make a change in something you have to get into it deep,” said Toby Cosgrove, describing his openness to learning transformative ideas from anywhere he could. When he was the CEO of the Cleveland Clinic, he regularly immersed himself in contexts where he could learn a better way. “If I heard somebody was doing something someplace in the world, I would pick up my pencil and paper and I would go and watch them do it,” he said. “I traveled someplace, learned something, and tried to bring it back and incorporate it.” What he was doing as a leader was both modeling leadership as a process of continual learning so that others would replicate it in their own way, and disseminating what he learned throughout the organization in order to improve existing processes and innovate new ones.

In a world of 3-D change, no single person or organization can master all knowledge across all domains, or build enough skills in breadth, depth, or pace to keep up. Instead, learning must be inspired by leadership, reinforced by culture, spread across a variety of domains, coordinated through the whole, and shared openly and actionably to create the broader picture. The analogy here is to mosaic vision, or the compound eye, where thousands of specific receptor units, oriented in different directions, work in coordination to create a composite perspective with a very wide angle of view, continually updating in real time as the organism moves through time and space. Without data and input to synthesize into understanding and action, a team or organization will be perpetually impoverished. To keep pace with 3-D change, Sapient Leaders need to enhance the breadth, depth, and pace of learning in their organizations to meet the extent and velocity of change.

4. Shared purpose and values enhance focus, cohesion, and resilience during 3-D change.

Professor Bill Damon, our esteemed colleague at Stanford University and one of the world’s leading purpose researchers, defines purpose as a stable intention to accomplish something that is both personally meaningful and serves the world beyond the self. Purpose, necessarily informed by our values and arising from a sense of personal meaning, unites our inner world with our actions in the world around us in a unique and powerful way, in service of a vision larger than ourselves.

In times of 3-D change, which by its nature amplifies uncertainty and ambiguity, shared purpose and values increase organizational focus, enhance team cohesion, and amplify personal and collective resilience. They can also powerfully mobilize large numbers of people to solve complex problems together.

Doug McMillon, CEO of Walmart and chairman of the Business Roundtable, recounted Walmart’s Five Guiding Principles, which provided the organization focus, resilience, and a basis for cohesive action during the early challenging stages of the pandemic.

  1. Start with the people: “Support our associates’ financial health, physical health, and emotional health and well-being. They are on the front line.”
  2. Focus on the fundamentals and first principles: “Serve our customers — we had to keep the food supply chain going to avoid chaos.”
  3. Make sure our own home is in order: “Managing the business through the crisis — making sure inventory is under control, making sure we have cash flow, etc.”
  4. Keep building for the future, not for the past: “Continue assertively into online e-commerce, grocery delivery, leverage what’s already been put into play that customers want.”
  5. We’re all in this together: “What can we do to help other people through this crisis that does good for this company and society?”

Doug recounted how these principles guided Walmart’s actions during the early turbulence of the Covid-19 pandemic. “We received a call from the White House with a request to open drive-through testing stations throughout the nation in Walmart parking lots,” he recalled. “Although we didn’t know exactly how to do it and didn’t have a way to charge for it, Walmart’s response was fully committed, rapid, at scale, and across distributed geographies. Walmart’s ethos during this time: ‘Don’t worry about the short-term financials. Go do what’s right and it will all eventually work out.’”

The shared purpose and values articulated in Walmart’s Five Guiding Principles enabled focused, cohesive, and resilient collective action by many people across multiple geographies in the early days of 3-D change. Focus and cohesion enabled rapid learning of new skills, decisiveness during uncertainty, and working together towards a shared goal bigger than any individual or the company. Further, resilience supplied the courage to try something new and execute quickly, without giving up, in the face of ongoing change and challenge.

The Future of Leadership

Along with the myriad challenges it brought, the singular realization of 2020 is that 3-D change is the new normal. Navigating perpetual, pervasive, and exponential change is the quintessential test of effective leadership in this era. Leaders, teams, and organizations that don’t skillfully navigate change will fail. Mastering this new reality requires fundamental enhancements to our collective capabilities. Sapient Leadership enables the creation of perpetual, pervasive, and exponential capacity building necessary for handling 3-D change effectively. In addition, our recent conversations with Sapient Leaders have uncovered new ways in which exponential and transformative technologies can further enhance and amplify human capabilities. This topic is the basis for a future article we are preparing.

The key to Sapient Leadership is that it fits into the long history of the evolution of our species. Sapient, by definition, refers to the nature of humans — it is in our nature to adapt or risk perishing. The challenge of 3-D change is that it amplifies the pressures on leaders, teams, and organizations to evolve and adapt faster, or become irrelevant. Change that used to take place over years and decades is now taking place in weeks or days. We, as a species, have never confronted change of this magnitude or at this pace. Sapient Leadership is a framework that enables accelerated adaptation in a wise and humane way. It builds into its structure the imperative for leaders, teams, and organizations to continuously evolve in order to overcome the challenges of 3-D change. Sapient Leaders and their successful organizations change with change itself.

Article link: https://hbr.org/2020/10/what-it-takes-to-lead-through-an-era-of-exponential-change?

Aneel Chima, PhD, is the Director of Health and Human Performance and of the Stanford Flourishing Project. Outside of academia he is co-founder and managing partner of AT THE CORE, a consulting boutique specializing in facilitating transformative change through enhancing the emotional, social, and neurophysiological drivers of team and leadership thriving.


Ron Gutman is an inventor (HOPES Health Operating System, Dr. AI), a serial technology/healthcare entrepreneur (his companies have served more than 500 million users worldwide), an investor, an author (of the popular TED Book and talk on the Powers of Smiling, and other publications on innovation, technology, and leadership), and a Stanford lecturer.

AI Engineers Need to Think Beyond Engineering – HBR

Posted by timmreardon on 11/01/2020
Posted in: Uncategorized.

by Donald Martin, Jr. and Andrew Moore
October 28, 2020

Artificial Intelligence (AI) has become one of the biggest drivers of technological change, impacting industries and creating entirely new opportunities. From an engineering standpoint, AI is just a more advanced form of data engineering. Most good AI projects function more like muddy pickup trucks than spotless race cars — they are a workhorse technology that humbly makes a production line 5% safer or movie recommendations a little more on point. However, more so than many other technologies, it is very, very easy for a well-intentioned AI practitioner to inadvertently do harm when they set out to do good. AI has the power to amplify unfair biases, making innate biases exponentially more harmful.

As Google AI practitioners, we understand that how AI technology is developed and used will have a significant impact on society for many years to come. As such, it’s crucial to formulate best practices. This starts with the responsible development of the technology and mitigating any potential unfair bias which may exist, both of which require technologists to look more than one step ahead: not “Will this delivery automation save 15% on the delivery cost?” but “How will this change affect the cities where we operate and the people — at-risk populations in particular — who live there?”

This has to be done the old-fashioned way: by human data scientists understanding the process that generates the variables that end up in datasets and models. What’s more, that understanding can only be achieved in partnership with the people represented by and impacted by these variables — community members and stakeholders, such as experts who understand the complex systems that AI will ultimately interact with.

Faulty causal assumptions can lead to unfair bias.

How do we actually implement this goal of building fairness into these new technologies — especially when they often work in ways we might not expect? As a first step, computer scientists need to do more to understand the contexts in which their technologies are being developed and deployed.

Despite our advances in measuring and detecting unfair bias, causation mistakes can still lead to harmful outcomes for marginalized communities. What’s a causation mistake? Take, for example, the observation during the Middle Ages that sick people attracted fewer lice, which led to an assumption that lice were good for you. In actual fact, lice don’t like living on people with fevers. Causation mistakes like this, where a correlation is wrongly thought to signal a cause and effect, can be extremely harmful in high-stakes domains such as health care and criminal justice. AI system developers — who usually do not have social science backgrounds — typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes.

For instance, the researchers who discovered that a medical algorithm widely used in U.S. health care was racially biased against Black patients identified that the root cause was the mistaken causal assumption, made by the algorithm designers, that people with more complex health needs will have spent more money on health care. This assumption ignores critical factors — such as lack of trust in the health care system and lack of access to affordable health care — that tend to decrease spending on health care by Black patients regardless of the complexity of their health care needs.

Researchers make this kind of causation/correlation mistake all the time. But things are worse for a deep learning computer, which searches billions of possible correlations in order to find the most accurate way to predict data, and thus has billions of opportunities to make causal mistakes. Complicating the issue further, it is very hard, even with modern tools such as Shapley analysis, to understand why such a mistake was made — a human data scientist sitting in a lab with their supercomputer can never deduce from the data itself what the causation mistakes may be. This is why, among scientists, it is never acceptable to claim to have found a causal relationship in nature just by passively looking at data. You must formulate a hypothesis and then conduct an experiment in order to tease out the causation.
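The lice anecdote above can be turned into a toy simulation (all probabilities are invented for illustration): a hidden common cause, fever, drives both sickness and the absence of lice, so a purely correlational learner would conclude that lice protect against sickness.

```python
import random

random.seed(0)

# Simulate 10,000 people. Fever is the hidden common cause: it makes a
# person sick AND drives lice away (lice avoid hot hosts).
people = []
for _ in range(10_000):
    fever = random.random() < 0.3
    sick = fever
    has_lice = random.random() < (0.1 if fever else 0.6)
    people.append((has_lice, sick))

# Observed pattern: people without lice are sick far more often.
with_lice = [s for l, s in people if l]
without_lice = [s for l, s in people if not l]
sick_rate_lice = sum(with_lice) / len(with_lice)
sick_rate_no_lice = sum(without_lice) / len(without_lice)

print(f"sick rate with lice:    {sick_rate_lice:.2f}")
print(f"sick rate without lice: {sick_rate_no_lice:.2f}")
# A model trained only on (has_lice, sick) pairs would learn that lice
# "predict" health, even though the true cause, fever, never appears
# in the data it sees.
```

The correlation is real and reproducible, but acting on it (prescribing lice) would be the medieval mistake the article describes; only an intervention or domain knowledge about fever reveals the actual causal structure.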

Addressing these causal mistakes requires taking a step back. Computer scientists need to do more to understand and account for the underlying societal contexts in which these technologies are developed and deployed.

Here at Google, we started to lay the foundations for what this approach might look like. In a recent paper co-written by DeepMind, Google AI, and our Trust & Safety team, we argue that considering these societal contexts requires embracing the fact that they are dynamic, complex, non-linear, adaptive systems governed by hard-to-see feedback mechanisms. We all participate in these systems, but no individual person or algorithm can see them in their entirety or fully understand them. So, to account for these inevitable blindspots and innovate responsibly, technologists must collaborate with stakeholders — representatives from sociology, behavioral science, and the humanities, as well as from vulnerable communities — to form a shared hypothesis of how they work. This process should happen at the earliest stages of product development — even before product design starts — and be done in full partnership with communities most vulnerable to algorithmic bias.

This participatory approach to understanding complex social systems — called community-based system dynamics (CBSD) — requires building new networks to bring these stakeholders into the process. CBSD is grounded in systems thinking and incorporates rigorous qualitative and quantitative methods for collaboratively describing and understanding complex problem domains, and we’ve identified it as a promising practice in our research. Building the capacity to partner with communities in fair and ethical ways that provide benefits to all participants needs to be a top priority. It won’t be easy. But the societal insights gained from a deep understanding of the problems that matter most to the most vulnerable in society can lead to technological innovations that are safer and more beneficial for everyone.

Shifting from a mindset of “building because we can” to “building what we should.”

When communities are underrepresented in the product development design process, they are underserved by the products that result. Right now, we’re designing what the future of AI will look like. Will it be inclusive and equitable? Or will it reflect the most unfair and unjust elements of our society? The more just option isn’t a foregone conclusion — we have to work towards it. Our vision for the technology is one where a full range of perspectives, experiences, and structural inequities are accounted for. We work to seek out and include these perspectives in a range of ways, including human rights diligence processes, research sprints, and direct input from vulnerable communities and organizations focused on inclusion, diversity, and equity, such as WiML (Women in Machine Learning) and Latinx in AI; others, such as Black in AI and Queer in AI, were co-founded and are co-led by Google researchers.

If we, as a field, want this technology to live up to our ideals, then we need to change how we think about what we’re building — to shift our mindset from “building because we can” to “building what we should.” This means fundamentally shifting our focus to understanding deep problems and working to ethically partner and collaborate with marginalized communities. This will give us a more reliable view of both the data that fuels our algorithms and the problems we seek to solve. This deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable, and socially beneficial.

Article link: https://hbr.org/2020/10/ai-engineers-need-to-think-beyond-engineering?

Donald Martin, Jr. is Sr. Staff Technical Program Manager and Social Impact Technology Strategist at Google.


Andrew Moore is Head of Google Cloud AI & Industry Solutions.

The Future of Jobs Report 2020 – World Economic Forum

Posted by timmreardon on 10/28/2020
Posted in: Uncategorized.

Download PDF

The Future of Jobs report maps the jobs and skills of the future, tracking the pace of change. It aims to shed light on the pandemic-related disruptions in 2020, contextualized within a longer history of economic cycles and the expected outlook for technology adoption, jobs and skills in the next five years.

Online Report link: https://www.weforum.org/reports/the-future-of-jobs-report-2020#report-nav

When Does Predictive Technology Become Unethical? – HBR

Posted by timmreardon on 10/28/2020
Posted in: Uncategorized.

by Eric Siegel

October 23, 2020

Machine learning can ascertain a lot about you — including some of your most sensitive information. For instance, it can predict your sexual orientation, whether you’re pregnant, whether you’ll quit your job, and whether you’re likely to die soon. Researchers can predict race based on Facebook likes, and officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group.

Now, do the machines actually “know” these things about you, or are they only making informed guesses? And, if they’re making an inference about you, just the same as any human you know might do, is there really anything wrong with them being so astute?

Let’s look at a few cases:

In the U.S., the story of Target predicting who’s pregnant is probably the most famous example of an algorithm making sensitive inferences about people. In 2012, a New York Times story about how companies can leverage their data included an anecdote about a father learning that his teenage daughter was pregnant because Target had sent her coupons for baby items in an apparent act of premonition. The story about the teenager may be apocryphal — and even if it did happen, coincidence, not predictive analytics, was most likely responsible for the coupons, according to Target’s process as detailed in the Times story. Even so, this kind of predictive project carries a real risk to privacy. After all, if a company’s marketing department predicts who’s pregnant, it has ascertained medically sensitive, unvolunteered data that only health care staff are normally trained to appropriately handle and safeguard.

Mismanaged access to this kind of information can have huge implications on someone’s life. As one concerned citizen posted online, imagine that a pregnant woman’s “job is shaky, and [her] state disability isn’t set up right yet…to have disclosure could risk the retail cost of a birth (approximately $20,000), disability payments during time off (approximately $10,000 to $50,000), and even her job.”

This isn’t a case of mishandling, leaking, or stealing data. Rather, it is the generation of new data — the indirect discovery of unvolunteered truths about people. Organizations can predict these powerful insights from existing innocuous data, as if creating them out of thin air.

So are we ironically facing a downside when predictive models perform too well? We know there’s a cost when models predict incorrectly, but is there also a cost when they predict correctly?

Even if the model isn’t highly accurate, per se, it may still be confident in its predictions for a certain group of pregnant individuals. Let’s say that 2% of the female customers between age 18 and 40 are pregnant. If the model identifies customers, say, three times more likely than average to be pregnant, only 6% of those identified will actually be pregnant. That’s a lift of three. But if you look at a much smaller, focused group, say the top 0.1% likely to be pregnant, you may have a much higher lift of, say, 46, which would make women in that group 92% likely to be pregnant. In that case, the system would be capable of revealing those women as very likely to be pregnant.
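The lift arithmetic here can be checked with a short sketch (the 2% base rate and the lift values of 3 and 46 are the article’s illustrative numbers, not real retail data):

```python
def precision_from_lift(base_rate, lift):
    # Among the customers a model flags, the share who are truly
    # positive is the population base rate scaled by the model's lift.
    return base_rate * lift

base_rate = 0.02  # 2% of female customers aged 18-40 are pregnant

# A lift of 3 across the whole flagged group: only 6% actually pregnant.
print(f"{precision_from_lift(base_rate, 3):.0%}")

# The top 0.1% by model score may show a lift of 46,
# making those customers 92% likely to be pregnant.
print(f"{precision_from_lift(base_rate, 46):.0%}")
```

The point of the sketch is that a model with modest average accuracy can still be near-certain about a small, high-scoring slice of the population, and that slice is where the privacy exposure concentrates.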

The same concept applies when predicting sexual orientation, race, health status, location, and your intentions to leave your job. Even if a model isn’t highly accurate in general, it can still reveal with high confidence — for a limited group — things like sexual orientation, race, or ethnicity. This is because, typically, there is a small portion of the population for whom it is easier to predict. Now, it may only be able to predict confidently for a relatively small group, but even just the top 0.1% of a population of a million would mean 1,000 individuals have been confidently identified.

It’s easy to think of reasons why people wouldn’t want someone to know these things. As of 2013, Hewlett-Packard was predictively scoring more than 300,000 of its workers on the probability that they’d quit their jobs — HP called this the Flight Risk score, and it was delivered to managers. If you’re planning to leave, your boss would probably be the last person you’d want to find out before it’s official.

As another example, facial recognition technologies can serve as a way to track location, eroding the fundamental freedom to move about without disclosure, since, for example, publicly positioned security cameras can identify people at specific times and places. I certainly don’t sweepingly condemn facial recognition, but know that the CEOs of both Microsoft and Google have publicly criticized it for this reason.

In yet another example, a consulting firm was modeling employee loss for an HR department, and noticed that they could actually model employee deaths, since that’s one way you lose an employee. The HR folks responded with, “Don’t show us!” They didn’t want the liability of potentially knowing which employees were at risk of dying soon.

Research has shown that predictive models can also discern other personal attributes — such as race and ethnicity — based on, for example, Facebook likes. A concern here is the ways in which marketers may be making use of these sorts of predictions. As Harvard professor of government and technology Latanya Sweeney put it, “At the end of the day, online advertising is about discrimination. You don’t want mothers with newborns getting ads for fishing rods, and you don’t want fishermen getting ads for diapers. The question is when does that discrimination cross the line from targeting customers to negatively impacting an entire group of people?” Indeed, a study by Sweeney showed that Google searches for “black-sounding” names were 25% more likely to show an ad suggesting that the person had an arrest record, even if the advertiser had nobody with that name in their database of arrest records.

“If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity,” says Clare Garvie, senior associate at the Center on Privacy and Technology at Georgetown Law.

Which brings us to China, where the government applies facial recognition to identify and track members of the Uighurs, an ethnic group systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. This flagging of individuals by ethnic group is designed specifically to be used as a factor in discriminatory decisions — that is, decisions based at least in part on a protected class. In this case, members of this group, once identified, will be treated or considered differently on the basis of their ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize “sensitive groups of people.” Its website said, “If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms” to law enforcement.

Implementing the differential treatment of an ethnic group based on predictive technology takes the risks to a whole new level. Jonathan Frankle, a deep learning researcher at MIT, warns that this potential extends beyond China. “I don’t think it’s overblown to treat this as an existential threat to democracy. Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion… To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

It’s a real challenge to draw the line as to which predictive objectives pursued with machine learning are unethical, let alone which should be legislated against, if any. But, at the very least, it’s important to stay vigilant for when machine learning serves to empower a preexisting unethical practice, and also for when it generates data that must be handled with care.

Article link: https://hbr.org/2020/10/when-does-predictive-technology-become-unethical?

DHA’s new Joint Operations Center serves as essential integration hub – Health.mil

Posted by timmreardon on 10/28/2020
Posted in: Uncategorized. Leave a comment

The Defense Health Agency relies on rapid information exchange to provide combat support to U.S. military forces stationed worldwide. With many changes taking place in military health care and the ongoing battle against the COVID-19 pandemic, integrated communication from the bottom to the top, and vice versa, remains essential for service members, civilians, stakeholders and beneficiaries.

The Joint Operations Center (JOC) acts as a conduit for this information. As the information hub of DHA, the JOC collects requests and data from across the Military Health System, including the Joint Staff, combatant commands, and a variety of other military organizations. The JOC then provides that information to the proper authorities in DHA to take action.

“By ensuring that the JOC is getting current and accurate information, we can make sure that the DHA leadership provides our primary stakeholders with the best resources we can offer,” said Neil Doherty, head of the Operations Division at DHA.

DHA recently finished construction of a new facility for the JOC within DHA headquarters. Formerly operating from a series of reserved conference rooms, the JOC is now co-located in one space that improves its capabilities. The facility offers advanced screens to monitor and project information, the ability to host video calls with remote locations through VTC, and a secure space to hold meetings with high-ranking leaders.

Jerry Vignon, manager of DHA’s continuity program, emphasized the importance of the monitoring screens in the JOC’s day-to-day activities. The screens allow the JOC to display different types of information and data feeds, from open source material like news outlets and weather services to health surveillance information – all of which are vital to DHA leadership, he explained. This capability keeps the JOC up to date without information being ferried to and from various offices.

“If something does hit the airwaves, we are aware of it right away,” Vignon said. “It gives us more of a real-time connectivity to information that’s helpful for us as we’re responding to events around the world.”

While each of the services and combatant commands has a JOC of its own, DHA’s unique operations center brings all of them online together in one secure location to discuss issues pertinent to military health care. Air Force Col. Jennifer Garrison, deputy chief and deputy assistant director for Combatant Command Operational Support at DHA, shared how the JOC’s Joint Staff calls help connect the services in order to provide the best possible healthcare for our warfighters and their families.

“You have Joint Staff, the combatant commands, and all the services on call at the same time,” she explained. “It’s a way for them to ensure that if there’s a concern like a pandemic or any contingency, that we have 24/7 coverage to keep information flowing to make critical decisions collectively with our stakeholders in a timely manner,” she said.

This collaboration has proven to be essential during the DHA’s support of the national emergency in response to the COVID-19 pandemic. The JOC stood up a crisis action team, or CAT, made up of subject matter experts who work together to share information on wide-impact, unplanned events like the pandemic. The team met regularly to funnel information about the novel coronavirus and COVID-19 throughout the Military Health System so that leaders at all levels could make necessary decisions for their commands to fight the pandemic.

“The JOC is the first line of defense when it comes to working with the Joint Staff, combatant commands, and services for support and knowing all the critical information to improve on readiness, safety, quality, and patient expectations across the Military Health System,” Garrison said. “So, if they need something for COVID-19, whether it is flu immunizations, blood, equipment, or supplies, that’s able to be communicated at this level.”

Now that they have the facility to support around the clock operations, Doherty said that the next step is to fully staff the center.

“The goal is to get staffed up to support 24/7 operations so that we are connected to all of the organizations in the Department of Defense across the globe,” said Don Dahlheimer, deputy assistant director for Combatant Command Operational Support at DHA. “It’s a great opportunity for the agency to efficiently collaborate with our DoD, component, and interagency stakeholders, and across the DHA in support of our beneficiaries and warfighters.”

“We want to ensure the best possible care from the battlefield to our military medical treatment facilities, and this JOC supports collaborative communications,” concluded Dahlheimer.

Article link: https://health.mil/News/Articles/2020/10/14/DHAs-new-Joint-Operations-Center-serves-as-essential-integration-hub

Examining VA’s Ongoing Efforts in the Electronic Health Record Modernization Program

Posted by timmreardon on 10/01/2020
Posted in: Uncategorized. Leave a comment

Explainer: What is quantum communication? – MIT Technology Review

Posted by timmreardon on 09/27/2020
Posted in: Uncategorized. Leave a comment

Researchers and companies are creating ultra-secure communication networks that could form the basis of a quantum internet. This is how it works.

by Martin Giles

This is the second in a series of explainers on quantum technology. The other two are on quantum computing and post-quantum cryptography.

Barely a week goes by without reports of some new mega-hack that’s exposed huge amounts of sensitive information, from people’s credit card details and health records to companies’ valuable intellectual property. The threat posed by cyberattacks is forcing governments, militaries, and businesses to explore more secure ways of transmitting information.

Today, sensitive data is typically encrypted and then sent across fiber-optic cables and other channels together with the digital “keys” needed to decode the information. The data and the keys are sent as classical bits—a stream of electrical or optical pulses representing 1s and 0s. And that makes them vulnerable. Smart hackers can read and copy bits in transit without leaving a trace.

Quantum communication takes advantage of the laws of quantum physics to protect data. These laws allow particles—typically photons of light for transmitting data along optical cables—to take on a state of superposition, which means they can represent multiple combinations of 1 and 0 simultaneously. The particles are known as quantum bits, or qubits.

The beauty of qubits from a cybersecurity perspective is that if a hacker tries to observe them in transit, their super-fragile quantum state “collapses” to either 1 or 0. This means a hacker can’t tamper with the qubits without leaving behind a telltale sign of the activity.

Some companies have taken advantage of this property to create networks for transmitting highly sensitive data based on a process called quantum key distribution, or QKD. In theory, at least, these networks are ultra-secure.

What is quantum key distribution?

QKD involves sending encrypted data as classical bits over networks, while the keys to decrypt the information are encoded and transmitted in a quantum state using qubits.

Various approaches, or protocols, have been developed for implementing QKD. A widely used one known as BB84 works like this. Imagine two people, Alice and Bob. Alice wants to send data securely to Bob. To do so, she creates an encryption key in the form of qubits whose polarization states represent the individual bit values of the key.

The qubits can be sent to Bob through a fiber-optic cable. By comparing measurements of the state of a fraction of these qubits—a process known as “key sifting”—Alice and Bob can establish that they hold the same key.

As the qubits travel to their destination, the fragile quantum state of some of them will collapse because of decoherence. To account for this, Alice and Bob next run through a process known as “key distillation,” which involves calculating whether the error rate is high enough to suggest that a hacker has tried to intercept the key.

If it is, they ditch the suspect key and keep generating new ones until they are confident that they share a secure key. Alice can then use hers to encrypt data and send it in classical bits to Bob, who uses his key to decode the information.
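The sifting step above can be illustrated with a toy simulation. This is a sketch, not a real QKD implementation: measuring a qubit in the wrong basis is modeled as a coin flip, and an optional intercept-resend eavesdropper shows why Eve's presence pushes the error rate in the sifted key toward 25%, which is what key distillation detects.

```python
import random

def bb84(n_qubits, eavesdrop=False, seed=0):
    """Toy BB84 sketch. Basis 0 = rectilinear, 1 = diagonal; measuring
    in the wrong basis yields a uniformly random bit."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]

    # Optional intercept-resend attack: Eve measures each qubit in a
    # random basis and resends it, collapsing any state she guessed wrong.
    in_transit = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            bit = bit if eve_basis == basis else rng.randint(0, 1)
            basis = eve_basis
        in_transit.append((bit, basis))

    # Bob measures each arriving qubit in a randomly chosen basis.
    bob_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bits = [bit if basis == b_basis else rng.randint(0, 1)
                for (bit, basis), b_basis in zip(in_transit, bob_bases)]

    # Key sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors
```

Without Eve, the sifted key has zero errors; with Eve, roughly a quarter of the sifted bits disagree, which is the telltale sign Alice and Bob look for during key distillation.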

We’re already starting to see more QKD networks emerge. The longest is in China, which boasts a 2,032-kilometer (1,263-mile) ground link between Beijing and Shanghai. Banks and other financial companies are already using it to transmit data. In the US, a startup called Quantum Xchange has struck a deal giving it access to 500 miles (805 kilometers) of fiber-optic cable running along the East Coast to create a QKD network. The initial leg will link Manhattan with New Jersey, where many banks have large data centers.

Although QKD is relatively secure, it would be even safer if it could count on quantum repeaters.

What is a quantum repeater?

Materials in cables can absorb photons, which means they can typically travel for no more than a few tens of kilometers. In a classical network, repeaters at various points along a cable are used to amplify the signal to compensate for this.
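The loss is exponential in distance: standard telecom fiber attenuates roughly 0.2 dB per kilometer, so the fraction of surviving photons falls off quickly. A small sketch (the helper name is mine):

```python
def surviving_fraction(length_km, loss_db_per_km=0.2):
    """Fraction of photons surviving a fiber run, using the standard
    decibel attenuation model; 0.2 dB/km is typical of telecom fiber."""
    return 10 ** (-loss_db_per_km * length_km / 10)

# After 100 km only about 1% of photons arrive, which is why QKD
# links need waystations every few tens of kilometers.
fraction_at_100km = surviving_fraction(100)
```

At 50 km roughly 10% of photons survive, and at 100 km only about 1%, consistent with the few-tens-of-kilometers range quoted above.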

QKD networks have come up with a similar solution, creating “trusted nodes” at various points. The Beijing-to-Shanghai network has 32 of them, for instance. At these waystations, quantum keys are decrypted into bits and then reencrypted in a fresh quantum state for their journey to the next node. But this means trusted nodes can’t really be trusted: a hacker who breached the nodes’ security could copy the bits undetected and thus acquire a key, as could a company or government running the nodes.

Ideally, we need quantum repeaters, or waystations with quantum processors in them that would allow encryption keys to remain in quantum form as they are amplified and sent over long distances. Researchers have demonstrated it’s possible in principle to build such repeaters, but they haven’t yet been able to produce a working prototype.

There’s another issue with QKD. The underlying data is still transmitted as encrypted bits across conventional networks. This means a hacker who breached a network’s defenses could copy the bits undetected, and then use powerful computers to try to crack the key used to encrypt them.

The most powerful encryption algorithms are pretty robust, but the risk is big enough to spur some researchers to work on an alternative approach known as quantum teleportation.

What is quantum teleportation?

This may sound like science fiction, but it’s a real method that involves transmitting data wholly in quantum form. The approach relies on a quantum phenomenon known as entanglement.

Quantum teleportation works by creating pairs of entangled photons and then sending one of each pair to the sender of data and the other to a recipient. When Alice receives her entangled photon, she lets it interact with a “memory qubit” that holds the data she wants to transmit to Bob. This interaction changes the state of her photon, and because it is entangled with Bob’s, the interaction instantaneously changes the state of his photon too.

In effect, this “teleports” the data in Alice’s memory qubit from her photon to Bob’s. The graphic below lays out the process in a little more detail:
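The protocol can be made concrete with a small statevector simulation (a sketch in NumPy; the qubit ordering and helper names are my own): Alice entangles her data qubit with her half of a Bell pair and measures in the Bell basis, and Bob's two possible single-qubit corrections recover the original state for every one of the four measurement outcomes.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Kronecker product of a sequence of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on n qubits as a permutation matrix; qubit 0 is the
    most significant bit of the state index."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

def teleport(alpha, beta):
    """Teleport the normalized state alpha|0> + beta|1> from Alice to
    Bob; return Bob's corrected qubit for each measurement outcome.
    Qubits: q0 = Alice's data, q1 = Alice's half of the Bell pair,
    q2 = Bob's half."""
    psi = np.array([alpha, beta], dtype=complex)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    state = np.kron(psi, bell)

    state = cnot(0, 1) @ state        # Alice entangles her data qubit...
    state = kron(H, I, I) @ state     # ...and rotates into the Bell basis

    results = {}
    for m0 in (0, 1):                 # enumerate Alice's possible outcomes
        for m1 in (0, 1):
            # Project q0, q1 onto |m0 m1> and read off Bob's qubit
            block = state.reshape(2, 2, 2)[m0, m1, :]
            bob = block / np.linalg.norm(block)
            # Bob's classical corrections: X if m1 == 1, then Z if m0 == 1
            if m1: bob = X @ bob
            if m0: bob = Z @ bob
            results[(m0, m1)] = bob
    return results
```

Whichever of the four outcomes Alice observes, Bob ends up holding exactly alpha|0> + beta|1> after his corrections, even though only two classical bits crossed the channel.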

Researchers in the US, China, and Europe are racing to create teleportation networks capable of distributing entangled photons. But getting them to scale will be a massive scientific and engineering challenge. The many hurdles include finding reliable ways of churning out lots of linked photons on demand, and maintaining their entanglement over very long distances—something that quantum repeaters would make easier.

Still, these challenges haven’t stopped researchers from dreaming of a future quantum internet.

What is a quantum internet?

Just like the traditional internet, this would be a globe-spanning network of networks. The big difference is that the underlying communications networks would be quantum ones.

It isn’t going to replace the internet as we know it today. Cat photos, music videos, and a great deal of non-sensitive business information will still move around in the form of classical bits. But a quantum internet will appeal to organizations that need to keep particularly valuable data secure. It could also be an ideal way to connect information flowing between quantum computers, which are increasingly being made available through the computing cloud.

China is in the vanguard of the push toward a quantum internet. It launched a dedicated quantum communications satellite called Micius a few years ago, and in 2017 the satellite helped stage the world’s first intercontinental, QKD-secured video conference, between Beijing and Vienna. A ground station already links the satellite to the Beijing-to-Shanghai terrestrial network. China plans to launch more quantum satellites, and several cities in the country are laying plans for municipal QKD networks.

Some researchers have warned that even a fully quantum internet may ultimately become vulnerable to new attacks that are themselves quantum based. But faced with the hacking onslaught that plagues today’s internet, businesses, governments, and the military are going to keep exploring the tantalizing prospect of a more secure quantum alternative.

Article link: https://www.technologyreview.com/2019/02/14/103409/what-is-quantum-communications/?

Top 10 Strategic Technology Trends for 2020: Distributed Cloud – MIT Tech Review

Posted by timmreardon on 09/27/2020
Posted in: Uncategorized. 1 Comment

Published 10 March 2020 – ID G00450641 – 17 min read

Enterprises are advancing use cases of cloud computing in ways that deliver it at the point of need using distributed cloud. Enterprise architecture and technology innovation leaders must identify and exploit evolving models of cloud computing deployment to capture business opportunities.

Overview

Key Findings

  • Distributed cloud is the first cloud model that incorporates physical location of cloud-delivered services as part of its definition.
  • Distributed cloud fixes discontinuities in the cloud value chain that often exist in hybrid cloud models. Cloud providers are following different approaches and models to solve such issues.
  • Distributed cloud will emerge in phases. In the first phase, enterprises will deploy and consume it as a packaged, location-bound, distributed cloud offering. In the second phase, third parties such as telcos and city governments will become involved.
  • New advanced use cases and more sophisticated uses of cloud computing are increasing the array of cloud services available to IT professionals. Each of these distributed cloud architectures offers a different set of trade-offs, often based on proximity, control, scalability and breadth of services available.

Recommendations

Enterprise architecture and technology innovation leaders assessing strategic technology trends for their impact and potential for competitive advantage must:

  • Use distributed cloud models as an opportunity to prepare for the next generation of cloud computing by targeting location-dependent use cases.
  • Overcome deficiencies in private and hybrid cloud implementations by using the like-for-like hybrid nature of distributed cloud.
  • Identify use cases for future phases of distributed cloud (such as low latency, tethered scale and data residency) that are enhanced by using distributed cloud “substations.”
  • Investigate making cloud providers responsible for cloud operations, even on-premises, to overcome the failures and shortcomings of today’s private and hybrid cloud computing.

Strategic Planning Assumption

By 2024, most cloud service platforms will provide at least some distributed cloud services that execute at the point of need.

Analysis

Why Distributed Cloud Is a Top 10 Trend

As more people are using cloud computing, they are using it for more advanced use cases. And vendors are delivering cloud capabilities in more nuanced and intelligent ways, recognizing new customer value in new business cases.

Distributed cloud is the answer to the question “What is the future of cloud computing?” It refers to the distribution of public cloud services to different physical locations while the operation, governance and evolution of the services remain the responsibility of the public cloud provider. As with anything that describes the future, distributed cloud is based on origins visible today. The distributed cloud brings aspects of worldwide public cloud regions, hybrid cloud and edge computing to the original world of cloud computing (see Figure 1).

Figure 1. Distributed Cloud

We have identified distributed cloud as a top 10 strategic technology trend for 2020 because of the importance of cloud computing itself. Cloud computing underpins virtually all the candidates for “the next big thing,” including the other top 10 strategic technology trends.

Where Distributed Cloud Fits in the Top 10

This trend is part of the smart spaces category (see Figure 2), along with empowered edge, autonomous things, practical blockchain and AI security.

Figure 2. Where Distributed Cloud Fits in the Top 10 List of Strategic Technology Trends

Distributed cloud has much synergy with three of our other top 10 strategic technology trends. This goes beyond the basic “requires cloud to work” reality of these three trends:

  • Empowered edge. Edge devices will exploit distributed cloud systems located everywhere from adjacents to endpoints (for example, on gateways and on-premises microdata centers) through to remote cloud regions.
  • Practical blockchain. As blockchain matures, more processing will occur at the edge and elsewhere. However, many of these environments have restricted computing power, slow networking and limited data storage capabilities. They will increasingly rely on capabilities powered by distributed cloud.
  • AI security. It won’t be possible to monitor and manage the vast number of future edge devices manually. AI-based security systems will be essential to identify anomalous behavior by distributed capabilities.

Location is a key factor in the successful deployment and consumption of these three top 10 technologies. Distributed cloud will be a foundation on which the power of cloud can be delivered in the required locations to support the other top 10 technologies.

Distributed Cloud Explained

Distributed cloud’s distribution of public cloud services to different physical locations represents a significant shift from the virtually centralized model of most public cloud services and the model associated with the general cloud concept. It will lead to a new era in cloud computing.

Gartner defines cloud computing as a style of computing in which elastically scalable IT-enabled capabilities are delivered as a service using internet technologies. This definition makes no mention of location. Cloud computing has long been viewed as synonymous with a “centralized” service running in the provider’s data center. However, it would be better to view it as a logically centralized or unified service. Private and hybrid cloud options complement this public cloud model. Private cloud refers to the creation of cloud services dedicated to individual companies, often running in their own data centers. Hybrid cloud refers to the integration of private and public cloud services to support parallel, integrated or complementary tasks.

Location is a key part of the distributed cloud concept. Distributed cloud distributes capabilities to different locations. Deploying cloud services in a distributed fashion provides stronger support for a continuum of cloud services from the central public cloud out to edge devices and scenarios. The ability to access cloud services running on edge devices enables the allocation of cloud resources to different use cases. It enables different connectivity requirements to be met for individual devices; nearby sites; and communities, cities, countries or entire regions. Distributed cloud can also address different physical security and ruggedness requirements. This continuum unifies cloud, edge and disconnected deployment use cases for cloud services, devices and data in a unified strategy.

Distributed Cloud Has Three Origins: Public Cloud Regions, Hybrid Cloud and Edge Computing

Public Cloud Regions

In hyperscale public cloud implementations, the public cloud is the “center of the universe.” However, cloud services have been distributed worldwide in the public cloud almost since its inception. Providers have different regions around the world, all centrally controlled, managed and provided by one public cloud provider.

The location of the cloud services is a critical component of the distributed cloud computing model. Historically, location has not been relevant to cloud definitions, but issues related to it are important in many situations. Location may be important for a variety of reasons, including data sovereignty, and for latency-sensitive use cases. In these scenarios, the distributed cloud service provides organizations with the capabilities of a public cloud service delivered in a location that meets their requirements.

Hybrid Cloud

The aim of the hybrid cloud concept has been to blend external services from a provider and internal services running on-premises in an optimized, efficient and cost-effective manner.

However, implementing a private cloud is hard. Hybrid cloud computing requires both public and private clouds. Most private cloud projects do not deliver the cloud outcomes and benefits organizations seek. Also, most of the conversations Gartner has with clients about hybrid cloud are not about true hybrid cloud scenarios. Instead, they are about hybrid IT scenarios in which noncloud technologies are used with public cloud services in a spectrum of cloud-like models. This is referred to as cloud-inspired (see “Four Types of Cloud Computing Describe a Spectrum of Cloud Value”). Hybrid IT and true hybrid cloud options are valid approaches, and we recommend them for some use cases. But most hybrid cloud styles break many of the cloud computing value propositions and fail to:

  • Shift the responsibility and work of running hardware and software infrastructure to cloud providers
  • Exploit the economics of cloud elasticity (scaling up and down) from a large pool of shared resources
  • Benefit from the pace of innovation in sync with the public cloud providers
  • Use the cost economics of global hyperscale services
  • Employ the skills of large cloud providers to secure and operate world-class services

The Packaging of Hybrid Cloud

The next generation of hybrid (and private) cloud is packaged and solves many of the problems with hybrid cloud. Packaged hybrid cloud refers to a vendor-provided private cloud offering that is packaged and connected to a public cloud in a tethered way. Two main approaches exist to packaged hybrid cloud: “like-for-like” hybrid and “layered technology” hybrid (spanning different technology bases).

  1. The like-for-like hybrid approach is typified by Microsoft Azure and Azure Stack. Azure Stack is not the same as Azure in the public cloud. It is a subset, but delivers a set of capabilities that mirror the services in the Azure public cloud. AWS Outposts, another example, can be used in a managed private cloud mode (where no other companies have access). It represents an example of the like-for-like approach. However, the broader strategy represented by AWS Outposts would encourage a more distributed model in which each Outposts deployment is opened to near neighbors. Like-for-like solutions provide the “full stack,” but not necessarily the hardware, all managed by a single vendor.
  • In the Azure Stack approach, the customer buys and owns a hardware platform. The cloud software layer is delivered with a subset of the provider’s public cloud services. In this scenario, the cloud provider does not usually take full responsibility for the ongoing operations, maintenance or updating of the underlying hardware platform. The cloud provider may have only partial responsibility for the software. Users are responsible, doing it themselves or using a managed service provider.
  • In the AWS Outposts model, a full appliance comprising both hardware and software is delivered to the customer. The cloud provider takes responsibility for supporting and maintaining the hardware and software. The customer provides the physical facility in which the system is hosted, but otherwise the cloud provider effectively runs the appliance as an extension of its central cloud service.
  • Although a software approach provides a like-for-like model between the public service and the on-premises implementation, the other challenges with the hybrid cloud remain. Some customers consider it an advantage that they control service updates.
  2. The layered technology hybrid approach is based on integration of different underlying technologies, platforms and capabilities — creating a portability layer of sorts. This is where Google and IBM (and others) have focused — Google with Anthos (formerly its cloud services platform) and IBM with Red Hat and OpenShift.
  • In this approach, the provider delivers a portability layer typically built on Kubernetes as the foundation for services across a distributed environment. In some cases, the portability layer simply uses containers to support execution of a containerized application. In other cases, the provider delivers some of its cloud services as containerized services that can run in the distributed environment. The portability approach ignores the ownership and management of the underlying hardware platform, which remains the responsibility of the customer.

Combined and other approaches exist. In these, the provider delivers a like-for-like version of some of its cloud services in a hardware/software combination, and the provider commits to managing and updating the service. This reduces the burden on the service consumer who can view the service as a “black box.” However, some customers will be uncomfortable giving up all control of the underlying hardware and software update cycles.

Distributed Cloud Delivers on the Hybrid Cloud Promise

The distributed cloud extends beyond cloud-provider-owned data centers (for example, the model in which cloud providers have different regions). In the distributed cloud, the originating public cloud provider is responsible for all aspects of cloud service architecture, delivery, operations, governance and updates. This restores cloud value propositions that are broken when customers are responsible for a part of the delivery, as is usually the case in hybrid cloud scenarios. The cloud provider does not need to own the hardware on which the distributed cloud service is installed. But in a full implementation of the distributed cloud model, the cloud provider must take full responsibility for how that hardware is managed and maintained.

Edge Computing

The fundamental notion of the distributed cloud is that the public cloud provider is responsible for the design, architecture, delivery, operation, maintenance, updates and ownership, often including the underlying hardware. However, as solutions move closer to the edge, it is often not desirable or feasible for the provider to own the entire stack of technology. As these services are distributed onto operational systems (for example, a power plant or wind farm), the consuming organization may not want to give up ownership and management of the physical plant to an outsider provider. But the consuming organization may be interested in a service that the provider delivers, manages and updates on such equipment. The same is true for mobile devices, smartphones and other client equipment. As a result, we expect a spectrum of delivery models will appear, with the provider accepting varying levels of ownership and responsibility.Another edge factor that will influence the distribution of public cloud services will be the capabilities of the edge, near-edge and far-edge platforms that may not need, or cannot run, a like-for-like service that mirrors that in the centralized cloud. Complementary services tailored to the target environment, such as a low-function Internet of Things (IoT) or storage device, will be part of the distributed cloud spectrum (for example, AWS IoT Greengrass, AWS Snowball and Azure Stack Edge). However, at a minimum, the cloud provider must design, architect, distribute, manage and update these services if they are to be viewed as part of the distributed cloud spectrum.The distributed cloud supports continuously connected and intermittently connected operation of like-for-like cloud services from the public cloud distributed to specific and varied locations. This enables low-latency service execution in which the cloud services are closer to the point of need in remote data centers or delivered all the way to the edge device itself. 
This can deliver major improvements in performance and reduce the risk of global network-related outages, as well as support occasionally connected scenarios. By 2024, most cloud service platforms will provide at least some services that execute at the point of need.
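The low-latency and occasionally connected behavior described above can be sketched in a few lines: route each request to the nearest reachable edge location, and fall back to the central cloud region when the edge is offline. This is a minimal illustration only; `Location`, `route_request` and all latency figures are hypothetical, not any provider's SDK.

```python
# Hypothetical sketch: latency-aware routing in a distributed cloud,
# with fallback to the central region during edge disconnections.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    latency_ms: float   # measured round-trip latency from the caller
    reachable: bool     # False during an intermittent disconnection

def route_request(locations, central):
    """Pick the reachable location with the lowest latency; fall back
    to the central cloud region if every edge location is offline."""
    candidates = [loc for loc in locations if loc.reachable]
    if not candidates:
        return central
    return min(candidates, key=lambda loc: loc.latency_ms)

edge = [Location("on-prem-substation", 4.0, True),
        Location("telco-near-edge", 12.0, True)]
central = Location("central-region", 85.0, True)
print(route_request(edge, central).name)  # on-prem-substation
```

In a real platform this decision would be made by the provider's control plane rather than application code, but the trade-off it expresses (proximity first, central cloud as the always-available fallback) is the same.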

The Evolution of Distributed Cloud

Distributed Cloud Phases

We expect that distributed cloud computing will happen in four phases:

  • Phase 1. A like-for-like hybrid mode in which the cloud provider delivers services in a distributed fashion that mirror a subset of services in its centralized cloud for delivery in the enterprise.
  • Phase 2. An extension of the like-for-like model in which the cloud provider teams with third parties to deliver a subset of its centralized cloud services to target communities through the third-party provider. An example is the delivery of services through a telecommunications provider or colocation provider to support data sovereignty requirements in smaller countries where the provider has no data centers.
  • Phase 3. Communities of organizations share distributed cloud substations. We use the term “substations” to evoke the image of subsidiary stations (like branch post offices) where people gather to use services. Cloud customers can gather at a distributed cloud substation to consume cloud services for common or varied reasons if it is open for community or public use. This improves the economics associated with paying for the installation and operation of a distributed cloud substation. As other companies use the substation, they can share the cost of the installation. We expect that third parties such as telecommunications service providers will consider creating substations in locations where the public cloud provider lacks a presence. If the substation is not open for use outside the organization that paid for its installation, then the substation represents a private cloud instance in a hybrid relationship with the public cloud.
  • Phase 4. Use of embedded and personal resources. Examples include the use of local processing on personal devices, embedded capabilities in smart buildings and components embedded in software packages or applications.
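The economics behind the Phase 3 substation model can be made concrete with a back-of-the-envelope calculation: the more tenants share a substation, the lower each one's annual cost. The function and all figures below are illustrative assumptions, not vendor pricing.

```python
# Illustrative model of Phase 3 economics: tenants sharing a
# distributed cloud substation split its installation and operating cost.

def annual_cost_per_tenant(install_cost, annual_opex, tenants, years):
    """Total lifetime cost of a substation, split evenly per tenant per year."""
    if tenants < 1 or years < 1:
        raise ValueError("need at least one tenant and one year")
    total = install_cost + annual_opex * years
    return total / (tenants * years)

# One organization bearing the whole cost vs. four organizations sharing it:
solo = annual_cost_per_tenant(1_200_000, 200_000, tenants=1, years=5)
shared = annual_cost_per_tenant(1_200_000, 200_000, tenants=4, years=5)
print(solo, shared)  # 440000.0 110000.0
```

Under these assumed figures, sharing the substation among four tenants cuts each tenant's annual cost to a quarter of the solo case, which is why open, community substations improve the economics relative to a private installation.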

Ironically, the distributed cloud takes something location-independent (cloud computing), introduces the importance of location and ultimately removes the concern about location. In its most complete form, a distributed cloud approach will enable an organization to specify its requirements (for example, compliance and security, budget, and capacity) to a cloud provider. The cloud provider will, increasingly in an automated way, generate the optimal configuration without requiring detailed location knowledge.

In addition to addressing regional, hybrid and edge issues, distributed cloud approaches will enable additional scenarios. These include dedicated connected implementations for governments and industry-specific community clouds, and potentially solutions that address geopolitical needs. Such geopolitical issues are leading to increasing national concerns about connections to the main internet, including censorship, security, privacy and data sovereignty. This “splintering” of the internet and cloud scenarios defies easy solutions — distributed cloud capabilities could help.
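The requirement-driven placement idea — an organization states constraints, the provider automatically selects a conforming location — can be sketched as a simple filter. The `Candidate` class, `place_service` function and all data below are hypothetical illustrations, not any provider's API.

```python
# Hypothetical sketch of requirement-driven placement: pick the cheapest
# candidate location that satisfies data residency, latency and budget.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    country: str          # where data is physically stored
    latency_ms: float     # expected latency from the consuming site
    monthly_cost: float

def place_service(candidates, residency, max_latency_ms, budget) -> Optional[Candidate]:
    """Return the cheapest candidate meeting all constraints, or None."""
    viable = [c for c in candidates
              if c.country == residency
              and c.latency_ms <= max_latency_ms
              and c.monthly_cost <= budget]
    return min(viable, key=lambda c: c.monthly_cost) if viable else None

sites = [
    Candidate("central-region", "US", 80.0, 900.0),
    Candidate("telco-substation", "DE", 15.0, 1400.0),
    Candidate("on-prem-rack", "DE", 3.0, 2100.0),
]
choice = place_service(sites, residency="DE", max_latency_ms=20.0, budget=1500.0)
print(choice.name)  # telco-substation
```

The point of the sketch is that the organization never names a location; it names requirements, and the cheapest conforming site falls out of the constraints.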

Paths to Distributed Cloud

The distributed cloud is in the early stages of development. Many providers aim to offer most of their public services in a distributed manner in the long term, but they currently provide only a subset — and often a small subset — of their services in a distributed way, and with limited consumption models (form factors). Some providers do not support the complete delivery, operation and update elements of a full distributed cloud. Providers are extending services to on-premises data centers, third-party data centers and the edge with offerings such as Microsoft Azure Stack, Oracle Cloud at Customer, Google Anthos, IBM Red Hat and AWS Outposts (and AWS Local Zones and AWS Wavelength).

Evaluate the potential benefits and challenges of the like-for-like and layered technology packaging approaches. Each involves challenges in fulfilling the vision of distributed cloud: the like-for-like approach tends to result in walled gardens, while the layered approach can be subject to the challenges of delivering portable, open software. Both approaches could lead to an open, fully managed, multicloud solution, but through different paths and with very different challenges.

Actions

Enterprise architecture and technology innovation leaders must:

  • Use distributed cloud models as an opportunity to prepare for the next generation of cloud computing by targeting location-dependent use cases.
  • Overcome deficiencies in private and hybrid cloud implementations by using the like-for-like hybrid nature of distributed cloud.
  • Identify use cases for future phases of distributed cloud (such as low latency, tethered scale and data residency) that are enhanced by using distributed cloud substations.
  • Identify scenarios where a distributed cloud model will remove the need for a “traditional” hybrid cloud model and where hybrid cloud models will continue to be needed for years.
  • Investigate making cloud providers responsible for cloud operations, even on-premises, to overcome the failures and shortcomings of today’s private and hybrid cloud computing.
  • Exploit the flexibility offered by the increased deployment options of cloud computing.

Appendix: The Other Top Strategic Technology Trends for 2020

For information on the other top strategic technology trends for 2020, see:

  • “Top 10 Strategic Technology Trends for 2020: Hyperautomation”
  • “Top 10 Strategic Technology Trends for 2020: Multiexperience”
  • “Top 10 Strategic Technology Trends for 2020: Democratization”
  • “Top 10 Strategic Technology Trends for 2020: Human Augmentation”
  • “Top 10 Strategic Technology Trends for 2020: Transparency and Traceability”
  • “Top 10 Strategic Technology Trends for 2020: Empowered Edge”
  • “Top 10 Strategic Technology Trends for 2020: Autonomous Things”
  • “Top 10 Strategic Technology Trends for 2020: Practical Blockchain”
  • “Top 10 Strategic Technology Trends for 2020: AI Security”

By David Smith, David Cearley, Ed Anderson, Daryl Plummer

Article link: https://www.gartner.com/doc/reprints?id=1-1ZNKBOCG&ct=200811&st=sb

Kaiser Permanente launches ‘virtual-first’ health plan in Washington – Healthcare IT News

Posted by timmreardon on 09/15/2020
Posted in: Uncategorized. 1 Comment

The plan will make telehealth a foundational modality of care, with the option for patients to follow up with in-person visits if necessary.

By Kat Jercich

September 14, 2020, 12:59 PM

In response to increasing patient demand for telehealth, Kaiser Permanente this week announced the launch of a new “virtual-first” healthcare plan in Washington state.

The plan, which will be available January 1, 2021, through Kaiser Foundation Health Plan of Washington’s direct-to-employer groups and consumers, will center telehealth as a foundational modality of care for patients with nonurgent issues.

“Virtual care is the health care of today and tomorrow,” said Dr. Paul Minardi, president and executive medical director of Washington Permanente Medical Group, in a statement. 

“The pandemic has reinforced the need to provide care in the most convenient, accessible, and safe way for our members, and that’s what Virtual Plus does,” he said.

WHY IT MATTERS

As with other providers, Kaiser Permanente says it has seen a huge uptick in telehealth use since the start of the pandemic. According to the company, it was one of the first healthcare organizations to deliver the majority of care via telemedicine.

The system’s options include its Consulting Nurse Service, Care Chat online messaging, and video and phone visits. About 65% of appointments are now conducted virtually.

The new health plan will allow members to reach out via phone, online chat, video or email for nonurgent issues. According to the company, patients will see the same doctors and clinicians as they would at any Kaiser Permanente facility, with their data available through electronic health records.

Members can get in touch with their clinicians virtually, with the option to come in person for follow-up visits, says the organization.

They have access to Kaiser Permanente pharmacists via telehealth and can get medications delivered in one to two days.

“With this new plan, we are innovating to give our members more convenient care options at a more affordable cost, respecting their choices and preferences,” said Joseph Smith, vice president of sales and business development, Kaiser Foundation Health Plan of Washington, in a statement.

THE LARGER TREND

Although much attention has been paid of late to any upcoming physician fee schedules for telehealth under Medicare and Medicaid, some private payers have also taken strides to center virtual care in the future of their coverage.

This summer, Blue Cross Blue Shield of Tennessee announced that it would make in-network telehealth services permanent.

Telehealth “has opened up another opportunity for people to get care in a different way,” said BCBS Tennessee SVP and Chief Medical Officer Dr. Andrea Willis. “We are continuing this expansion for our commercial population because it feels like it’s the right thing to do.”

ON THE RECORD

“We are excited to offer another care option to our members that continues our commitment of providing high-quality, convenient, and affordable health care,” said Smith.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.

Article link: https://www.healthcareitnews.com/news/kaiser-permanente-launches-virtual-first-health-plan-washington

Equality in the U.S. Starts with Better Jobs – HBR

Posted by timmreardon on 09/15/2020
Posted in: Uncategorized. Leave a comment

by Zeynep Ton

August 17, 2020

Executive Summary

The pandemic’s impact on frontline workers and recent incidents of police brutality have highlighted the urgent need to provide good jobs for people of color. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and lack of respect and dignity. And for too long, American employers have assumed these conditions are an inevitability of doing business rather than a deliberate choice. Based on her research, the author offers six steps that business leaders should take to help bring about economic and racial justice. These steps will strengthen their firms’ competitiveness as well.


Americans are demanding a reckoning. Incidents of police brutality and structural inequities that have caused the pandemic to hit people of color especially hard are sparking calls for racial justice. The precarious conditions endured by poorly paid frontline workers who have stayed on the job during the pandemic have generated calls for economic justice. Each of these forms of injustice has distinct drivers, but they amplify each other and often fall hardest on the same people. As Martin Luther King, Jr. reminded us, economic and racial justice are inextricably linked.

To address this critical moment, business leaders need to emulate those who, in the midst of World War II, committed to creating good jobs. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and lack of respect and dignity. And for too long, American employers have assumed these conditions are an inevitability of doing business rather than a deliberate choice. But my research shows that bad jobs are a choice, not a necessity, and that offering good jobs is also a choice — even a profit-maximizing choice — yes, even for companies that compete on low cost.

Here I describe why business leaders should care about having too many employees with low-wage “bad jobs” and six steps to begin transforming those jobs into good jobs.

The Central Problem: Too Many People Left Behind

Even in the pre-Covid world of low unemployment, 32% of the U.S. workforce — that’s 46.5 million people — worked in occupations with a median wage of less than $15 an hour. With Covid-19, we started calling many of these workers “essential.” Yet one wouldn’t know it from their wages. The median wage for a meatpacker is $14 an hour; it’s only $12 for a health aide.

Imagine a single parent working as a bank teller and earning the median hourly wage for that job of $15.02. At 40 hours a week, she would make $2,580 a month — $494 less than her rent, child care, transportation, food, and medical expenses. That’s not even counting personal care, clothing, leisure, cable/phone bills, housekeeping supplies, and unexpected expenses such as a broken cell phone. And remember, many service workers don’t even get 40 hours a week, so their annual pay is much lower. The Brookings Institution estimates that 53 million Americans make less than $17,950 a year.

Many live so close to the edge that they rely on government assistance. Twenty-five million working people and families received $61 billion in Earned Income Tax Credits in 2019. Forty percent of Americans, disproportionately Black, cannot absorb an unexpected $400 expense. Such families have no cushion when the full impact of Covid-19 hits, as has been made clear by the long lines at food banks.

This problem won’t be solved by upskilling or improving education — the current focus of the President’s Workforce Advisory Board. Future job growth is expected largely in low-wage jobs such as health aides, food and cleaning services, and laborer occupations. Most of the top 20 fastest-growing occupations — 55% of projected job growth — pay below the median wage. Think of the U.S. economy as an enormous ship with a hole in its hull. Those in the lower decks are at risk of drowning. Upskilling may move some of them to a dry deck, but there isn’t room there for all, and, anyway, the ship is still sinking. We need to fix the hole right now so no one drowns.

In those lower decks, the people most at risk of drowning are Black and Hispanic Americans, who are disproportionately represented in low-wage jobs. Black workers make up 25% and 37% of two of the fastest-growing (but low-wage) occupations: personal care aides and home health aides. Company leaders who have spoken against racial injustice, committed funds to address it, or hired chief diversity and inclusion officers should look at where people of color work in their company, as employees or contractors, and make sure their jobs allow them to live with dignity.

The Vicious Cycle of Poverty Hurts U.S. Society and Business

Low-wage workers live in a vicious cycle that prevents them from moving up. Many work multiple jobs. The associated stress undermines mental and physical health. Indeed, that stress lowers cognitive functioning, creating a “bandwidth tax” equal to a loss of 13 IQ points. Performance suffers as it is harder to keep up good attendance, focus on the job, be productive, and do your best for customers or coworkers. Unsurprisingly, these workers find it hard to climb the ladder of opportunity that this country has historically provided.

My research shows that the vicious cycle for low-wage workers is also a vicious cycle for their companies. Poor attendance and high turnover lead to operational problems that undermine sales and profits. Reduced profits, in turn, prevent companies from investing more in their workers, causing yet more instability for the companies and their workers. I’ve observed racial tensions to be worse at companies operating in this vicious cycle. People feel less respect and treat others, including customers, with less respect. Managers are busy fighting fires rather than leading their people. I’ve also observed once-successful companies go out of business at least partly on account of this vicious cycle.

If there ever was a lose-lose for workers and companies, bad jobs with low wages are it.

What we often fail to see is that wages are not just a neutral market valuation of what a job is worth. The wages themselves affect the quality of that work and therefore the worker’s productive value and career prospects. Put a capable person in a position where she must work two low-wage jobs — with uncertainty about her schedule, fear that she will not make rent, and little or no support from management to do a really good job — and she will likely resign herself to mediocre or poor performance.

This vicious cycle can be reversed. We already observe that where the minimum wage increases, workers’ well-being improves — with no negative effect on employment. Workers have fewer unmet medical needs, better nutrition, less smoking, less child neglect, fewer low-birth-weight babies, and fewer teen births. With more income, they spend more money and rely less on government benefits — all positives for the economy. What’s more, my colleague Hazhir Rahmandad and I find that even in low-cost service settings, paying higher wages and treating workers with respect and dignity can be profit-maximizing. Good jobs also create a competitive advantage by enabling firms to differentiate and to adapt better to change.

Here are six steps that business leaders can and should take to address these inequities.

1. Recognize the problem and make a collective pledge to address it. When it comes to the role of businesses as employers, the focus tends to be on upskilling workers through education and public-private partnerships. That’s important, but the inherent assumption that the problem is on the supply side (too few qualified workers) misses the more important problem on the demand side (too few good jobs).

Business leaders need to publicly recognize the problem on the demand side and pledge to create well-paying jobs, without which we cannot maintain and expand a strong middle class. Part of what drove leaders of companies such as GM, GE, Coca-Cola, and Kodak to commit so effectively to creating well-paying jobs during World War II was fear of socialism and communism.

The stakes for capitalism are similar now. Many Americans are losing faith in capitalism and market economies, believing that capitalism inherently drives not only inequality, but also injustice. Even before the pandemic, 70% of Americans believed the economic system was rigged against them.

2. Commit to raise low wages. All large companies should calculate the distribution of annual frontline take-home pay and consider the budgetary needs of their workers. (With so many working part-time or irregular hours, the hourly wage itself doesn’t tell the story.) How many are below the living wage in their area? (When we at the nonprofit Good Jobs Institute share these data with executives, they are often surprised.) How many are single parents or students? How many families have two wage-earners? How many full-timers rely on welfare?

With these data, a company can make commitments that are reasonable but bold — the most it can manage, not the least it can get away with. If profit margins are high and low-wage workers are only a small part of costs, doing the right thing is easy. That’s why, in 2015, after calculating the potential benefits from higher wages (e.g., lower turnover, better customer service from employees who can focus on the job) Aetna could raise its minimum wage from $12 an hour to $16 an hour. Last March, Bank of America raised its minimum wage to $20 an hour. For others, like Walmart, raising wages requires systemic change. But it can be done and, if done right, can help both employees and employers win.

During our current economic crisis, such a change may feel impossible. But these commitments can be made over time. What matters is that companies consider cost of living, not just what others pay. Even more effective would be for large companies (or industry associations such as the National Retail Federation) to encourage other firms in their communities/industries to follow their lead. New research shows that raising wages is less competitively costly than companies may realize because once a large company raises wages, others in the area follow suit.

3. Provide career paths for low-wage workers. Companies such as Costco and QuikTrip — which offer careers, not merely jobs — aim for all frontline managers to be promoted from within. Committing to such a policy forces them to invest in the development of their workers. Equally important is factoring gender and race into promotion decisions. In retail, for example, 18% of cashiers and 12% of salespeople but only 10% of frontline supervisors are Black.

4. Disclose pay and turnover data. Revealing annual turnover and the annual take-home pay distribution — not just the average or median — by race and gender might be uncomfortable but will help drive conversations with the board and investors. (Intel is one company that discloses annual take-home pay buckets by race and gender.) Such transparency will show who’s operating in the vicious cycle described above. Quantitative benchmarks can help create peer pressure and enable customers, communities, and investors to track change at different companies. Benchmarks can also help people such as my students, many of whom seek employers that take care of their workers, decide where to take their considerable talents when they graduate.

5. Involve workers in technology decisions that affect their work. Too often, the people who do the work are excluded from such decisions — to the detriment of companies. Workers with frontline knowledge can help companies be smarter about choosing, deploying, and scaling technologies. They can also help identify technologies that would complement people rather than “so-so” technologies that replace people but don’t even improve productivity. Companies that commit to Step 2 can justify higher wages by making better use of the talent that they already have.

6. Drive public policies that improve workers’ well-being and the economy. Some business leaders are already recommending higher minimum wages. Benefits such as paid sick leave, smart-scheduling legislation that can improve stability for workers and companies, and government-sponsored child care for low-income families could also improve workers’ well-being and the economy. Changes to the tax code to favor investing in workers rather than automation would reduce incentives to invest in “so-so” technologies. Business leaders should be seen and heard advocating for all of these.

Even so, history shows that we can’t rely on business leaders alone. They need a context that pushes for good jobs. Apart from worker power and smart public policy, investors and business schools have important roles.

Investors. In my years of working with companies, I’ve seen how infrequently their leaders take a long-term view. Even when they accept the financial and competitive case for offering good jobs, some shy away from investing in their employees. Why? They fear a dip in profitability. “Anything that doesn’t yield a return in a year doesn’t make it,” I hear.

This short-term focus is driven, in part, by executives’ short tenures (the median CEO tenure in public companies was five years in 2017, the most recent data available) and, in part, by their fear that investors will punish them. Indeed, when Walmart began investing in its workers in 2014, its stock price took a hit. If investors were less quick to punish a company for “profit-draining” labor investments and more interested in really profit-draining employee turnover (and in the drivers of turnover such as take-home pay, schedule stability, and career paths), we’d see more companies prioritizing investment in people — and doing better.

Business schools. For decades, schools taught that the corporation’s duty is to maximize shareholder value. It is time to teach how to make a decent profit decently. When it comes to exemplary CEOs, we should celebrate those like Costco’s Jim Sinegal, who created a competitive business without sacrificing the financial well-being of frontline employees.

We should also provide students with an opportunity to develop compassion for the people they will lead. While teaching at MIT’s Sloan School of Management and Harvard Business School, I have seen plenty of programs that allow students to spend time with business and political leaders but none that encouraged them to spend time in a low-wage job. That would help them see past the stigmas of low-wage work and workers, witness how poor corporate decisions get in the way of delivering value to customers and good jobs to employees, and cultivate respect for the challenging work that takes place across all levels of the company.

The problem is vast, but the first step is for the job creators to reevaluate their assumptions about the jobs they create. It is easy to create a job that treats people like robots and justify it with the assumption that workers lack skills and abilities. But as we have seen, that attitude sows social unrest and puts a ceiling on the prospects of hardworking Americans and their communities.

If we are to live in a just world, we need to remember what Martin Luther King, Jr. told the sanitation workers in a crowded Memphis church in 1968: “So often we overlook the work and the significance of those who are not in professional jobs, of those who are not in the so-called big jobs. But let me say to you tonight, that whenever you are engaged in work that serves humanity and is for the building of humanity, it has dignity, and it has worth.”

Article link: https://hbr.org/2020/08/equality-in-the-u-s-starts-with-better-jobs
