healthcarereimagined

Envisioning healthcare for the 21st century


Disconnected Systems Lead to Disconnected Care

Posted by timmreardon on 11/26/2025
Posted in: Uncategorized.

Introduction to Disconnected Systems Leading to Disconnected Care

Disconnected systems undermine the promise of modern healthcare: by failing to advance collaboration and coordination, they force providers to make decisions without complete data, creating gaps in care that directly affect safety and quality. Patients also move between care facilities frequently, yet their data often does not move with them. The result is disconnected care rather than a seamless, whole-person experience.

Fragmented EHRs Put Patient Safety at Risk

Fragmented EHR records create challenges for both patients and providers. When information is incomplete or delayed, providers may unknowingly repeat tests that were already ordered elsewhere or miss crucial medication changes that should have shaped their treatment decisions. This lack of insight forces clinicians to rely on patient recall and manual workarounds, which are unreliable and increase the risk of preventable errors. For patients, these gaps translate into repeated procedures, inconsistent care plans, and the frustration of navigating a system that feels confusing and scattered.

The risk of patient harm grows sharply when data such as allergies, active prescriptions, or discharge instructions fails to update across systems. This goes beyond minor inconvenience and becomes an active threat to patient safety, contributing to medication errors, missed diagnoses, and delayed interventions. The problem is especially prominent in large federal health environments, where patients often receive care across multiple systems that operate independently rather than collaboratively.

How HITS Focuses on Reconnecting People & Data

HITS addresses this challenge by focusing on human-centered solutions built around real clinical workflows. This means actively engaging with stakeholders to understand how information fits into their daily routines and where breakdowns occur. By designing workflows so that the right information appears at the right moment, HITS reduces cognitive burden and ensures that technology supports decision-making rather than hindering it. These refined workflows and standardized data exchange form the foundation for systems that promote collaborative care and make interoperability feel natural rather than forced.


Furthermore, HITS understands that technology alone is not enough without meaningful support, training, and communication. HITS does this by providing strategic communications and change management, which guide providers through new tools and processes with clear guidance, hands-on training, and continuous support. By prioritizing both end-user adoption and system integration, HITS ensures that organizations build environments where technology, workflow, and people all move together in the same direction.

Importance of Shifting Away from Disconnected Care

Disconnected systems continue to deliver disconnected care, directly affecting patients. This is especially critical for patients with chronic illnesses, who require day-to-day coordinated care. Therefore, HITS aims to combine standardized data exchange, intuitive provider workflows, and training, so that health systems transform fragmented environments into integrated systems where every provider has access to a complete picture before moving forward. This ensures that all patients receive care that is safer and coordinated.

HITS

HITS provides healthcare management services & works with users to develop health informatics tools that promote safe, secure, and reliable care experiences. We believe technology must be designed with empathy, accountability, transparency, and human-centered design at its core. By combining government expertise with healthcare management, we deliver collaborative, high-quality solutions across military, federal, and commercial sectors. We take pride in our services and settle for nothing other than 100% quality solutions for our clients. Having the right team assist with data sharing is crucial to encouraging collaborative and secure care. If you’re looking for the right team that delivers technology with heart, HITS is it! You can reach out to us directly at info@healthitsol.com. Check out this link if you’re interested in having a 15-minute consultation with us: https://bit.ly/3RLsRXR.

References

  1. https://pmc.ncbi.nlm.nih.gov/articles/PMC10170908/
  2. https://insiteone.com/healthcares-silent-crisis-how-fragmented-it-systems-compromise-patient-care/

Article link: https://healthinformationtechnologysolutions.com/disconnected-systems-lead-to-disconnected-care/

How organizations build a culture of AI ethics – MIT Sloan Management

Posted by timmreardon on 11/19/2025
Posted in: Uncategorized.


By Kristin Burnham

Apr 8, 2025

Why It Matters

From risk management policies to the five stages of AI ethics, here’s how some organizations approach ethical AI.

Artificial intelligence is revolutionizing industries by automating customer service, optimizing supply chains, personalizing marketing campaigns, and in countless other ways. But with great power comes great responsibility — and risks. 

Organizations today must work to ensure that the AI systems they build or implement are safe, secure, unbiased, and transparent, according to Thomas Davenport, a Babson University professor and visiting scholar at the MIT Initiative on the Digital Economy. 

During a recent webinar hosted by MIT Sloan Management Review, Davenport highlighted a number of ethical risks that AI can introduce to businesses. These include algorithmic bias in machine learning, varying levels of model transparency, cybersecurity vulnerabilities, and the possibility that AI will serve users insensitive or inappropriate content. Organizations must also contend with whether AI will deliver useful results. 

To counteract these risks, organizations need to embed ethical practices into AI solutions from the start, Davenport said. They also need to ensure that teams and individuals are engaged in ethical AI as a part of their everyday work. 

Here’s a look at several responsible AI strategies that organizations are using today and how businesses can progress from discussing AI ethics to taking action. 

AI strategies at Unilever and Scotiabank 

Companies are adopting a variety of strategies to integrate AI ethics into their operations, Davenport said, including appointing heads of AI ethics, performing research about the topic, conducting beta testing, and using external assessors to evaluate use cases.

Consumer packaged goods company Unilever, for example, created an AI assurance function that examines each new AI application to determine its risk level in terms of both effectiveness and ethics, Davenport said. This process requires individuals who propose a use case to fill out a questionnaire. An AI-based application then determines whether the use case is likely to be approved or whether there are underlying problems with the use case. This process ensures that AI models are aligned with ethical guidelines before deployment. 

Scotiabank developed an AI risk management policy and stood up a data ethics team to advance a data ethics policy, Davenport said. The policies are part of the Canadian bank’s code of conduct, which all employees must agree to follow each year. The company also requires mandatory data ethics education for all employees working in the customer insights, data, and analytics organization or on other analytics teams. Scotiabank likewise worked with Deloitte to develop an automated ethics assistant — similar to Unilever’s — that reviews each use case before it’s deployed. The bank also involves employees in managing unstructured data to determine the most effective answers to customer questions.

“[This] kind of democratization of the process is important not only to your ethics, but also to your productivity as an organization in getting these systems up and running,” Davenport said.

5 stages of AI ethics

Davenport identified five stages that play crucial roles in fostering ethical AI development, deployment, and use within organizations. As companies advance through the stages, they move from talk to action, he noted. 

  1. Evangelism. In this stage, representatives of the company speak internally and externally about the importance of AI ethics.
  2. Policies. The company deliberates on and approves a set of corporate policies to ensure ethical approaches to AI are established.
  3. Documentation. The company records data on each AI use case. This could include the use of model cards, which explain how models were designed to be used and how they have been evaluated.
  4. Review. The company performs or sponsors a systematic review of each use case to determine whether it meets the company’s criteria for responsible AI.
  5. Action. The company develops a process whereby each use case is either accepted as is, returned to the proposing owner for revision, or rejected.
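The review and action stages above lend themselves to a simple decision procedure. Here is a minimal sketch in Python; the checklist items and field names are entirely hypothetical, since the article does not specify any company's actual criteria:

```python
# Illustrative sketch of stages 4-5 (review and action).
# The checks below are invented examples, not from the webinar.
REQUIRED_CHECKS = ("bias_assessed", "model_card_filed", "security_reviewed")

def review_use_case(use_case: dict) -> str:
    """Return 'accept', 'revise', or 'reject' for a proposed AI use case."""
    if use_case.get("prohibited"):  # e.g. a use the policy bans outright
        return "reject"
    # Anything missing from the checklist goes back to the proposer.
    missing = [c for c in REQUIRED_CHECKS if not use_case.get(c)]
    return "revise" if missing else "accept"

proposal = {"bias_assessed": True, "model_card_filed": False,
            "security_reviewed": True}
print(review_use_case(proposal))  # revise
```

The three possible outcomes mirror the action stage exactly: accepted as is, returned for revision, or rejected.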

Davenport said that it’s important for organizations to make strategic plans for integrating ethics into their AI strategy. “What use cases might make sense for us? We develop a model, we deploy the model, we monitor the model, and ethics come into place throughout that entire process,” he said. “That’s what the most successful organizations do with regard to AI ethics.”  

WATCH THE WEBINAR: HOW TO BUILD AN ETHICAL AI CULTURE

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/how-organizations-build-a-culture-ai-ethics?

AI crawler wars threaten to make the web more closed for everyone – MIT Technology Review

Posted by timmreardon on 11/19/2025
Posted in: Uncategorized.

There’s an accelerating cat-and-mouse game between web publishers and AI crawlers, and we all stand to lose. 

By Shayne Longpre

    February 11, 2025

    We often take the internet for granted. It’s an ocean of information at our fingertips—and it simply works. But this system relies on swarms of “crawlers”—bots that roam the web, visit millions of websites every day, and report what they see. This is how Google powers its search engines, how Amazon sets competitive prices, and how Kayak aggregates travel listings. Beyond the world of commerce, crawlers are essential for monitoring web security, enabling accessibility tools, and preserving historical archives. Academics, journalists, and civil societies also rely on them to conduct crucial investigative research.  

    Crawlers are endemic. Now representing half of all internet traffic, they will soon outpace human traffic. This unseen subway of the web ferries information from site to site, day and night. And as of late, they serve one more purpose: Companies such as OpenAI use web-crawled data to train their artificial intelligence systems, like ChatGPT. 

    Understandably, websites are now fighting back for fear that this invasive species—AI crawlers—will help displace them. But there’s a problem: This pushback is also threatening the transparency and open borders of the web that allow non-AI applications to flourish. Unless we are thoughtful about how we fix this, the web will increasingly be fortified with logins, paywalls, and access tolls that inhibit not just AI but the biodiversity of real users and useful crawlers.

    A system in turmoil 

    To grasp the problem, it’s important to understand how the web worked until recently, when crawlers and websites operated together in relative symbiosis. Crawlers were largely undisruptive and could even be beneficial, bringing people to websites from search engines like Google or Bing in exchange for their data. In turn, websites imposed few restrictions on crawlers, even helping them navigate their sites. Websites then and now use machine-readable files, called robots.txt files, to specify what content they wanted crawlers to leave alone. But there were few efforts to enforce these rules or identify crawlers that ignored them. The stakes seemed low, so sites didn’t invest in obstructing those crawlers.
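The robots.txt mechanism described above is machine-readable by design, and Python's standard library ships a parser for it. A minimal sketch, with invented bot names and rules:

```python
from urllib import robotparser

# A hypothetical robots.txt: the site asks one AI crawler to stay out
# entirely while leaving most of the site open to other bots.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The parser only reports what the site requested; compliance is voluntary.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SearchBot", "https://example.com/article"))     # True
print(rp.can_fetch("SearchBot", "https://example.com/private/x"))   # False
```

A well-behaved crawler calls `can_fetch` before requesting a page; a badly behaved one simply never asks, which is exactly the enforcement gap the article describes.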

    But now the popularity of AI has thrown the crawler ecosystem into disarray.

    As with an invasive species, crawlers for AI have an insatiable and undiscerning appetite for data, hoovering up Wikipedia articles, academic papers, and posts on Reddit, review websites, and blogs. All forms of data are on the menu—text, tables, images, audio, and video. And the AI systems that result can (but not always will) be used in ways that compete directly with their sources of data. News sites fear AI chatbots will lure away their readers; artists and designers fear that AI image generators will seduce their clients; and coding forums fear that AI code generators will supplant their contributors. 

    In response, websites are starting to turn crawlers away at the door. The motivator is largely the same: AI systems, and the crawlers that power them, may undercut the economic interests of anyone who publishes content to the web—by using the websites’ own data. This realization has ignited a series of crawler wars rippling beneath the surface.

    The fightback

    Web publishers have responded to AI with a trifecta of lawsuits, legislation, and computer science. What began with a litany of copyright infringement suits, including one from the New York Times, has turned into a wave of restrictions on use of websites’ data, as well as legislation such as the EU AI Act to protect copyright holders’ ability to opt out of AI training. 

    However, legal and legislative verdicts could take years, while the consequences of AI adoption are immediate. So in the meantime, data creators have focused on tightening the data faucet at the source: web crawlers. Since mid-2023, websites have erected crawler restrictions on over 25% of the highest-quality data. Yet many of these restrictions can simply be ignored, and while major AI developers like OpenAI and Anthropic do claim to respect websites’ restrictions, they’ve been accused of ignoring them or aggressively overwhelming websites (the major technical support forum iFixit is among those making such allegations).

    Now websites are turning to their last alternative: anti-crawling technologies. A plethora of new startups (TollBit, ScalePost, etc.) and web infrastructure companies like Cloudflare (estimated to support 20% of global web traffic) have begun to offer tools to detect, block, and charge nonhuman traffic. These tools erect obstacles that make sites harder to navigate or require crawlers to register.

    These measures still offer immediate protection. After all, AI companies can’t use what they can’t obtain, regardless of how courts rule on copyright and fair use. But the effect is that large web publishers, forums, and sites are often raising the drawbridge to all crawlers—even those that pose no threat. This is even the case once they ink lucrative deals with AI companies that want to preserve exclusivity over that data. Ultimately, the web is being subdivided into territories where fewer crawlers are welcome.

    How we stand to lose out

    As this cat-and-mouse game accelerates, big players tend to outlast little ones.  Large websites and publishers will defend their content in court or negotiate contracts. And massive tech companies can afford to license large data sets or create powerful crawlers to circumvent restrictions. But small creators, such as visual artists, YouTube educators, or bloggers, may feel they have only two options: hide their content behind logins and paywalls, or take it offline entirely. For real users, this is making it harder to access news articles, see content from their favorite creators, and navigate the web without hitting logins, subscription demands, and captchas each step of the way.

    Perhaps more concerning is the way large, exclusive contracts with AI companies are subdividing the web. Each deal raises the website’s incentive to remain exclusive and block anyone else from accessing the data—competitor or not. This will likely lead to further concentration of power in the hands of fewer AI developers and data publishers. A future where only large companies can license or crawl critical web data would suppress competition and fail to serve real users or many of the copyright holders.

    Put simply, following this path will shrink the biodiversity of the web. Crawlers from academic researchers, journalists, and non-AI applications may increasingly be denied open access. Unless we can nurture an ecosystem with different rules for different data uses, we may end up with strict borders across the web, exacting a price on openness and transparency. 

    While this path is not easily avoided, defenders of the open internet can insist on laws, policies, and technical infrastructure that explicitly protect noncompeting uses of web data from exclusive contracts while still protecting data creators and publishers. These rights are not at odds. We have so much to lose or gain from the fight to get data access right across the internet. As websites look for ways to adapt, we mustn’t sacrifice the open web on the altar of commercial AI.

    Shayne Longpre is a PhD Candidate at MIT, where his research focuses on the intersection of AI and policy. He leads the Data Provenance Initiative.

    Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2025/02/11/1111518/ai-crawler-wars-closed-web/amp/

    IBM CEO predicts quantum computing breakthrough in 3-5 years | Karl Haller

    Posted by timmreardon on 11/15/2025
    Posted in: Uncategorized.

    When IBM CEO Arvind Krishna says quantum computing is “three to five years away from shocking people,” it’s time to sit up and take notice.

    I shared earlier highlights of the AI portion of Arvind’s discussion with Malcolm Gladwell. They also went into #quantum, which is getting closer to becoming a reality for enterprises.

    3-5 years is 2028-30 — not so far into the future. And when he says “shocking,” he’s talking about:

    • Making billions in the financial markets. Approximately $13 trillion moves through the financial markets each day. Even a 1-basis-point improvement is $1.3 billion. Now you see why the recent paper by HSBC that “using quantum computer, bond trading was 34% more accurate than their prior technique” caused a ripple.
    • Dramatic improvements in operational efficiency. “Let’s take a post office in a mid-sized country [that] … burns 1 billion gallons of fuel per year.” Optimizing last-mile delivery is the classic #travelingsalesman problem. But we can currently only get to 80% efficiency. With quantum, if we can get another 10%, that’s 100M gallons of fuel, which could drive hundreds of millions in savings.

    “These are pretty attractive problems to go after.” Indeed.
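The last-mile delivery example is an instance of the traveling salesman problem, which classical systems attack with heuristics rather than exact search. A toy sketch of the common nearest-neighbor heuristic (coordinates invented), illustrating why classical routing settles for good-enough tours:

```python
import math
from itertools import permutations

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    n = len(order)
    return sum(math.dist(points[order[i]], points[order[(i + 1) % n]])
               for i in range(n))

def nearest_neighbor(points):
    """Greedy heuristic: always drive to the closest unvisited stop."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Five invented delivery stops; brute force is only feasible at toy scale.
stops = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]
greedy = nearest_neighbor(stops)
best = min(permutations(range(len(stops))),
           key=lambda o: tour_length(stops, o))
print(tour_length(stops, greedy) >= tour_length(stops, list(best)))  # True
```

The greedy tour is never shorter than the optimum, and on real fleets the gap between them is the fuel left on the table.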

    Today’s challenge is scale. “Quantum computing today is where GPUs and AI were in 2015.” But if your business has challenges that are solvable with quantum, and you’re not starting to experiment with it now, you may risk being “out of business in 10 years.”

    Quantum represents a “new kind of math” which enables us to solve “new kinds of problems.” For retailers and brands, I think about three things:

    • Merchandise planning at the item x store level that’s accurate to within 1-2% of demand.
    • Personalization at scale that actually delivers on the 1to1 promise we’ve been chasing for 25 years.
    • The ability to monitor E2E (farm to home) supply chains and dynamically trigger IFTTT actions with little-to-no human intervention.
      (and I’m sure there are many others)

    Arvind sees quantum as being “equal to semiconductors” in the ranking of technology advancements of the past 150 years. Yet it’s barely part of today’s conversations.

    The internet had its #NetscapeMoment.
    With AI, it was the launch of #ChatGPT.
    Quantum will hit its #TippingPoint as well … and it will probably happen sooner than we think.

    YouTube video — https://lnkd.in/eE9FdEU3
    IBM Smart Talks — https://lnkd.in/eHhgTF7z
    (also available on all major #podcast platforms)

    Article link: https://www.linkedin.com/posts/karlhaller_quantum-travelingsalesman-netscapemoment-activity-7392561559732105217-J44C?

    The Quantum Mirage

    Posted by timmreardon on 11/15/2025
    Posted in: Uncategorized.

    Most people don’t realize this, but the belief-to-reality ratio in quantum computing is completely upside down.

    After four decades of work and hundreds of billions spent, we still do not have a single fault-tolerant quantum computer. Yet if you look around the ecosystem, about 80 to 90 percent of people are still believers. Maybe 5 to 10 percent are cautious skeptics. And roughly 1 percent are true non-believers who actually understand the physics well enough to see the dead ends coming.

    Quantum systems collapse faster than they compute, and classical overhead grows faster than any claimed quantum advantage. That’s the bottleneck no roadmap, no marketing, and no optimism has ever solved.

    The believers are not stupid. They are just trapped in three patterns.

    • First, sunk cost. When so much money, reputation, and career prestige is tied to a dream, no one wants to be the first to say it failed.
    • Second, optimism. People love the story that relentless effort will eventually bend nature to our will. Sometimes it does. Sometimes nature says no.
    • Third, jargon. Quantum mechanics is weird enough that anything wrapped in the right vocabulary can sound plausible. Marketing teams take advantage of that.

    I am on the other side for a simple reason. I do not care about narratives. I only care about physics and engineering. If there were a real path to scalable, fault-tolerant quantum computing, I would have spotted it, worked on it, and solved it already. There is no path. That is why the field relies on belief instead of reproducible results.

    Quantum computing runs on hype, hope, and heroic storytelling.
    My work runs on physics.

    That is the real divide.

    Read my other post to see why I don’t believe in quantum computing.
    https://lnkd.in/ghQZvYfa

    Article link: https://www.linkedin.com/posts/alan-shields-56963035a_the-quantum-mirage-most-people-dont-realize-activity-7394590236082814977-019u?

    New MIT report captures state of quantum computing – MIT Sloan Management

    Posted by timmreardon on 11/15/2025
    Posted in: Uncategorized.

    by Beth Stackpole

    Aug 19, 2025

    Quantum computing is evolving into a tangible technology that holds significant business and commercial promise, although the exact timing of when it will impact those areas remains unclear, according to a new report led by researchers at the MIT Initiative on the Digital Economy.

    The “Quantum Index Report 2025” charts the technology’s momentum, with a comprehensive, data-driven assessment of the state of quantum technologies.

    “There are a lot of folks who are interested in what’s going on in quantum, but the field is impenetrable to them,” said Jonathan Ruane, a research scientist at MIT IDE and editor-in-chief of the “Quantum Index Report.”

    The “Quantum Index Report” is a comprehensive assessment of the technology and the global landscape, from patents to the quantum workforce.

    Why It Matters


    Here are five insights from the inaugural report.


    The inaugural report aims to make quantum computing and networking technologies more accessible to entrepreneurs, investors, teachers, and business decision makers — all of whom will play a critical role in how quantum computing is developed, commercialized, and governed. 

    The report is co-authored by researchers Elif Kiesow and Johannes Galatsanos from MIT IDE, and Carl Dukatz, Edward Blomquist, and Prashant Shukla from Accenture.

    Senior business executives across industries are fast becoming what Ruane calls “quantum curious,” inspired in part by the rapid rise of artificial intelligence. “The speed at which AI is transforming industries has alerted managers to the concept that technologies that are simmering in the background can explode really quickly and have tremendous impact,” Ruane said. “They want to make sure they have competency and insights into quantum so they don’t get caught out on missing the next big thing.”

    A wide range of quantum impacts

    The “Quantum Index Report” team considered activity in the quantum sector through a broad range of perspectives, from both publicly available data and novel, original data. The entire report, raw data, and data visualizations are available on an interactive website. 

    While the research team acknowledges the still-nascent nature of the quantum computing field and some inherent bias in the 2025 research, including a U.S. focus, Ruane stressed that there is substantial market momentum underway.

    Insights from the “Quantum Index Report 2025” include the following:

    Quantum processor performance is improving, with the U.S. leading the field. Two dozen manufacturers are now commercially offering more than 40 quantum processing units (QPUs), which are the processing hardware for a quantum computer. This is an indicator that the technology is becoming more accessible to business. While there have been impressive advancements in performance, QPUs do not yet meet the requirements for running large-scale commercial applications such as chemical simulations or cryptanalysis.

    Quantum technology patents are soaring, with the total number increasing fivefold from 2014 to 2024. Corporations and universities are spearheading innovation efforts, accounting for 91% of the patents filed, with corporations holding 54% and universities 37%. China held 60% of quantum patents as of 2024, followed by the U.S. and Japan.

    Venture capital funding for quantum technology reached a new high point in 2024. Quantum computing firms received the most funding ($1.6 billion in publicly announced investments), followed by quantum software companies at $621 million. The researchers note that quantum received less than 1% of total venture capital funding worldwide.

    Businesses are buzzing over quantum computing. The report tracks how often the technology was mentioned across more than 50,000 corporate communication vehicles, including press releases and earnings calls, from 2022 to 2024. There was a significant uptick in mentions each quarter in 2024, with the frequency outpacing that of previous years by a substantial margin. The researchers said that this positively correlates with the maturing of the quantum market and the growing presence of quantum technology in mainstream business discourse.

    Quantum skills and training are growing in importance as companies begin to focus on workforce development. The demand for quantum skills has nearly tripled since 2018, according to the report. In response, universities are establishing quantum hubs and standing up programs that connect business leaders with researchers. 

    “What we are seeing here is rapid progress and developments across a range of vectors — not just the improvement of technology benchmarks or the performance of quantum processing units,” Ruane said. “We are also seeing impact across a wide range of areas that are important to business leaders. It sends a signal that there’s breadth and depth in development.”

    READ THE “QUANTUM INDEX REPORT 2025” 

    Why AI for good depends on good data – Amazon Science

    Posted by timmreardon on 11/12/2025
    Posted in: Uncategorized.

    New technologies are helping vulnerable communities produce maps that integrate topographical, infrastructural, seasonal, and real-time data — an essential tool for many humanitarian endeavors.

    By Dr. Werner Vogels 

    October 14, 2025

    This is a condensed version of a talk that Amazon vice president and chief technology officer Dr. Werner Vogels gave at the AI for Good Global Summit in July 2025 in Geneva.

    In January 2007, my mentor, friend, and fellow computer scientist Jim Gray, a Turing Award laureate often described as the father of modern database systems, disappeared while sailing solo to the Farallon Islands off San Francisco. Despite deploying every technological resource imaginable, from repositioning government satellites to mobilizing thousands of recruits through Amazon’s Mechanical Turk to analyze satellite images, we never found him. If we had today’s AI resources, would the result have been different? Maybe. There are things that we can do now that we definitely could not do in 2007.

    While Jim’s friends were able to use their private-sector relationships and government clearances to access real-time satellite data, most vulnerable communities remain invisible in our digital representations of Earth. The Haiti earthquake of 2010 made this painfully clear. International rescue teams arrived in Port-au-Prince to find a city that was, for all practical purposes, unmapped. Emergency responders had GPS coordinates but couldn’t navigate because the maps they had couldn’t distinguish between alleys and major roadways or locate critical infrastructure like hospitals and shelters.

    The data divide

    The situation in Haiti isn’t unique. Consider Makoko, a community in Lagos, Nigeria, that is home to more than 300,000 people living on stilt houses in the Lagos Lagoon. On most maps, this entire community appears as a blank blue spot. These people are effectively invisible, unable to access basic services because they don’t exist in our spatial data models.

    The reason for this omission is simple: most maps are created for commercial purposes, not humanitarian needs. We meticulously map shopping districts in major cities but leave vast swaths of the developing world uncharted. This creates what I call the “data divide”, a disparity in data access that mirrors and exacerbates existing social inequalities. When we only map what’s profitable, we perpetuate these inequalities and leave the most vulnerable communities exposed.

    Now, if you think about maps, there’s not just one map of the earth. The moment you have a traditional map in your hand, it is out of date. Effective maps are multilayered systems operating across different timescales.

    First, there’s the Earth layer, the slow-changing geographical features that remain constant over decades or centuries. The Himalayas or Amazon Basin won’t be moving anytime soon. Then there’s the infrastructure layer — roads, bridges, and buildings that evolve over years. Next comes the seasonal layer, which tracks changes in vegetation, water levels, and other environmental factors that shift with the seasons. Finally, there’s the real-time layer, a constantly fluctuating stream of data about human activity, weather patterns, and emergency situations.

    Humanitarian mapping must integrate all these layers. During a flood, for example, we need real-time data about water levels (real-time layer), historical flood patterns (seasonal layer), existing drainage infrastructure (infrastructure layer), and underlying topography (Earth layer). Combining these data streams requires sophisticated AI models that can handle multiple data types and temporal scales.
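The layered model above can be sketched as a data structure. A minimal illustration of combining the four layers into a flood-risk score — the field names, weights, and thresholds here are invented for demonstration and are not drawn from any real humanitarian-mapping system:

```python
from dataclasses import dataclass

@dataclass
class MapCell:
    elevation_m: float          # Earth layer: underlying topography
    drainage_capacity: float    # infrastructure layer: 0..1 effectiveness
    seasonal_flood_freq: float  # seasonal layer: historical floods per year
    water_level_m: float        # real-time layer: current gauge reading

def flood_risk(cell: MapCell) -> float:
    """Blend all four layers into a single 0..1 risk score (toy weights)."""
    exposure = max(0.0, cell.water_level_m - cell.elevation_m)  # water above ground
    history = min(1.0, cell.seasonal_flood_freq / 2.0)          # cap at 2 floods/year
    mitigation = 1.0 - cell.drainage_capacity                   # weak drainage raises risk
    return min(1.0, 0.5 * min(1.0, exposure) + 0.3 * history + 0.2 * mitigation)

cell = MapCell(elevation_m=1.2, drainage_capacity=0.3,
               seasonal_flood_freq=1.0, water_level_m=2.0)
print(round(flood_risk(cell), 2))  # → 0.69
```

The point is structural, not numerical: each layer updates on its own timescale, but a query combines a snapshot of all of them.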

    Democratizing Earth data

    The good news is that the tools for data collection have become much more accessible. The number of Earth observation satellites has exploded from about 150 in 2008 to over 10,000 today. These satellites offer not just high-resolution imagery but advanced sensors like multispectral imagers, radar, and lidar.

    In the aftermath of the Haiti earthquake, roughly 600 members of the OpenStreetMap community were able to create the first reliable crisis map within 48 hours. It only took two days to go from unmapped to mapped. This crowdsourced map became the default navigation tool for every major responding organization, from the UN to the US Marine Corps. OpenStreetMap has since evolved into a global platform for collaborative mapping, with spinoffs like the Humanitarian OpenStreetMap Team (HOT) and Missing Maps focusing specifically on crisis response.

    Drones have emerged as a powerful complement to satellites, filling gaps where satellite imagery is insufficient or too expensive. The Mapping Makoko project trained local residents to pilot drones and map their community. This initiative did more than create a map; it empowered residents with a tool for political advocacy, demonstrating the power of democratized data collection.

    While satellites and drones provide macro-level data, mobile devices and Internet-of-things (IoT) sensors offer granular, real-time information. With over eight billion mobile devices globally, we have an unprecedented opportunity for crowdsourced data collection. In Southeast Asia, the Grab app (a super-app providing everything from ride hailing to food delivery) has created detailed maps of previously unmapped areas simply by tracking the routes of its drivers, who are familiar with neighborhoods, alleys, and unmarked homes. Similarly, India’s Namma Yatri app connects auto-rickshaw drivers with passengers while simultaneously generating accurate street maps of informal settlements.
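The trace-to-map idea can be shown with a toy aggregator: snap driver GPS points to a coarse grid and treat cells traversed by enough distinct drivers as evidence of a road. The grid size and visit threshold are assumptions; the actual Grab and Namma Yatri pipelines are far more sophisticated:

```python
from collections import Counter

def infer_road_cells(traces, cell_size=0.001, min_visits=3):
    """traces: list of (lat, lon) sequences, one per driver trip.

    Returns grid cells traversed by at least min_visits distinct trips."""
    visits = Counter()
    for trace in traces:
        seen = set()
        for lat, lon in trace:
            cell = (round(lat / cell_size), round(lon / cell_size))
            if cell not in seen:        # count each trip once per cell
                seen.add(cell)
                visits[cell] += 1
    return {cell for cell, n in visits.items() if n >= min_visits}

# Three trips along the same unmapped street, one stray point elsewhere.
streets = [[(6.5000, 3.3800), (6.5001, 3.3801)] for _ in range(3)]
print(infer_road_cells(streets + [[(6.6000, 3.4000)]]))  # → {(6500, 3380)}
```

Repetition across independent trips is what separates a road from GPS noise — the same principle, scaled up, is how driver movement fills in blank spots on the map.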

    IoT sensors embedded in infrastructure provide another layer of real-time data. Environmental sensors tracking air quality, water levels, or seismic activity can feed directly into mapping systems, creating a dynamic representation of a community’s current state.

    Building with open data

    During a recent visit to Rwanda, I saw firsthand how data-driven mapping can transform healthcare delivery. The Rwanda Health Intelligence Center uses real-time data to track healthcare utilization across the country. By combining this with geospatial data, they’ve calculated the maximum walking distance for pregnant women to reach a health center. This data directly informs where to build new facilities, optimizing resource allocation.

    Another inspiring example is the Ocean Cleanup project, which aims to remove 90% of ocean plastic by 2040. They’ve developed a river model using drones, AI analysis, and GPS-tagged dummy plastics to predict plastic-flow patterns. This data-driven approach allows them to position their cleanup systems in the most effective locations, while AI-powered cameras on bridges identify different types of plastic in real time.

    The sheer volume of geospatial data — hundreds of petabytes from satellites, drones, and IoT sensors — requires robust infrastructure. Cloud platforms like Amazon S3, which processes over a quadrillion requests every year, make it possible to store and process this data at scale. Our Open Data Sponsorship Program further removes barriers by covering costs for high-value public datasets, including OpenStreetMap, Sentinel-2 imagery, and various environmental-sensor data.

    Planetary problem-solving machine

    The combination of open data, advanced AI models, and cloud infrastructure creates what I call a planetary problem-solving machine. This trio can tackle challenges that were previously intractable. Open data ensures transparency and verifiability, while AI extracts insights that would be impossible for humans to discern.

    When we have data that could save lives or protect the environment, keeping it private is morally indefensible. The United Nations’ 17 Sustainable Development Goals all depend on geospatial data. Whether it’s ending poverty, achieving food security, or combating climate change, every goal requires location-based data to measure progress and guide interventions.

    The question for all of us is, what data do we have that could be useful to others? And more importantly, what data can we open up? If we don’t act, we risk perpetuating a world where the most vulnerable remain invisible, where disasters are compounded by lack of information, and where progress is measured only in places that are profitable.

    It is for this exact reason that in 2024 I launched the Now Go Build CTO Fellowship, bringing together technology leaders from nonprofits and social-good organizations working on climate change, disaster management, healthcare accessibility, food security, and education, and pairing them with experts at Amazon, AWS, and beyond. I’ve seen firsthand how these Fellows are using data to solve the world’s hardest problems, whether that’s measuring crop yields, connecting surplus food with charities and families, or piloting drones in conflict areas; none of it is possible without maps.

    Maps have always been more than navigation tools: they’re instruments of power. In the digital age, they’re becoming tools of justice, healthcare, and environmental protection. By making the invisible visible, we can create a more equitable world.

    Now go build.

    Article link: https://www.amazon.science/blog/why-ai-for-good-depends-on-good-data?

    Are hospitals and health systems really ready for AI? – Healthcare IT News

    Posted by timmreardon on 11/08/2025
    Posted in: Uncategorized.

    Healthcare leaders are bullish about the benefits of artificial intelligence, a new report from Kyndryl shows, but many are still grappling with basic questions around IT infrastructure, cybersecurity, regulations, workforce and change management.

    By Andrea Fox, Senior Editor | October 27, 2025 | 10:34 AM

    To unlock the full value of artificial intelligence at scale, healthcare organizations need to modernize their IT stack, improve cybersecurity and invest in upskilling their workforce alongside their technology strategy, a new report suggests.

    Drawing on data from its own AI-powered digital business platform, along with insights from some 3,700 business executives worldwide, the 2025 readiness report from infrastructure services company Kyndryl shows that healthcare (and other industries) is at a tipping point.

    AI has been making inroads and soon could bring substantial ROI. But there are challenges.

    For one, healthcare organizations are still falling behind on mitigating cybersecurity risks. More fundamentally, however, many are still hamstrung by a lack of alignment among C-suite leadership and key frontline staff about how to scale AI beyond initial pilot phases.

    To unlock AI’s full value, organizations need to modernize their infrastructure, improve cybersecurity and invest in upskilling their workforce alongside their technology strategy, said Trent Sanders, vice president for U.S. healthcare and life sciences at Kyndryl.

    “The organizations that succeed will be those that pair innovation with a culture that’s ready to embrace it,” Sanders told Healthcare IT News.

    ‘Seamless integration’

    The majority of business leaders across industries think AI will transform day-to-day functions over the next 12 months, according to the new report.

    But nearly half (49%) of businesses assessed by Kyndryl researchers are still seeing innovation delayed by a lack of technical readiness and a range of uncertainties.

    “Among organizations not yet seeing positive returns from AI, 35% blame integration difficulties,” they said in the October report. “Without seamless integration, even the most advanced technologies fail to deliver value.”

    But there’s some good news for healthcare: The industry is listed as one of the top performers across industries for AI-enabled automation.

    “High automation density correlates with accelerated recovery times, reduced human error and enhanced scalability,” said Kyndryl researchers. “It also enables predictive analytics and supports proactive remediation strategies, and 32% of organizations experienced reduced costs due to automation and optimization in the past 12 months.”

    Still, healthcare organizations continue to grapple with how to scale AI and workforce readiness – from technical skills to trust and beyond.

    The Kyndryl report notes that healthcare may be particularly vulnerable to AI integration difficulties and faces specific barriers that need to be overcome before more widespread positive ROI.

    ‘The issue isn’t just technical’

    Nearly one in five healthcare technologies are at or nearing the end of their service life, creating roadblocks to innovation, Sanders said.

    “Healthcare’s complexity is both its strength and its challenge,” he explained.

    “Healthcare providers are dealing with legacy systems, fragmented data environments and strict compliance requirements, all of which make AI integration more difficult than in other industries,” said Sanders.

    “But the issue isn’t just technical. Integration often stalls because leadership teams aren’t aligned on how to scale AI beyond the pilot phase.”

    To unlock ROI, he said he advises healthcare organizations to modernize their infrastructure and build hybrid cloud environments that support secure data flows.

    “When technology and leadership move in sync,” said Sanders, “that’s when AI starts delivering real value.”

    Healthcare organizations are also behind on their readiness to mitigate business risk – with just 38%, compared to 42% across industries, upgrading their infrastructure and investing in cybersecurity.

    “That’s concerning, especially when 85% of healthcare organizations have experienced a cyber-related outage in the past year,” said Sanders, who noted that “agility is as much about speed as it is about resilience.”

    Healthcare can improve agility, and some modernization strategies can make that happen quickly, he said.

    “Many systems are outdated, and that technical debt slows everything down,” said Sanders. “Quick wins come from replacing end-of-service assets, using AI to strengthen cyber defenses and fostering a culture that supports fast, informed decision-making.”

    Automation, where healthcare tends to succeed when compared to other industries, is not only reducing costs for healthcare organizations but also improving scalability.

    “Automation is absolutely helping healthcare organizations cut costs, but it’s also doing much more,” Sanders said. “High automation density means fewer manual errors, faster recovery times and better scalability. In healthcare, that translates to smoother operations and improved patient care.”

    But that automation “doesn’t work in isolation,” he pointed out. “The organizations seeing the biggest gains are also investing in cloud modernization and aligning their workforce around new ways of working. It’s the combination of automation plus intentional strategy that drives real impact.”

    ‘Growth rather than disruption’

    Kyndryl’s researchers found that 84% of healthcare leaders expect AI to completely transform roles within the next 12 months, despite legacy systems and other integration challenges.

    “It’s a bold prediction, and it speaks to the urgency healthcare leaders feel,” said Sanders. “That’s not just optimism; it’s a recognition that AI is no longer optional. It’s the lever for transformation.”

    He acknowledged that expectations alone won’t close the gap.

    The organizations that succeed in preparing their workforces “will be those that pair innovation with a culture that’s ready to embrace it,” he said.

    “When employees are brought into the process, trained and empowered, AI becomes a tool for growth rather than disruption.”

    Healthcare organizations can advance AI readiness despite workforce challenges – a shortage of talent and a lack of skills.

    “Skilling challenges are one of the biggest hurdles healthcare organizations face,” said Sanders. “AI is evolving fast, but our workforce isn’t always equipped to keep pace; technical skills, cognitive adaptability and trust in AI are all critical, and right now, they’re in short supply.”

    Aware of the gap, healthcare leaders are building trust with “transparency, ethical guidelines and involving employees in the implementation process,” he explained.

    “From there, it’s about investing in upskilling and reskilling, and creating cultures that embrace change,” said Sanders. “Organizations with adaptable cultures are significantly more likely to report positive ROI on AI. That’s not a coincidence. It’s a blueprint.” 

    Andrea Fox is senior editor of Healthcare IT News.
    Email: afox@himss.org
    Healthcare IT News is a HIMSS Media publication.

    Article link: https://www.healthcareitnews.com/news/are-hospitals-and-health-systems-really-ready-ai

    Make no mistake—AI is owned by Big Tech – MIT Technology Review

    Posted by timmreardon on 10/30/2025
    Posted in: Uncategorized.


    If we’re not careful, Microsoft, Amazon, and other large companies will leverage their position to set the policy agenda for AI, as they have in many other sectors.

    By Amba Kak, Sarah Myers West, and Meredith Whittaker

    December 5, 2023

    Until late November, when the epic saga of OpenAI’s board breakdown unfolded, the casual observer could be forgiven for assuming that the industry around generative AI was a vibrant competitive ecosystem. 

    But this is not the case—nor has it ever been. And understanding why is fundamental to understanding what AI is, and what threats it poses. Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms’ vast consumer market reach to deploy and sell their AI products. 

    Indeed, many startups simply license and rebrand AI models created and sold by these tech giants or their partner startups. This is because large tech firms have accrued significant advantages over the past decade. Thanks to platform dominance and the self-reinforcing properties of the surveillance business model, they own and control the ingredients necessary to develop and deploy large-scale AI. They also shape the incentive structures for the field of research and development in AI, defining the technology’s present and future. 

    The recent OpenAI saga, in which Microsoft exerted its quiet but firm dominance over the “capped profit” entity, provides a powerful demonstration of what we’ve been analyzing for the last half-decade. To wit: those with the money make the rules. And right now, they’re engaged in a race to the bottom, releasing systems before they’re ready in an attempt to retain their dominant position. 

    Concentrated power isn’t just a problem for markets. Relying on a few unaccountable corporate actors for core infrastructure is a problem for democracy, culture, and individual and collective agency. Without significant intervention, the AI market will only end up rewarding and entrenching the very same companies that reaped the profits of the invasive surveillance business model that has powered the commercial internet, often at the expense of the public. 

    The Cambridge Analytica scandal was just one among many that exposed this seedy reality. Such concentration also creates single points of failure, which raises real security threats. And Securities and Exchange Commission chair Gary Gensler has warned that having a small number of AI models and actors at the foundation of the AI ecosystem poses systemic risks to the financial order, in which the effects of a single failure could be distributed much more widely. 

    The assertion that AI is contingent on—and exacerbates—concentration of power in the tech industry has often been met with pushback. Investors who have moved quickly from Web3 to the metaverse to AI are keen to realize returns in an ecosystem where a frenzied press cycle drives valuations toward profitable IPOs and acquisitions, even if the promises of the technology in question aren’t ever realized.

    But the attempted ouster—and subsequent reintegration—of OpenAI cofounders Sam Altman and Greg Brockman doesn’t just bring the power and influence of Microsoft into sharp focus; it also proves our case that these commercial arrangements give Big Tech profound control over the trajectory of AI. The story is fairly simple: after apparently being blindsided by the board’s decision, Microsoft moved to protect its investment and its road map to profit. The company quickly exerted its weight, rallying behind Altman and promising to “acquihire” those who wanted to defect. 

    Microsoft now has a seat on OpenAI’s board, albeit a nonvoting one. But the true leverage that Big Tech holds in the AI landscape is the combination of its computing power, data, and vast market reach. In order to pursue its bigger-is-better approach to AI development, OpenAI made a deal. It exclusively licenses its GPT-4 system and all other OpenAI models to Microsoft in exchange for access to Microsoft’s computing infrastructure.

    For companies hoping to build base models, there is little alternative to working with either Microsoft, Google, or Amazon. And those at the center of AI are well aware of this, as illustrated by Sam Altman’s furtive search for Saudi and Emirati sovereign investment in a hardware venture he hoped would rival Nvidia. That company holds a near monopoly on state-of-the-art chips for AI training and is another key choke point along the AI supply chain. US regulators have since unwound an initial investment by Saudi Arabia into an Altman-backed company, RainAI, reinforcing the difficulty OpenAI faces in navigating the even more concentrated chipmaking market.

    There are few meaningful alternatives, even for those willing to go the extra mile to build industry-independent AI. As we’ve outlined elsewhere, “open-source AI”—an ill-defined term that’s currently used to describe everything from Meta’s (comparatively closed) LLaMA-2 to Eleuther’s (maximally open) Pythia series—can’t on its own offer escape velocity from industry concentration. For one thing, many open-source AI projects operate through compute credits, revenue sharing, or other contractual arrangements with tech giants that grapple with the same structural dependencies. In addition, Big Tech has a long legacy of capturing, or otherwise attempting to seek profit from, open-source development. Open-source AI can offer transparency, reusability, and extensibility, and these can be positive. But it does not address the problem of concentrated power in the AI market.

    The OpenAI-Microsoft saga also demonstrates a fact that’s frequently lost in the hype around AI: there isn’t yet a clear business model outside of increasing cloud profits for Big Tech by bundling AI services with cloud infrastructure. And a business model is important when you’re talking about systems that can cost hundreds of millions of dollars to train and develop. 

    Microsoft isn’t alone here: Amazon, for example, runs a marketplace for AI models, on which all of its products, and a handful of others, operate using Amazon Web Services. The company recently struck an investment deal of up to $4 billion with Anthropic, which has also pledged to use Amazon’s in-house chip, Trainium, optimized for building large-scale AI. 

    Big Tech is becoming increasingly assertive in its maneuverings to protect its hold over the market. Make no mistake: though OpenAI was in the crosshairs this time, now that we’ve all seen what it looks like for a small entity when a big firm it depends on decides to flex, others will be paying attention and falling in line. 

    Regulation could help, but government policy often winds up entrenching, rather than mitigating, the power of these companies as they leverage their access to money and their political clout. Take Microsoft’s recent moves in the UK as an example: last week it announced a £2.5 billion investment in building out cloud infrastructure in the UK, a move lauded by a prime minister who has clearly signaled his ambitions to build a homegrown AI sector in the UK as his primary legacy. This news can’t be read in isolation: it is a clear attempt to blunt an investigation into the cloud market by the UK’s competition regulator following a study that specifically called out concerns registered by a range of market participants regarding Microsoft’s anticompetitive behavior.

    From OpenAI’s (ultimately empty) threat to leave the EU over the AI Act to Meta’s lobbying to exempt open-source AI from basic accountability obligations to Microsoft’s push for restrictive licensing to the Big Tech–funded campaign to embed fellows in Congress, we’re seeing increasingly aggressive stances from large firms that are trying to shore up their dominance by wielding their considerable economic and political power.

    Tech industry giants are already circling their wagons as new regulations emerge from the White House, the EU, and elsewhere. But it’s clear we need to go much further. Now’s the time for a meaningful and robust accountability regime that places the interests of the public above the promises of firms not known for keeping them. 

    We need aggressive transparency mandates that clear away the opacity around fundamental issues like the data AI companies are accessing to train their models. We also need liability regimes that place the burden on companies to demonstrate that they meet baseline privacy, security, and bias standards before their AI products are publicly released. And to begin to address concentration, we need bold regulation that forces business separation between different layers of the AI stack and doesn’t allow Big Tech to leverage its dominance in infrastructure to consolidate its position in the market for AI models and applications. 


    Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/12/05/1084393/make-no-mistake-ai-is-owned-by-big-tech/amp/

    AI implementation strategies: 4 insights from MIT Sloan Management Review

    Posted by timmreardon on 10/30/2025
    Posted in: Uncategorized.


    by Brian Eastwood

    Oct 6, 2025

    What you’ll learn:

    • Apollo Global Management is assessing AI value across entire industries to cut costs and improve productivity in its portfolio companies.
    • Michelin’s proof-of-concept approach has identified 200-plus AI use cases that generate 50 million euros in ROI annually.
    • “Vibe analytics” can help leaders get data insights in minutes instead of weeks.
    • A new framework examines four modes of human-robot collaboration in warehouses.

    When implementing artificial intelligence, enterprise leaders must consider where AI will create value, not just where it will be useful. The latest ideas from MIT Sloan Management Review illustrate how to make this happen in verticals as varied as manufacturing, publishing, cybersecurity, and e-commerce — with specific takeaways for warehouse operations.

    Assess AI’s impact across an industry 

    One important sign that AI has the potential to create value is the willingness of private equity firms to build AI capabilities into portfolio companies, according to Thomas H. Davenport, a research fellow with the MIT Initiative on the Digital Economy. Consider how Apollo Global Management has taken AI from pilot to production to scale across its portfolio:

    • Educational publisher Cengage has cut content production costs by 40% and lead generation costs by 20% through process automation.
    • At Yahoo, AI-generated code has helped engineering teams improve productivity by more than 20%.
    • Chemical distributor Univar Solutions achieved a 30% engagement rate with an AI agent that reached out to dormant accounts. 

    Apollo starts by evaluating AI’s risks and rewards at a macro level — assessing how AI is impacting not just the target company but its entire industry. Davenport and co-author Randy Bean write that this helps Apollo avoid high-risk scenarios and ensure that innovation happens in the right place at the right time. Next, Apollo develops AI use cases for the target company and crafts a post-acquisition implementation plan. 

    Apollo also partnered with a venture capital firm to launch an incubator for B2B AI startups that includes companies focused on supply chain resiliency, manufacturing response time, and a range of other AI capabilities. The expectation is that the startups will eventually provide services to Apollo’s portfolio companies.

    Find AI’s value at proof of concept

    For some enterprises, AI efforts focus on core business processes. For others, it’s all about innovation from the ground up. Multinational manufacturer Michelin Group has managed to do both. In a second analysis, Davenport and Bean highlight how Michelin is accelerating manufacturing innovation, with more than 200 AI use cases across quality control, inventory management, and predictive modeling. 

    The primary lever for AI adoption, group chief data and AI officer Ambica Rajagopal said, is identifying potential value at the proof-of-concept stage. Rajagopal’s team also conducts a post-deployment assessment of the actual value delivered.

    From there, Michelin’s innovation team of 6,000 employees across 13 countries gets to work. All told, improved productivity from AI projects now generates more than 50 million euros in ROI per year, with a growth rate increase approaching 40% annually. That has helped Michelin empower employees in all roles to harness data, create value, and support the company’s growth. 

    Empower leaders to ask questions of data sets

    If leaders could get data insights in minutes instead of weeks, they would be well positioned to determine which initiatives show the most promise. Michael Schrage, a research fellow with the MIT Initiative on the Digital Economy, examines an approach dubbed “vibe analytics.” This approach lets decision makers engage directly with data through AI-powered conversation, eliminating the traditional translation process between business questions and technical analysis.

    The concept builds on “vibe coding,” an AI-assisted approach that lets people create code using everyday language. Vibe analytics allows leaders to ask questions like “What’s happening with our conversion rates?” and immediately explore potential causes through improvisational dialogue with AI. 

    Vibe analytics stands to democratize how knowledge is generated in organizations, Schrage writes. Instead of waiting for static reports, leaders can start a direct dialogue with messy data. And teams can turn KPIs into conversational partners and debug assumptions in real time, accelerating decision-making while revealing unexpected patterns.

    Employing vibe analytics, a Southeast Asian telecom company surfaced more financially relevant insights in 90 minutes than it typically generates in 90 days, developing a novel scoring system that reveals which service contracts correlate with higher margins and risks. Meanwhile, a cybersecurity firm discovered actionable patterns in its freemium customer base that its revenue team hadn’t considered.
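    The core pattern behind vibe analytics is simple: a leader's plain-language question is translated into executable analysis and run directly against the data. Below is a minimal, hypothetical sketch of that loop in Python with pandas. The `answer` function stands in for the LLM translation step (a real system would generate the query code from the question); the contract data and the keyword routing are invented for illustration and are not from the article.

    ```python
    import pandas as pd

    # Toy service-contract data; in practice this would come from the
    # enterprise data warehouse.
    contracts = pd.DataFrame({
        "segment": ["enterprise", "enterprise", "smb", "smb"],
        "margin": [0.42, 0.38, 0.21, 0.19],
        "risk_score": [2, 3, 5, 4],
    })

    def answer(question: str, df: pd.DataFrame) -> pd.DataFrame:
        """Stand-in for the AI step: map a natural-language question to a
        pandas computation and execute it. A real vibe-analytics system
        would generate this code from the question via an LLM."""
        if "margin" in question.lower():
            # Compare average margin and risk across customer segments.
            return df.groupby("segment")[["margin", "risk_score"]].mean()
        raise ValueError("question not understood by this toy router")

    result = answer("Which segments correlate with higher margins and risks?",
                    contracts)
    print(result)
    ```

    The point of the pattern is the tight loop: the leader asks a follow-up, the generated query changes, and the answer returns in seconds rather than going through a reporting queue.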

    Help robots and workers get along

    While teams of robots and humans are increasingly common in warehouses, effective collaboration remains elusive. Workers struggle to keep pace with robots that rarely need breaks, and they get frustrated with rigid automated systems. Slowing robots down or taking them offline entirely is good for morale but bad for cost management.

    With that in mind, Benedict Jun Ma and Maria Jesús Saénz at the MIT Digital Supply Chain Transformation Lab created a framework to describe human-robot collaboration in warehouses and distribution facilities, where they see AI becoming a critical tool for improving how humans and robots work together. They begin with four modes of collaboration:

    • Robot-in-lead, ideal for unloading cargo and picking simple orders (such as in a shoe warehouse).
    • Human-in-lead, helpful for packaging orders (especially for high-value items).
    • Elementary collaboration, where robots gather items for workers to sort.
    • Advanced collaboration, where AI helps robots better match human speed and strength, as well as forecast and manage disruption.

    As the authors note, AI is vital to advanced human-robot collaboration. It gives robots contextual awareness, such as the processing of fragile goods that require special handling; it also optimizes robots’ movement through the warehouse and supports audio or visual communication with human workers. AI models can also assess robots’ performance, recommend adjustments, and provide alerts to human workers.
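    The four collaboration modes can be sketched as a small taxonomy with a routing rule. This is an illustrative Python sketch only: the mode names follow the article, but the `recommend_mode` thresholds and inputs are hypothetical and are not part of the MIT framework.

    ```python
    from enum import Enum

    class CollaborationMode(Enum):
        ROBOT_IN_LEAD = "robot-in-lead"          # unloading cargo, simple picking
        HUMAN_IN_LEAD = "human-in-lead"          # packaging high-value orders
        ELEMENTARY = "elementary collaboration"  # robots gather, humans sort
        ADVANCED = "advanced collaboration"      # AI-matched pacing, disruption handling

    def recommend_mode(task_complexity: int, item_value: int,
                       ai_available: bool) -> CollaborationMode:
        """Hypothetical routing rules, loosely mapped to the four modes.

        task_complexity and item_value are notional 1-10 scores;
        the thresholds below are assumptions for illustration.
        """
        if ai_available:
            # Advanced collaboration requires the AI layer described above.
            return CollaborationMode.ADVANCED
        if item_value > 7:
            # High-value items warrant human-led packaging.
            return CollaborationMode.HUMAN_IN_LEAD
        if task_complexity <= 3:
            # Simple, repetitive work suits robot-led execution.
            return CollaborationMode.ROBOT_IN_LEAD
        return CollaborationMode.ELEMENTARY
    ```

    A warehouse operator might call `recommend_mode(2, 1, False)` for routine shoe-order picking and get `ROBOT_IN_LEAD`, while a fragile, high-value order without the AI layer would route to `HUMAN_IN_LEAD`.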


    This article draws on insights from MIT Sloan Management Review, which leads the discourse about advances in management practice among influential thought leaders in business and academia. The publication equips its readers with evidence-based insights and guidance to innovate, operate, lead, and create value in a world being transformed by technology and large-scale societal and environmental forces.

    Article link: https://mitsloan.mit.edu/ideas-made-to-matter/ai-implementation-strategies-4-insights-mit-sloan-management-review?
