Artificial Intelligence Impacts on Privacy Law – RAND

Posted by timmreardon on 08/26/2024
Posted in: Uncategorized.

Tifani Sadek, Karlyn D. Stanley, Gregory Smith, Krystyna Marcinek, Paul Cormarie, Salil Gunashekar

Research published August 8, 2024

Key Takeaways

  • One of the aspects of artificial intelligence (AI) that makes it difficult to regulate is algorithmic opacity, or the potential lack of understanding of exactly how an algorithm may use, collect, or alter data or make decisions based on those data.
  • Potential regulatory solutions include (1) minimizing the data that companies can collect and use and (2) mandating audits and disclosures of the use of AI.
  • A key issue is finding the right balance of regulation and innovation. Focusing on the data used in AI, the purposes for which AI is used, and the outcomes AI produces can potentially alleviate this concern.

The European Union (EU)’s Artificial Intelligence (AI) Act is a landmark piece of legislation that lays out a detailed and wide-ranging framework for the comprehensive regulation of AI deployment in the European Union, covering the development, testing, and use of AI.[1] This is one of several reports intended to serve as succinct snapshots of a variety of interconnected subjects that are central to AI governance discussions in the United States, in the European Union, and globally. This report, which focuses on AI impacts on privacy law, is not intended to provide a comprehensive analysis but rather to spark dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist. Although we refrain from offering definitive recommendations, we explore a set of priority options that the United States could consider in relation to different aspects of AI governance in light of the EU AI Act.

AI promises to usher in an era of rapid technological evolution that could affect virtually every aspect of society in both positive and negative manners. The beneficial features of AI require the collection, processing, and interpretation of large amounts of data—including personal and sensitive data. As a result, questions surrounding data protection and privacy rights have surfaced in the public discourse. 

Privacy protection plays a pivotal role in individuals maintaining control over their personal information and in agencies safeguarding individuals’ sensitive information and preventing the fraudulent use and unauthorized access of individuals’ data. Despite this, the United States lacks a comprehensive federal statutory or regulatory framework governing data rights, privacy, and protection. Currently, the only consumer protections that exist are state-specific privacy laws and federal laws that offer limited protection in specific contexts, such as health information. The fragmented nature of a state-by-state data rights regime can make compliance unduly difficult and can stifle innovation.[2] For this reason, President Joe Biden called on Congress to pass bipartisan legislation “to better protect Americans’ privacy, including from the risks posed by AI.”[3]

There have been several attempts at comprehensive federal privacy legislation, including the American Privacy Rights Act (APRA), which aims to protect the collection, transfer, and use of Americans’ data in most circumstances.[4] Although some data privacy issues could be addressed in legislation, there would still be gaps in data protection because of AI’s unique attributes. In this report, we identify those gaps and highlight possible options to address them.

Nature of the Problem: Privacy Concerns Specific to AI

From a data privacy perspective, one of AI’s most concerning aspects is the potential lack of understanding of exactly how an algorithm may use, collect, or alter data or make decisions based on those data.[5] This potential lack of understanding is referred to as algorithmic opacity, and it can result from the inherent complexity of the algorithm, the purposeful concealment of a company using trade secrets law to protect its algorithm, or the use of machine learning to build the algorithm—in which case, even the algorithm’s creators may not be able to predict how it will perform.[6] Algorithmic opacity can make it difficult or impossible to see how data inputs are being transformed into data or decision outputs, limiting the ability to inspect or regulate the AI in question.[7]

There are other general privacy concerns that take on unique aspects related to AI or that are further exacerbated by the unique characteristics of AI:[8]

  • Data repurposing refers to data being used beyond their intended and stated purpose and without the data subject’s knowledge or consent. In a general privacy context, an example would be when contact information collected for a purchase receipt is later used for marketing purposes for a different product. In an AI context, data repurposing can occur when biographical data collected from one person are fed into an algorithm that then learns from the patterns associated with that person’s data. For example, the stimulus package in the wake of the 2008 financial crisis included funding for digitization of health care records for the purpose of easily transferring health care data between health care providers, a benefit for the individual patient.[9] However, hospitals and insurers might use medical algorithms to determine individual health risks and eligibility to receive medical treatment, a purpose not originally intended.[10] A particular problem with data privacy in AI use is that existing data sets collected over the past decade may be used and recombined in ways that could not be reasonably foreseen and incorporated into decisionmaking at the time of collection.[11]
  • Data spillovers occur when data are collected on individuals who were not intended to be included when the data were collected. An example would be the use of AI to analyze a photograph taken of a consenting individual that also includes others who have not consented. Another example involves the relatives of a person who uploads a genetic profile to a genetic data aggregator, such as 23andMe: the profile inevitably reveals information about family members who never consented.
  • Data persistence refers to data existing longer than reasonably anticipated at the time of collection and possibly beyond the lifespan of the human subjects who created the data or consented to their use. This issue arises because once digital data are created, they are difficult to delete completely, especially if the data are incorporated into an AI algorithm and repackaged or repurposed.[12] As the costs of storing and maintaining data have plummeted over the past decade, even the smallest organizations can store data indefinitely, adding to the occurrence of data persistence issues.[13] This is concerning from a privacy point of view because privacy preferences typically change over time. For example, individuals tend to become more conservative with their privacy preferences as they grow older.[14] With data persistence, consent given in early adulthood may lead to data being used over the course of the individual’s lifetime and beyond (see the sketch following this list).
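
To make the repurposing and persistence risks concrete, here is a minimal sketch in Python of a reuse check that re-validates purpose and consent age before stored data are fed into a new processing run. The ten-year consent window, the function name may_reuse, and its parameters are all illustrative assumptions; no statute discussed here prescribes them.

    from datetime import datetime, timedelta

    # Illustrative assumption: consent is treated as stale after ten years,
    # reflecting the tendency of privacy preferences to drift over a lifetime.
    CONSENT_VALIDITY = timedelta(days=365 * 10)

    def may_reuse(consent_given: datetime, original_purpose: str,
                  proposed_purpose: str, now: datetime | None = None) -> bool:
        """Re-validate consent before reusing stored personal data."""
        now = now or datetime.now()
        if proposed_purpose != original_purpose:
            return False  # repurposing: beyond the stated purpose at collection
        if now - consent_given > CONSENT_VALIDITY:
            return False  # persistence: consent has outlived a plausible shelf life
        return True

A real regime would also need deletion guarantees, which, as noted above, are hard to honor once data have been folded into a trained model.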

Possible Options to Address AI’s Unique Privacy Risks

In most comprehensive data privacy proposals, the foundation is typically set by providing individuals with fundamental rights over their data and privacy, from which the remaining system unfolds. The EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act—two notable comprehensive data privacy regimes—both begin with a general guarantee of fundamental rights protection, including rights to privacy, personal data protection, and nondiscrimination.[15] Specifically, the data protection rights include the right to know what data have been collected, the right to know what data have been shared with third parties and with whom, the right to have data deleted, and the right to have incorrect data corrected.
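
Read as an engineering requirement, these four rights map onto a small service interface. The sketch below is only one possible rendering in Python; the interface name, method names, and record type are hypothetical, since neither the GDPR nor the California statute prescribes an API.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class DataRecord:
        field: str               # e.g., "email"
        value: str
        shared_with: list[str]   # third parties that received this field

    class DataSubjectRights(Protocol):
        """Hypothetical interface mirroring the four rights described above."""

        def known_collected(self, subject_id: str) -> list[DataRecord]:
            """Right to know what data have been collected."""
            ...

        def known_shared(self, subject_id: str) -> dict[str, list[str]]:
            """Right to know what data were shared, and with whom."""
            ...

        def delete_all(self, subject_id: str) -> None:
            """Right to have data deleted."""
            ...

        def correct(self, subject_id: str, field: str, new_value: str) -> None:
            """Right to have incorrect data corrected."""
            ...

The difficulty described next can be restated in these terms: algorithmic opacity makes operations like known_shared and delete_all hard to implement truthfully once data have flowed into a model.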

Prior to the proliferation of AI, good data management systems and procedures could make compliance with these rights attainable. However, the AI privacy concerns listed above render full compliance more difficult. Specifically, algorithmic opacity makes it difficult to know how data are being used, so it becomes more difficult to know when data have been shared or whether they have been completely deleted. Data repurposing makes it difficult to know how data have been used and with whom they have been shared. Data spillover makes it difficult to know exactly what data have been collected on a particular individual. These issues, along with the plummeting cost of data storage, exacerbate data persistence, or the maintenance of data beyond their intended use or purpose. 

The unique problems associated with AI give rise to several options for resolving or mitigating these issues. Here, we provide summaries of these options and examples of how they have been enacted within other regulatory regimes.

Data Minimization and Limitation

Data minimization refers to the practice of limiting the collection of personal information to that which is directly relevant and necessary to accomplish specific and narrowly identified goals.[16] This stands in contrast to the approach used by many companies today, which is to collect as much information as possible. Under the tenets of data minimization, the use of any data would be legally limited to the use identified at the time of collection.[17] Data minimization is also a key privacy principle for reducing the risks associated with privacy breaches.
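
As a minimal sketch of the tenet, assume a hypothetical mapping from each declared purpose to the fields strictly necessary for it; neither APRA nor the GDPR prescribes such a structure, and the names below are illustrative.

    # Hypothetical allow-list: fields strictly necessary for each declared purpose.
    NECESSARY_FIELDS = {
        "purchase_receipt": {"email", "order_id"},
        "shipping": {"name", "address", "order_id"},
    }

    def minimize(raw_form: dict[str, str], purpose: str) -> dict[str, str]:
        """Retain only the fields necessary for the declared collection purpose."""
        allowed = NECESSARY_FIELDS.get(purpose, set())
        return {k: v for k, v in raw_form.items() if k in allowed}

    # A field with no bearing on the receipt (here, "age") is never stored.
    print(minimize({"email": "a@b.c", "age": "44", "order_id": "17"},
                   "purchase_receipt"))  # {'email': 'a@b.c', 'order_id': '17'}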

Several privacy frameworks incorporate the concept of data minimization. The proposed APRA includes data minimization standards that prevent collection, processing, retention, or transfer of data “beyond what is necessary, proportionate, or limited to provide or maintain” a product or service.[18] The EU GDPR notes that “[p]ersonal data should be adequate, relevant and limited to what is necessary for the purposes for which they are processed.”[19] The EU AI Act reaffirms that the principles of data minimization and data protection apply to AI systems throughout their entire life cycles whenever personal data are processed.[20] It also imposes strict rules on collecting and using biometric data: for example, it prohibits AI systems that “create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV [closed-circuit television] footage.”[21]

As another example of a way to incorporate data minimization, APRA would establish a duty of loyalty, which prohibits covered entities from collecting, using, or transferring covered data beyond what is reasonably necessary to provide the service requested by the individual, unless the use is one of the explicitly permissible purposes listed in the bill.[22] Among other things, the bill would require covered entities to get a consumer’s affirmative, express consent before transferring their sensitive covered data to a third party, unless a specific exception applies.
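
One way to read the duty of loyalty is as a gate on each proposed transfer. The sketch below is an interpretation for illustration only; the permissible purposes shown are placeholders, not the list enumerated in the bill, and the function name and parameters are assumptions.

    # Placeholder stand-ins for APRA's enumerated permissible purposes.
    PERMISSIBLE_PURPOSES = {"security", "legal_compliance"}

    def may_transfer(necessary_for_requested_service: bool, purpose: str,
                     data_is_sensitive: bool, has_express_consent: bool) -> bool:
        """Gate a third-party transfer under a duty-of-loyalty reading."""
        if not (necessary_for_requested_service
                or purpose in PERMISSIBLE_PURPOSES):
            return False  # beyond what is reasonably necessary
        if data_is_sensitive and not has_express_consent:
            return False  # sensitive covered data needs affirmative, express consent
        return True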

Algorithmic Impact Assessments for Public Use

Algorithmic impact assessments (AIAs) are intended to require accountability for organizations that deploy automated decisionmaking systems.[23] AIAs counter the problem of algorithmic opacity by surfacing potential harms caused by the use of AI in decisionmaking and call for organizations to take practical steps to mitigate any identifiable harms.[24] An AIA would mandate disclosure of proposed and existing AI-based decision systems, including their purpose, reach, and potential impacts on communities, before such algorithms were deployed.[25] When applied to public organizations, AIAs shed light on the use of the algorithm and help avoid political backlash regarding systems that the public does not trust.[26]
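
One way to picture the disclosure an AIA would mandate is as a structured record that must exist before deployment. The field and method names below are illustrative, derived from the purpose/reach/impact framing above rather than from any statute.

    from dataclasses import dataclass, field

    @dataclass
    class AlgorithmicImpactAssessment:
        """Hypothetical pre-deployment disclosure record for one AI system."""
        system_name: str
        purpose: str                   # what the system decides, and why
        reach: str                     # who is affected, and at what scale
        community_impacts: list[str]   # potential harms surfaced by the assessment
        mitigations: list[str] = field(default_factory=list)

        def ready_to_deploy(self) -> bool:
            # Sketch rule: deployment waits until each identified impact
            # has at least one corresponding mitigation on record.
            return len(self.mitigations) >= len(self.community_impacts)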

APRA includes a requirement that large data holders conduct AIAs that weigh the benefits of their privacy practices against any adverse consequences.[27] These assessments would describe the entity’s steps to mitigate potential harms resulting from its algorithms, among other requirements. The bill requires large data holders to submit these AIAs to the Federal Trade Commission and make them available to Congress on request. Similarly, the EU GDPR mandates data protection impact assessments (PIAs) to highlight the risks of automated systems used to evaluate people based on their personal data.[28] The AIAs and the PIAs are similar, but they differ substantially in their scope: While PIAs focus on rights and freedoms of data subjects affected by the processing of their personal data, AIAs address risks posed by the use of nonpersonal data.[29] The EU AI Act further expands the notion of impact assessment to encompass broader risks to fundamental rights not covered under the GDPR. Specifically, the EU AI Act mandates that bodies governed by public law, private providers of public services (such as education, health care, social services, housing, and administration of justice), and banking and insurance service providers using AI systems must conduct fundamental rights impact assessments before deploying high-risk AI systems.[30]

Algorithmic Audits

Whereas the AIA assesses impact, an algorithmic audit is “a structured assessment of an AI system to ensure it aligns with predefined objectives, standards, and legal requirements.”[31] In such an audit, the system’s design, inputs, outputs, use cases, and overall performance are examined thoroughly to identify any gaps, flaws, or risks.[32] A proper algorithmic audit includes definite and clear audit objectives, such as verifying performance and accuracy, as well as standardized metrics and benchmarks to evaluate a system’s performance.[33] In the context of privacy, an audit can confirm that data are being used within the context of the subjects’ consent and the tenets of applicable regulations.[34]

During the initial stage of an audit, the system is documented, and processes are designated as low, medium, or high risk depending on such factors as the context in which the system is used and the type of data it relies on.[35] After documentation, the system is assessed on its efficacy, bias, transparency, and privacy protection.[36] Then, the outcomes of the assessment are used to identify actions that can lower any identified risks. Such actions may be technical or nontechnical in nature.[37]
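
Read as pseudocode, that workflow has a classification stage and an assessment-and-mitigation stage. The sketch below takes the risk factors and the four assessment dimensions from the description above; the 0-to-1 scoring scale, the threshold, and the function names are assumptions.

    from enum import Enum

    class Risk(Enum):
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    def classify(uses_sensitive_data: bool, high_stakes_context: bool) -> Risk:
        """Initial stage: designate risk from the data type and context of use."""
        if uses_sensitive_data and high_stakes_context:
            return Risk.HIGH
        if uses_sensitive_data or high_stakes_context:
            return Risk.MEDIUM
        return Risk.LOW

    def mitigation_actions(scores: dict[str, float],
                           threshold: float = 0.8) -> list[str]:
        """Final stage: turn low-scoring dimensions into follow-up actions."""
        return [f"remediate {dim}" for dim, s in scores.items() if s < threshold]

    # Usage with illustrative scores on the four dimensions named above:
    tier = classify(uses_sensitive_data=True, high_stakes_context=False)
    actions = mitigation_actions({"efficacy": 0.9, "bias": 0.6,
                                  "transparency": 0.85, "privacy_protection": 0.7})
    print(tier, actions)  # Risk.MEDIUM ['remediate bias', 'remediate privacy_protection']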

This notion of algorithmic audit is embedded in the conformity assessment foreseen by the EU AI Act.[38] The conformity assessment is a formal process in which a provider of a high-risk AI system must demonstrate compliance with the requirements for such systems, including those concerning data and data governance. Specifically, the EU AI Act requires that training, validation, and testing datasets be subject to data governance and management practices appropriate for the system’s intended purpose. In the case of personal data, those practices must address the original purpose and origin of the data as well as the data collection processes.[39] Upon completion of the assessment, the entity is required to draft a written EU Declaration of Conformity for each relevant system and to maintain it for ten years.[40]

Mandatory Disclosures

Mandatory AI disclosures are another option for addressing privacy concerns: organizations could be required to disclose publicly when and how they employ AI.[41] For example, the EU AI Act mandates the labeling of AI-generated content. Furthermore, the EU AI Act requires disclosure when people are exposed to AI systems that can assign them to groups or infer their emotions or intentions based on biometric data (unless the system is intended for law enforcement purposes).[42] Legislation introduced in the United States, the AI Labeling Act of 2023, would require that companies properly label and disclose when they use an AI-generated product, such as a chatbot.[43] The proposed legislation also calls for developers of generative AI systems to implement reasonable procedures to prevent downstream use of those systems without proper disclosure.[44]
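
Mechanically, a labeling requirement of this kind reduces to attaching a provenance notice to every AI-generated artifact. A minimal sketch with a hypothetical wrapper function follows; the bill specifies no implementation, and the notice wording is invented.

    def label_ai_output(text: str, system_name: str) -> str:
        """Prepend a plain-language disclosure to AI-generated content."""
        notice = f"[Disclosure: generated by the AI system '{system_name}'.]"
        return f"{notice}\n{text}"

    print(label_ai_output("Your claim has been pre-approved.", "support-bot-v2"))

The downstream-use provision would push the same labeling obligation onto integrators of a generative system, which is what the bill’s “reasonable procedures” language points toward.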

Considerations for Policymakers

As noted in the previous section, some of these options have been proposed in the United States; others have been applied in other countries. A key issue is finding the right balance between regulation and innovation. Industry and wider stakeholder input into regulation may help alleviate concerns that regulations could throttle the development and implementation of the benefits offered by AI. The EU AI Act appears to take this approach in the drafting of the codes of practice for general-purpose AI models: The EU AI Office is expected to invite stakeholders—especially developers of models—to participate in drafting the codes, which will operationalize many EU AI Act requirements.[45] The EU AI Act also has explicit provisions for supporting innovation, especially among small and medium-sized enterprises, including start-ups.[46] For example, the law introduces regulatory sandboxes: controlled testing and experimentation environments under strict regulatory oversight. The specific purpose of these sandboxes is to foster AI innovation by providing frameworks to develop, train, validate, and test AI systems in a way that ensures compliance with the EU AI Act, thus alleviating legal uncertainty for providers.[47]

AI is an umbrella term for various types of technology, from the generative AI that powers chatbots to the neural networks that spot potential credit card fraud.[48] Moreover, AI technology is advancing rapidly and will continue to change dramatically over the coming years. For this reason, rather than focusing on the details of the underlying technology, legislators might consider regulating the outcomes of the algorithms. Such regulatory resiliency may be accomplished by applying the rules to the data that go into the algorithms, the purposes for which those data are used, and the outcomes that are generated.[49]

Article link: https://www.rand.org/pubs/research_reports/RRA3243-2.html

Author Affiliations

Tifani Sadek is a professor at the University of Michigan Law School. From RAND, Karlyn D. Stanley is a senior policy researcher, Gregory Smith is a policy analyst, Krystyna Marcinek is an associate policy researcher, Paul Cormarie is a policy analyst, and Salil Gunashekar is an associate director at RAND Europe.

Notes

  • [1] European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance,” June 13, 2024. Hereafter cited as the EU AI Act, this legislation was adopted by the European Parliament in March 2024 and approved by the European Council in June 2024. As of July 16, 2024, all text cited in this report related to the EU AI Act can be found at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689#d1e5435-1-1
  • [2] Brenna Goth, “Varied Data Privacy Laws Across States Raise Compliance Stakes,” Bloomberg Law, October 11, 2023.
  • [3] White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” October 30, 2023. 
  • [4] U.S. House of Representatives, American Privacy Rights Act, Bill 8818, June 25, 2024. As of July 18, 2024: https://www.congress.gov/bill/118th-congress/house-bill/8818
  • [5] Jenna Burrell, “How the Machine Thinks: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society, Vol. 3, No. 1, June 2016.
  • [6] Sylvia Lu, “Data Privacy, Human Rights, and Algorithmic Opacity,” California Law Review, Vol. 110, 2022. 
  • [7] Markus Langer, “Introducing a Multi-Stakeholder Perspective on Opacity, Transparency and Strategies to Reduce Opacity in Algorithm-Based Human Resource Management,” Human Resource Management Review, Vol. 33, No. 1, March 2023. 
  • [8] Catherine Tucker, “Privacy, Algorithms, and Artificial Intelligence,” in Ajay Agrawal, Joshua Gans, and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda, University of Chicago Press, May 2019, p. 423.
  • [9] Public Law 110-185, Economic Stimulus Act of 2008, February 13, 2008; Sara Green, Line Hillersdal, Jette Holt, Klaus Hoeyer, and Sarah Wadmann, “The Practical Ethics of Repurposing Health Data: How to Acknowledge Invisible Data Work and the Need for Prioritization,” Medicine, Health Care and Philosophy, Vol. 26, No. 1, 2023.
  • [10] Starre Vartan, “Racial Bias Found in a Major Health Care Risk Algorithm,” Scientific American, October 24, 2019. 
  • [11] Tucker, 2019, p. 430.
  • [12] Tucker, 2019, p. 426.
  • [13] Stephen Pastis, “A.I.’s Un-Learning Problem: Researchers Say It’s Virtually Impossible to Make an A.I. Model ‘Forget’ the Things It Learns from Private User Data,” Forbes, August 30, 2023. 
  • [14] Tucker, 2019, p. 427.
  • [15] European Union, General Data Protection Regulation, May 25, 2018 (hereafter, GDPR, 2018); California Civil Code, Division 3, Obligations; Part 4, Obligations Arising from Particular Transactions; Title 1.81.5, California Consumer Privacy Act of 2018; Fabienne Ufert, “AI Regulation Through the Lens of Fundamental Rights: How Well Does the GDPR Address the Challenges Posed by AI?” European Papers, September 20, 2020.
  • [16] White House, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, October 2022, p. 33. 
  • [17] WaTech, “Data Minimization,” webpage, undated. As of June 6, 2024: https://watech.wa.gov/data-minimization
  • [18] U.S. House of Representatives, 2024. 
  • [19] GDPR, 2018.
  • [20] EU AI Act, Recital, Item (69).
  • [21] EU AI Act, Chap. II, Art. 5, para. 1(e).
  • [22] U.S. House of Representatives, 2024. 
  • [23] Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish, “Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts,” paper presented at the ACM Conference on Fairness, Accountability, and Transparency, virtual event, March 3–10, 2021.
  • [24] Metcalf et al., 2021.
  • [25] Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, April 2018.
  • [26] Reisman et al., 2018.
  • [27] U.S. House of Representatives, 2024. 
  • [28] GDPR, 2018.
  • [29] Van Bael & Bellis, “Fundamental Rights Impact Assessment in the EU Artificial Intelligence Act,” March 28, 2024.
  • [30] EU AI Act, Chap. III, Sec. 3, Art. 27, para. 1(a)–(f).
  • [31] Olga V. Mack and Emili Budell-Rhodes, “Navigating the AI Audit: A Comprehensive Guide to Best Practices,” Law.com, October 20, 2023.
  • [32] Mark Dangelo, “Auditing AI: The Emerging Battlefield of Transparency and Assessment,” Thomson Reuters, October 25, 2023.
  • [33] Mack and Budell-Rhodes, 2023.
  • [34] Lynn Parker Dupree and Taryn Willett, “Seeking Synergy Between AI and Privacy Regulations,” Reuters, November 17, 2023. 
  • [35] Joe Davenport, Arlie Hilliard, and Ayesha Gulley, “What Is AI Auditing?” Holistic AI, December 21, 2022.
  • [36] Davenport, Hilliard, and Gulley, 2022.
  • [37] Davenport, Hilliard, and Gulley, 2022. 
  • [38] EU AI Act, Chap. III, Sec. 2, and Chap. III, Sec. 5, Art. 43.
  • [39] EU AI Act, Chap. III, Sec. 2, Art. 10.
  • [40] EU AI Act, Chap. III, Sec. 3, Art. 47, para. 1.
  • [41] Cameron F. Kerry, “How Privacy Legislation Can Help Address AI,” Brookings, July 7, 2023.
  • [42] EU AI Act, Chap. IV, Art. 50, paras. 1–3.
  • [43] U.S. Senate, AI Labeling Act of 2023, Bill 2691, July 27, 2023. 
  • [44] Matt Bracken, “Bipartisan Senate Bill Targets Labels and Disclosures on AI Products,” FedScoop.com, October 25, 2023.
  • [45] EU AI Act, Recital, Item (116).
  • [46] EU AI Act, Recital, Items (138–139).
  • [47] EU AI Act, Recital, Item (139).
  • [48] “Can AI Regulations Keep Us Safe Without Stifling Innovation?” International Association of Privacy Professionals, July 12, 2023.
  • [49] “Can AI Regulations Keep Us Safe Without Stifling Innovation?” 2023.
