healthcarereimagined

Envisioning healthcare for the 21st century


Congress unhappy with DoD, VA health records progress – Military Times

Posted by timmreardon on 04/16/2014
Posted in: Blue Button, DoD, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Innovation, Integrated Electronic Health Records, Military Health System Reform, National Health IT System, Patient Centered Medical Home, Patient Portals, Quadruple Aim, Quality Measures, U.S. Air Force Medicine, U.S. Army, U.S. Navy Medicine, Veterans Affairs, Warrior Transition Units. Leave a comment

By Leo Shane III
Staff writer

Apr. 16, 2014 – 11:22AM

House lawmakers plan to hold back millions of dollars in technology funding from Defense and Veterans Affairs department planners until Congress is convinced they are making progress on developing a way to share electronic medical records.

At an April 9 hearing, members of the House Appropriations Committee approved a fiscal 2015 budget plan that would hold back 75 percent of VA’s requested record system upgrade funds, contingent on the two departments proving that they are close to a seamless medical record system for troops and veterans.

Rep. John Culberson, R-Texas, chair of the committee’s military construction and veterans affairs panel, said that similar language is planned for the defense appropriations and defense authorization bills set for May. Pentagon planners won’t get their full technology request until lawmakers are satisfied they’re addressing the shared records issue.

“If they want their money, they’re going to have to earn it,” he said.

Frustration with the military/veterans records systems has been rising on Capitol Hill since early 2013, when department leaders announced they would abandon plans for a single system that could track individuals from boot camp through their VA care. The price tag for that effort would have approached $30 billion.

But lawmakers noted that the departments had already spent more than $1 billion and several years on the joint system before changing plans, calling into question whether a seamless lifetime military medical record would ever be possible.

Defense and VA officials have repeatedly worked to assure Congress that the departments already are sharing significant amounts of medical information, including a common display format for basics like patient prescriptions, past physician visits and check-up information.

Pentagon officials have promised that having a separate records system from the VA won’t prevent them from sharing files seamlessly. Military medical officials are in the process of seeking proposals for a new multibillion-dollar records system, one that may include elements of the existing VA system.

Rep. Sam Farr, D-Calif., blamed most of the confusion surrounding the issue on the Pentagon’s “unwillingness” to adopt the established VA records system, but said he’s hopeful the withheld funding plan can force a change.

President Obama promised lifetime electronic medical records for service members back in 2009, as part of a host of promised reforms to veterans services. Rep. Sanford Bishop Jr., D-Ga., said that after years of frustration, he hopes the funding plans “have finally gotten the two departments’ attention, and I expect to see some real progress on this soon.”

The House budget proposal for VA would provide about $65 billion in discretionary funding in fiscal 2015, about $1.5 billion above this fiscal year but about $400 million less than what administration officials had requested.

That would include about $173 million in funding to continue work on the department’s Veterans Benefits Management System and $20 million more for digitizing veterans’ paper medical records.

Officials said those funds are needed to help VA stay on track with the plan to eliminate its disability benefits claims backlog in 2015.

Article link: http://www.militarytimes.com/article/20140416/NEWS05/304160047?utm_source=twitterfeed&utm_medium=twitter

ONC Report to Congress – UPDATE ON THE ADOPTION OF HEALTH INFORMATION TECHNOLOGY

Posted by timmreardon on 04/16/2014
Posted in: Blue Button, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, HIPAA, Innovation, Integrated Electronic Health Records, National Health IT System, ONC Reports to Congress. Leave a comment

rtc_adoption_of_healthit_and_relatedefforts

ONC Report to Congress - UPDATE ON THE ADOPTION OF HEALTH INFORMATION TECHNOLOGY

PURPOSE OF REPORT
To accelerate the use of health information technology (IT), Congress passed and President Obama signed into law the Health Information Technology for Economic and Clinical Health (HITECH) Act as part of the American Recovery and Reinvestment Act of 2009. The HITECH Act authorized the Centers for Medicare & Medicaid Services (CMS) to provide financial incentives to eligible hospitals, Critical Access Hospitals (CAHs), and eligible professionals to adopt and meaningfully use certified electronic health record (EHR) technology to improve patient care. The HITECH Act also authorized the Office of the National Coordinator for Health Information Technology (ONC) to establish and administer programs to guide physicians, hospitals, and other key entities as they adopt and meaningfully use certified EHR technology as established in subsequent federal regulations.
Section 13113(a) of the American Recovery and Reinvestment Act of 2009, under Title XIII of Division A (part of the HITECH Act), requires a report to be submitted to Congress no later than two years after the enactment of the law, and annually thereafter. The Secretary of Health and Human Services submitted the first report on January 17, 2012. This report is the annual update to the previous submission.
This report provides: (1) updates on the adoption of health IT; (2) efforts of CMS and ONC to facilitate nationwide adoption and exchange of electronic health information; and, (3) identification and discussion of barriers to the adoption and exchange of electronic clinical data and how ONC’s programs are addressing those barriers.

Despite Success, Future of RECs Uncertain – iHealthBeat

Posted by timmreardon on 04/16/2014
Posted in: Blue Button, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Innovation, Integrated Electronic Health Records, National Health IT System. Leave a comment

by Ken Terry, iHealthBeat Contributing Reporter | Tuesday, April 15, 2014

Article link: http://www.ihealthbeat.org/insight/2014/despite-success-future-of-recs-uncertain

The 62 health IT regional extension centers (RECs) have far exceeded their goal of helping 100,000 providers in small primary care practices attest to the meaningful use of electronic health records. As of March 4, more than 150,000 providers had enrolled with RECs. Of those providers, 90% had gone live on their EHRs and more than 93,000 had demonstrated meaningful use, according to the Office of the National Coordinator for Health IT.

An ONC report to Congress in June 2013 pointed out that nearly half of the providers who received Medicaid EHR incentive payments and a fifth of those who got Medicare incentives had enrolled in RECs. Moreover, Medicare providers who worked with RECs were 2.3 times more likely to receive an EHR incentive payment than those who didn’t.

Based on these statistics, ONC views the REC program as a success. But it’s unclear where the RECs will go from here.

While federal funding for the RECs officially ended on April 15, ONC is allowing the RECs to request “no-cost,” one-year contract extensions that will enable them to use any remaining money in their budgets for specified purposes. At latest count, 55 RECs had asked for these extensions and 39 had received them, although other requests were still pending.

In addition, all of the RECs were required to submit “sustainability plans” before they received their federal grants. These business plans described how the RECs planned to support themselves after their government funds ran out. But some RECs have much more solid plans for the future than others.

How much help RECs across the country will be able to provide to physicians in meeting the goals of meaningful use stages 2 and 3 remains open to question.

“It’s like Stage 1,” said Steven Waldren, senior health care IT strategist at the American Academy of Family Physicians. “Some RECs will be able to do that effectively, and others will struggle.”

To help practices in the later stages of meaningful use, RECs will need a lot of expertise in areas like quality improvement and clinical workflow, Waldren noted. The RECs that are most likely to have those capabilities are those that were most successful in helping providers with meaningful use Stage 1.

Bill O’Byrne — director of NJ-HITEC, New Jersey’s REC — said, “Some of us are going to emerge from this current situation, and we’re going to be very significant players in helping providers become and stay meaningful users.”

RECs’ Track Records Vary

The track records of RECs across the country vary quite a bit, according to observers. Some RECs have worked hard to enroll physicians and to help them choose their EHRs and demonstrate meaningful use. Others have done little more than provide educational materials and conduct webinars.

“Some have not lived up to what I’d consider an acceptable level of performance,” O’Byrne said.

Several RECs have given AAFP members wrong information, Waldren said. Other family doctors have told him that their RECs “made it really easy for them to achieve meaningful use,” he said. RECs that have had trouble finding enough trained technicians “rely on more of a virtual assistance model,” he added.

Mark Anderson, a health IT consultant in Montgomery, Texas, said, “Overall, the REC program was probably good. But maybe 10 of the RECs were great, and 20 were useless.”

The real test of the RECs’ effectiveness, Anderson added, is not how many primary care physicians they enrolled, but how many of those doctors attested to meaningful use. That seems like a fair measure: Some RECs have a far lower ratio between enrollees and meaningful users than the national average of 62%, according to ONC statistics.

On the other hand, HITEC-LA, the Los Angeles REC, has made significant progress in helping its enrollees in a challenging environment, although it has fallen short of its meaningful use goal.

To date, HITEC-LA has enrolled about 4,000 primary care physicians and 1,000 specialists. Of these, 4,000 have gone live on their EHRs and 2,500 PCPs have attested to meaningful use, short of the REC’s goal of 3,000.

Noting that 60% of the REC’s enrollees have large Medicaid practices, HITEC-LA Executive Director Mary Franz said, “We’ve seen the Medicare providers move forward in our population and sign up more aggressively for meaningful use because of the incentives and the penalties. In contrast, the Medicaid incentive program has a 2016 start and a 2021 end, so Medi-Cal providers are moving slower.”

(Eligible professionals who qualify for the Medicaid EHR incentive program don’t have to enroll in it until 2016. In the first year, they only have to adopt, implement or upgrade to a certified EHR to get an incentive payment; they have three years to attest to meaningful use.)

The REC has also had to cope with the fact that most primary care practices in Los Angeles have only one or two doctors. “Small practices and especially Medi-Cal small practices struggle because they don’t have a lot of money,” Franz said.

HITEC-LA Gearing Up for the Future

HITEC-LA has received a no-cost extension from ONC, allowing it to remain an REC until 2014. ONC has also given HITEC-LA a “scope of work change” that lets it use the money left in its budget to provide meaningful use Stage 2 support to doctors.

The $16 million REC grant for HITEC-LA went to L.A. Care, a Medicaid/Medicare managed care company that is HITEC-LA’s parent. (L.A. Care is one of two health plans nationally that received REC grants.) The insurer added 10% to the grant, bringing it up to nearly $18 million. The bulk of that money went to help providers qualify for $55 million in EHR incentives, including meaningful use payments and “adopt/implement/upgrade” payments to Medicaid providers.

As a result of efficient management, HITEC-LA has a substantial amount left in its kitty, Franz said. In addition, it has received $3.5 million in recent grants, including:

  • $1 million from L.A. Care to expand a “virtual communications capability” for a coordination of care initiative;
  • A $200,000 grant from the Blue Shield of California Foundation to advance safety-net innovations; and
  • A 3-year grant of $1.8 million from the Health Resources and Services Administration.

Under the HRSA grant, HITEC-LA will provide technical services to 27 federally qualified health centers to help them attest to meaningful use Stage 2 and set up patient-centered medical homes.

Partially because of its support from L.A. Care, HITEC-LA plans to be around for a while.

“There’s a real possibility we’ll continue forward in some capacity,” Franz said. “Right now, the HITEC-LA program is one of seven different programs that we [L.A. Care] have going.”

New Jersey REC Roars

NJ-HITEC, one of the more successful RECs, has big plans for the future. After enrolling 8,700 doctors, including specialists who were recruited under a separate contract with the New Jersey Medicaid program, the REC helped 5,900 physicians attest to meaningful use Stage 1, including 4,100 PCPs and 1,800 specialists.

NJ-HITEC applied for a no-cost extension and, at press time, O’Byrne expected the REC would receive it soon. But there isn’t much left of its original budget.

“We aggressively used the original funding, which was under $24 million,” O’Byrne explained. “We spent nearly all of that money. But we’re doing the no-cost extension because it will allow us to stay as a REC at least for another year, although I expect we’ll be in business a lot longer than that.”

Since November 2013, NJ-HITEC has been charging physicians who wanted to continue receiving REC services $600 a year.

Waldren said he doubts many physicians across the country would be willing to pay for meaningful use Stage 2 support after receiving no-cost support from RECs for Stage 1. A recent ONC-funded report on the REC program said the same. But NJ-HITEC has signed up more than 1,000 paying members, of whom nearly 500 are PCPs, according to O’Byrne.

In addition, O’Byrne said NJ-HITEC is making money from a data registry that providers can use to do quality reporting for both meaningful use and CMS’ Physicians Quality Reporting System. RECs in Ohio, California and Massachusetts are using the registry under “white label” partnerships with the New Jersey REC, he noted.

NJ-HITEC also has contracts with four accountable care organizations in the Garden State. On behalf of these ACOs, the REC is collecting data in doctors’ offices for quality reporting to CMS and is also working with the physicians and their staffs on practice transformation, showing them how to use the data to improve quality.

NJ-HITEC is also involved in CMS’ Comprehensive Primary Care Initiative, but so far that has been a bust, O’Byrne said. The REC was hired to advise CPCI practices on how to collect quality data, but it has not been asked to do that, he said.

Finally, NJ-HITEC is offering its members Direct secure messaging services, which will help them meet meaningful use stage 2 criteria. Having partnered with a health information service provider, the REC has compiled a directory of Direct addresses for all its members. That allows any user to send secure email and attachments through the NJ-HITEC portal to anybody else with an address in the REC’s directory. NJ-HITEC will not allow the exchange of Direct messages with non-members because they are not paying for the service, O’Byrne said.

Not many RECs could do what NJ-HITEC is doing, according to Waldren. Asked how many RECs will be able to survive in the long run, he replied, “I’d expect that it’s highly variable across the different RECs. Some have aligned themselves with [health information exchanges] or [quality improvement organizations] in their state, and I think those will continue to provide value. Others that are more isolated will struggle to sustain themselves. Until there’s significant payment change, the sustainability for practice transformation is going to be difficult.”

If Doctors Don’t Like Electronic Medical Records, Should We Care? – The Atlantic

Posted by timmreardon on 04/15/2014
Posted in: Blue Button, Emergency Medicine, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Innovation, Integrated Electronic Health Records, National Health IT System, Patient Centered Medical Home, PCMH. Leave a comment

James Fallows | Apr 14, 2014, 1:21 PM ET

“Yes, there are problems in any technology implementation and there always will be. But fewer people die. Yes, it is important to connect with the patient. But fewer people die. Yes, the opportunity to pad billing is obscene. But fewer people die.”

Article link: http://www.theatlantic.com/technology/archive/2014/04/if-doctors-dont-like-electronic-medical-records-should-we-care/360618/

How critics imagine the new record-keeping system. (Wikimedia Commons)

Dr. David Blumenthal, who now is head of the Commonwealth Fund, has been a friend since we both were teenagers. It was a sign of his medical / tech / policy skills that the newly arrived Obama administration put him in charge of encouraging a shift toward use of electronic medical records. It is evidence of his admirably good-humored big-tent personality that David still takes my calls after the many rounds of back-and-forth we’ve posted here in response to his original Q&A in our April issue, about why the shift has been so difficult and taken so long.

For those joining us late, you can check out installments one, two, three, four, and five. Herewith number six, on the particular question of how the non-expert public — those of us who experience the medical system mainly as patients and bill-payers — should assess the opinions of physicians, nurses, and other inside participants. Should we give them more weight, because of their first-hand expertise? Less weight, because of possible institutional bias or blind-spots? Both at once? See for yourself.

First, the concerns of two physicians. One on the West Coast writes:

I am a family practice physician in western Washington state. I have been practicing for 25 years. Ten years ago I was excited about the potential of electronic technology to improve patient care. Today I am profoundly disappointed.

I am currently working in three different EHRs (electronic health records). Two are OK, i.e. allow me to efficiently document a patient visit with clinically relevant data. The other one is cumbersome beyond belief. It is a company with outstanding marketing capability that won over our administrators. It falls far short of meeting the needs of those of us trying to improve patient care. Intrinsically it fails to produce a note useful for other doctors. To achieve that end, I use time-consuming workarounds. Sad, I think.

I believe that primary care is valuable to patients but also has potential to limit costs…

I have included a reference to one of my favorite articles from the New England Journal of Medicine, including the first paragraph of the article:

“It is a widely accepted myth that medicine requires complex, highly specialized information-technology (IT) systems. This myth continues to justify soaring IT costs, burdensome physician workloads, and stagnation in innovation — while doctors become increasingly bound to documentation and communication products that are functionally decades behind those they use in their ‘civilian’ life.”

And from a doctor in Kentucky:

As a 50 y/o it infuriates me when I read that only physicians less than 40 are comfortable with EMR’s because they grew up with them. Well that’s crap. My first computer was a Commodore 64 which I learned to program. I am very familiar with computers and have 4 networked together in my home.

That being said I would agree with Dr. Wait [from this post] in that EMR’s are not ready for primetime. If EMR’s were so great, no one would have to bribe and penalize us to use them. They generate a tidal wave of information. The important data gets lost in the overwhelming volume of mostly useless information. I used to dictate my notes and they would then be scanned into the computer. The note was legible and concise. I could find it anywhere. Then the EMR came. It takes 20 minutes to do what used to take 30 seconds. I get a note that is less than useful. It is full of errors that I can’t correct. Information that others have entered that is clearly wrong that I can’t remove. I no longer try. The only important parts of my notes now are the HPI and the plan. The rest is just garbage.

To give you an example, my EMR won’t let me enter a subtotal hysterectomy in the past surgical history. Even when I supply the correct CPT code, the EMR calls this a total hysterectomy, which is not correct and can lead to errors in determining who needs a Pap smear.

So EMRs remain not ready for primetime. I’m not sure why I can’t continue to dictate and allow the transcriptionist to fill in the EMR. It would work so much better.

Now for a different view, from an informed non-expert. This reader, a physics professor at a university in the South, uses the distinctive phrase of the day to suggest that we apply a discount to complaints from today’s practitioners:

I’ve been reading the back and forth over electronic medical records. It seems the opposition comes, by and large, from doctors. Because why?

Because problems. There’s lots of smoke and mirrors about interconnectivity, about interacting with the computer instead of the patient, about sleazy increased billing but all of that is in service of a single point of view: let’s never change until we can change to something perfect. In other words, the underlying point is “don’t make me change the way I’m used to doing things.”

This all misses the main point. To me, what is of overriding importance is the undeniable fact that ANY system that does NOT rely on the memory of the patient for long-term medical history storage is NECESSARILY a better system no matter how badly it sucks. The VA has proved this over the last couple of decades, as measured by the fact that fewer people die. Better information management beats clever doctoring every time.

Yes, there are problems in any technology implementation and there always will be. But fewer people die. Yes, it is important to connect with the patient. But fewer people die. Yes, the opportunity to pad billing is obscene. But fewer people die. Any large scale IT rollout has problems. The question is do the benefits outweigh the time invested in ironing out those problems. Most of us would say yes because fewer people die. I wonder why physicians are so reluctant to say that? Didn’t they swear an oath or something?

I also wonder how many of these physicians, when directing their gimlet eye to another field such as public education, are equally skeptical of, say, massive online courses or teachers attending to the computer instead of their students, or teaching to the test? I somehow doubt it.

I think when you are the person dealing with a system day after day, it is easy to let your detailed knowledge of its problems overwhelm the vaguer notion of its benefits. You don’t have a direct experience of a patient who didn’t die, but you do have a direct experience of a technical snafu. 

Thanks to experts and non-experts for writing in, and to David Blumenthal for opening this view into a world that affects us all.

Healthcare’s four-letter word? It’s ‘silo’ – mHealth News

Posted by timmreardon on 04/14/2014
Posted in: Blue Button, DoD, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Innovation, Integrated Electronic Health Records, Mobile Healthcare, National Health IT System, Patient Centered Medical Home, Patient Portals. Leave a comment

April 09, 2014 | Eric Wicklund – Editor, mHealthNews

Article link: http://www.mhealthnews.com/news/healthcares-four-letter-word-its-silo?single-page=true

To Patrick Soon-Shiong, “silo” is mHealth’s dirty little four-letter word.

Unfortunately, it’s everywhere. Hospitals, clinics and doctors’ offices all keep close tabs on their information – as do departments within the hospital or health network. Digital health devices and services silo their data, and let’s not even talk about EHR vendors. In short, everyone in healthcare these days has control over a certain subset of data, and they’re not into sharing.

And that, says Soon-Shiong, is what’s keeping healthcare – and mHealth in particular – from evolving.

“We need to create an integrated system that follows a human being through the continuum of life,” he said during a recent interview with mHealth News.

Soon-Shiong knows of what he speaks. A South African-born surgeon, medical researcher, businessman, philanthropist and UCLA professor who holds some 50 patents, he’s executive director of the UCLA Wireless Health Institute, a board member of the California Telehealth Network, chairman of the Chan Soon-Shiong Family Foundation and chairman and CEO of the Chan Soon-Shiong Institute for Advanced Health, the Healthcare Transformation Institute, National LambdaRail and NantWorks. NantWorks, in turn, is the parent company of NantHealth, which is pushing Soon-Shiong’s vision of integrated healthcare through a network of digital, genomic and clinical solutions.

[See also: 3 mHealth startups that might make a difference.]

Soon-Shiong envisions a future healthcare system that connects all the dots, serving a patient throughout his or her life, not just in sickness. He sees that health journey much like a long plane trip, with the consumer and caretaker pulling in data from all sources to plot a course, and adjusting that course as things happen.

“What we are trying to create is a true operating system,” he said. Such a system would encompass clinical decision support, machine learning and “adaptive amplified intelligence” that “integrates pieces of the puzzle” and “gives you inputs … so that you can manage outputs.”

The EMR, he added, might be a part of that solution, but not the whole solution. He sees it as “basically a flight log” that needs to be tapped for information at times.

Soon-Shiong, long considered one of the most influential entrepreneurs in healthcare, said he saw some evidence of progress at this year’s HIMSS14 conference and exhibition – though he also pointed out that he saw a lot of the same things he’s seen in the past six or seven conferences. Namely, he sees vendors showing off systems that work great, but don’t get along with each other.

He also saw a trend that isn’t a favorite of his: mHealth devices that over-emphasize the collection of vital signs and real-time transmission to healthcare providers. Healthcare, he said, “has to break the rule of capturing vital signs at all times” and focus more on gathering data and identifying trends.

Likewise, Soon-Shiong said healthcare isn’t being held back by technology. “The barriers technologically don’t exist any longer,” he said. Instead, healthcare is falling behind other industries like banking and entertainment because it isn’t using the technology properly – and that’s a workflow management problem.

“It’s a difficult concept to envision,” he said, “because nobody’s taken the trouble of taking each of these siloed pieces and integrating them into a single healthcare system.”

Soon-Shiong concluded that much of healthcare is resistant to wholesale change, and that fear should be used to our advantage.

“Change management is the next challenge,” he said.

Related articles: 

Pros and cons of Kaiser’s ambitious telehealth efforts

Buyers’ guide to mobile ICD-10 apps

3 trends shaping telehealth

Lack of 2014 Certified EHRs – EMR & HIPAA

Posted by timmreardon on 04/14/2014
Posted in: Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, HIPAA, Innovation, Integrated Electronic Health Records, National Health IT System, PCMH. Leave a comment

Article link: http://www.emrandhipaa.com/emr-and-hipaa/2014/04/11/lack-of-2014-certified-ehrs/

I was asked recently by an EHR vendor about the disconnect between the number of 2011 Certified EHRs and the number of 2014 Certified EHRs. I haven’t looked through the ONC-CHPL site recently, but you can easily run the numbers on certified EHR vendors there. Of course, there’s a major difference in the number of 2011 Certified EHRs versus 2014 Certified EHRs. However, I don’t think it’s for the reason most people give.

Every EHR vendor that gets 2014 Certified likes to proclaim that they’re one of the few EHR vendors that was “able” to get 2014 Certified. They like to point to the vast number of EHRs that haven’t bridged from being 2011 Certified to being 2014 Certified as a sign that their company is special because they were able to complete the “more advanced” certification. While no one would argue that the 2014 Certification takes a lot more work, I think it’s misleading for EHR companies to proclaim themselves victors because they’re “one of the few” EHR vendors to be 2014 Certified.

First of all, there are over 1,000 2014 Certified EHR products on ONC-CHPL as of today, and hundreds of them (223 to be exact – 29 inpatient and 194 ambulatory) are even certified as complete EHRs. Plus, I’ve heard from EHR vendors and certifying bodies that there’s often a delay in ONC putting the certified EHRs up on ONC-CHPL. So, how many more are 2014 Certified but aren’t on the list yet?

Another issue with this number is that there is still time for EHR vendors to finish their 2014 EHR certification. Yes, we’re getting close, but no doubt we’ll see a wave of last-minute EHR certifications from EHR vendors. It’s kind of like those of you reading this who are sitting on your taxes: there will be a rush of tax filings in the next few days. It’s not a perfect comparison, since EHR certification is more complex and there are a limited number of EHR certification slots from the ONC-ATCBs, but be sure there are some waiting until the last minute.

It’s also worth considering that I saw one report that talked about the hundreds (or it might have been thousands) of 2011 Certified EHRs that never actually had any doctors attest using their software. If none of your users actually attested using your EHR software, then would it make any business sense to go after the 2014 EHR certification? We can be sure those will drop out, but I expect that a large majority of these aren’t really “EHR” software in the true sense. They’re likely modularly certified add-ons to EHR software.

To date, I know of only one EHR company that has come out and shunned 2014 Certified EHR status. I’m sure we’ll see more than just this one before the deadline, but my guess is that 90% of the market (i.e., actual EHR users) already has 2014 Certified EHR software available to them, and 99% of the market will have 2014 Certified EHR software available to them by the deadline if they want it.

I don’t think 2014 EHR certification is going to be a differentiating factor for any of the major EHR players. All the major players realize that being 2014 Certified is essential to their livelihood and a cost of doing business.

Of course, the same can’t be said for doctors. There are plenty of ways for doctors to stay in business while shunning 2014 Certified EHR software and meaningful use stage 2. I’m still really interested to see how that plays out.

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 15 blogs containing almost 5000 articles with John having written over 2000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 9.3 million times. John also recently launched two new companies: InfluentialNetworks.com and Physia.com, and is an advisor to docBeat. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and Google Plus. Healthcare Scene can be found on Google+ as well.

Related Posts

  1. Exposing the Jabba the Hutt EHRs and Finding the Han Solo EHRs
  2. CCHIT Certified EHR Becoming ARRA Certified EHR
  3. Preliminary ARRA Certified and CCHIT Certified
  4. Switching EHRs Mid-Stream – Meaningful Use Monday
  5. Six 2014 Healthcare IT, EMR, and HIPAA Predictions

Officials: Military Medicine Must Support Readiness – American Forces Press Service

Posted by timmreardon on 04/09/2014
Posted in: Blue Button, DoD, Emergency Medicine, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Innovation, Integrated Electronic Health Records, Military Health System Reform, National Health IT System, Patient Centered Medical Home, PCMH, Quadruple Aim, Quality Measures, U.S. Air Force Medicine, U.S. Army, U.S. Navy Medicine, Warrior Transition Units. Leave a comment

By Terri Moon Cronk
American Forces Press Service at http://www.defense.gov/news/newsarticle.aspx?id=121998

WASHINGTON, April 7, 2014 – U.S. national security and defense strategies must be supported by a strong, forward-leaning Military Health System, the Defense Department’s top physician told Congress last week.

At an April 2 hearing of the House Appropriations Committee’s defense subcommittee, Dr. Jonathan Woodson, assistant secretary of defense for health affairs, said DOD’s request for fiscal year 2015 health program funding supports the department’s health care goals and the Military Health System’s “quadruple aim” of increased readiness, better health and better care at lower cost.

“We are committed to sustaining the medical readiness of our forces, the clinical skills of our medical forces and the world-class treatment and rehabilitation for those who fight battles today, yesterday and tomorrow,” Woodson said.

The Military Health System performed well in 13 years of war, achieving historic outcomes in reducing the rate of disease and nonbattle injury in combat, and increasing the rate of war-wound survival, he noted. And while he’s proud of those outcomes, he added, he and the service surgeons general have developed six lines of effort for the Military Health System to support Defense Secretary Chuck Hagel’s priorities and meet the health care mission amid changing threats and limited resources.

Those lines of effort, Woodson said, would:

— Modernize the Military Health System’s management with an enterprise focus;

— Define and resource the medical capabilities and manpower needed in the 21st century;

— Invest in and expand strategic partnerships;

— Assess the balance of the medical force structure;

— Modernize the TRICARE health program; and

— Define the Military Health System’s global health engagement requirements.

Focusing on two efforts that directly relate to the fiscal 2015 health program budget request, Woodson addressed the newly stood-up Defense Health Agency and modernization of the TRICARE program.

“The Defense Health Agency, a combat support agency, is an important first step in modernizing our common business and clinical practices with accountability for performances both to the assistant secretary of defense for health affairs and the chairman of the Joint Chiefs of Staff,” he said. “We have made substantial progress in achieving savings earlier than projected as we consolidated functions and we reduced redundancy.”

By modernizing and simplifying TRICARE, incentives would exist to increase beneficiaries’ wellness, decrease overuse and allow them to choose providers, he said. While the TRICARE proposal includes “modest increases” in beneficiaries’ out-of-pocket costs, he said, the high quality of care would not be affected.

“The TRICARE benefit will remain one of the most comprehensive benefits in this country, and it will modernize the program for the first time in many years,” Woodson said. “I believe this proposed budget meets the test.”

The service surgeons general also testified at the hearing.

“The health and the readiness of our Army are inseparable, because health is a critical enablement to readiness,” Lt. Gen. Patricia D. Horoho, Army surgeon general, told the House panel. “Today, we’re beginning to see results in readiness, in health and cost savings. Through our service lines and standardization of processes across the medical command, we have synchronized our policy, program and resources, and we’re starting to see some very strong results.”

But this is a time of “hard conversations and very tough choices,” she noted.

“For the first time, we are decreasing the size of our Army before the longest war in our nation’s history has ended,” Horoho said. “We are poised to transition to the interwar years, and we must work aggressively to sustain our combat-care skills, nurture an environment of dignity and respect, and maintain trust with the American people.”

Adding that today is a time of challenge and opportunity, she said the nature of war always will create medical threats.

“Our job is to be ready whenever and wherever,” the Army surgeon general said. “Anything less will cost lives, and this is not going to happen on my watch. We live in uncertain times. One thing is certain: a healthy, resilient and ready Army will be — as it always has been — the strength of our nation.”

Similarly, Navy Vice Adm. (Dr.) Matthew L. Nathan, Navy surgeon general, said Navy medicine “is mission-ready in delivering world-class care anywhere, any time.” By supporting the operational missions of the Navy and Marine Corps, he told committee members, Navy medicine must be an agile, expeditionary medical force that is “capable of meeting the demands of crisis response in global maritime security.”

“These are transformational times in military medicine,” he noted. “There is much work ahead as we navigate the important challenges … to keep our sailors and Marines healthy, maximize the value for all our patients, and leverage our joint opportunities. I am encouraged with the progress we have made. I am not yet satisfied.”

The Navy continues to look for ways to improve and remain on the forefront of delivering world-class health care anywhere and at any time, Nathan said.

Lt. Gen. (Dr.) Thomas W. Travis, Air Force surgeon general, told the panel that an eye on the future is essential. “With this war winding down, even with our fiscal challenges, we now have a clear responsibility to make sure our military medics are well-trained and well-prepared for whatever contingency the future brings, to include combat operations, stability operations, humanitarian assistance or disaster relief,” he said.

To enhance Air Force core competency in air evacuation missions, medical providers must continue to have “robust opportunities to practice their skills,” he added, “and [we must] continue to pursue critical research and modernization initiatives for the future.”

As the way the nation fights wars evolves, the way medical support for operators is provided must also evolve, Travis said.

“Airmen who are manning systems such as distributed common ground stations, space and cyber operations or remotely piloted aircraft, and those who operate outside the wire, such as security forces, special ops and explosive ordnance disposal specialists, all face distinct challenges,” he said. “The types of injuries or stressors, both visible and invisible, to members and their families are also changing.”

Medical support must be provided in different ways than in the past to address “an expanding definition of operator,” and military medicine must step up to its role as human performance practitioners, he added. “Not only will access to care be more customized for the mission, but so will prevention,” he said.

Travis said he’s never seen a time when it’s been more evident how important military medicine is to the nation’s operational capability.

“We’ve learned much, and our medics have performed magnificently,” he told the panel. “Even in the face of budget challenges, we have to be as ready at the beginning of the next war as we are now with the end of the current war. I think our nation expects that.”

(Follow Terri Moon Cronk on Twitter: @MoonCronkAFPS)

Biographies:
Dr. Jonathan Woodson
Army Lt. Gen. Patricia D. Horoho
Navy Vice Adm. (Dr.) Matthew L. Nathan
Air Force Lt. Gen. (Dr.) Thomas W. Travis

Related Sites:
Military Health System
Defense Health Agency
TRICARE
Fiscal Year 2015 TRICARE Budget Proposal

Fatal Error – The Healthcare Blog

Posted by timmreardon on 04/08/2014
Posted in: Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Innovation, Integrated Electronic Health Records, National Health IT System, Patient Centered Medical Home, Patient Portals, PCMH. Leave a comment

By Rob Lamberts, MD
Article link: http://thehealthcareblog.com/blog/2014/04/07/fatal-error/

Fatal Error

The janitor approached my office manager with a very worried expression. “Uh, Brenda…” he said, hesitantly.

“Yes?” she replied, wondering what janitorial emergency was looming in her near future.

“Uh…well…I was cleaning Dr. Lamberts’ office yesterday and I noticed on his computer….”  He cleared his throat nervously, “Uh…his computer had something on it.”

“Something on his computer? You mean on top of the computer, or on the screen?” she asked, growing more curious.

“On the screen. It said something about an ‘illegal operation.’ I was worried that he had done something illegal and thought you should know,” he finished rapidly, seeming grateful that this huge weight had been lifted.

Relieved, Brenda laughed out loud, reassuring him that this “illegal operation” was not the kind of thing that would warrant police intervention.

Unfortunately for me, these “illegal operation” errors weren’t without consequence.  It turned out that our system had something wrong at its core, eventually causing our entire computer network to crash, giving us no access to patient records for several days.

The reality of computer errors is that the deeper the error is — the closer it is to the core of the operating system — the wider the consequences when it causes trouble.  That’s when the “blue screen of death” or (on a mac) the “beach ball of death” show up on our screens.  That’s when the “illegal operation” progresses to a “fatal error.”

The Fatal Error in Health Care 

Yeah, this makes me nervous too.

We have such an error in our health care system. It’s absolutely central to nearly all care that is given, at the very heart of the operating system. It’s a problem that increased access to care won’t fix, that repealing the SGR or forestalling ICD-10 won’t help.

It’s a problem with something that starts at the very beginning of health care itself.

The health care system is not about health.

Yes, the first word, “health,” is inaccurate. Our system is built to address the opposite of health, sickness: it exchanges money for addressing illness. The clinician is paid for matching diagnosis with procedure (ICD for CPT, in code). Economically, more (or more serious) diagnoses and more (or more complex) procedures result in more pay.

Last I checked, more/more serious diagnoses and more/more complex procedures are not in the definition of “health.”

So is this just a case of bad nomenclature, or of not wanting to use the term “sick care system” for PR reasons? What does it matter what it’s called? The problem is that health is what the patient wants (although it’s hard to call someone a “patient” if they are healthy), but the system does nothing to help people reach this goal.

In fact, our system (as constructed) seems to be designed to discourage providers from helping people toward the goal of health. After all, the system itself becomes unnecessary in the presence of health.

Getting What We Pay For

So what do you get from such a backward system, one that rewards the outcomes people are supposed to avoid?  You get what you pay for:

  • A premium is placed on making diagnoses, since they are rewarded.
    • Unnecessary tests are done to “fish” for problems to treat.  I got my vitamin D level drawn at my last doctor’s visit, but was not displaying any symptoms/signs of a deficiency and know of no evidence that treating it in someone like me would do any good.  To what end do I have this diagnosis?  I am not sure.
    • New diseases are created to promote intervention. “Low T” syndrome is a perfect example of this, rewarding not only the provider (by adding complexity to the visit) and the lab (for the test run to make the diagnosis), but also the drug company that brought the “disease” to the public consciousness.
  • The likelihood of a person being considered “healthy” is much less.
    • Obesity, depression, poor attention at school, social maladjustment – things that used to be considered different points along the range of normal human existence – are now classified as diseases. Risk factors, such as high cholesterol, are made into diseases to be treated. The end result is a diagnosis for everyone.
    • Overdiagnosis leads to overtreatment with medications that themselves can cause problems (which is rewarded by increased pay for doctors, hospitals, drug companies, etc).
  • Little effort is made to do things that would lead to health.
    • Spending more time/resources on people to educate them about their health is bad business, as it decreases the number of diagnoses and procedures a clinician can do in the course of the day.
    • Since there is no motivation to prevent little problems from becoming big ones, they tend to be neglected.  Patients often report the need to be “sick enough” to go to the doctor’s office, and seem embarrassed when their concerns are found to be “nothing serious.”

Why Payors Won’t Change

So why don’t payors just stop paying for unnecessary medications, tests, and procedures for invented diagnoses? They did once, actually. Back in the early days of HMOs, when most doctors and patients were used to getting any medication, test, and procedure without question, the payors changed: they stopped paying for everything.

“No, sir, you don’t need an MRI scan for back pain.” “No, ma’am, you don’t need the brand name drug that costs 20 times more.” This attempt to control cost was met not with praise, but with the demonization of payors by both doctors and patients.

Insurance companies quickly became public enemy #1, said to be denying care to those in need.

In reality, they were not denying care; they were simply refusing to pay for it.  Patients could get the MRI or brand medicine if they wanted, they’d just have to pay for it themselves.  But that wasn’t in the discussion.

In the end, they did what every God-fearing person does with a problem they don’t want: they passed the buck.  Instead of refusing to pay for unnecessary procedures, they did two things:

  1. Required authorization by providers – this meant that the denial was because of the provider’s inability to justify it, not the payor’s unwillingness to pay for it.
  2. Started penalizing/reporting “bad” providers – this started with the use of “pay for performance,” and has come to full fruition recently by the “transparency” movement, where doctors’ and hospitals’ utilization are publicly reported.

The analogy I’ve used in the past is that of an alcoholic who blames their spouse for their inability to control their drinking. “If only those damn doctors would stop ordering those unnecessary tests and prescribing those unnecessary drugs, I wouldn’t have the need to irresponsibly pay for them.”

Rethinking Reform

The root financial arrangement in the health care system is to promote more: more diagnosis, more disease, more tests, more interventions, and more medications, with each of these being rewarded with more revenue.  It seems the obvious cause of our out-of-control spending – spending which does not yield better health.

Attempts to reform the system have ignored this root problem, instead focusing on other things:

  • Improving access to care (a la the ACA) – which addresses the real problem of uninsured/underinsured people, but ignores the fact that care became inaccessible for a reason: it costs too much.
  • Measuring the care of providers and hospitals, attempting to manipulate them into reducing the cost of their care. The HITECH Act (and our old pal “meaningful use”) does this via computerizing and capturing the data of clinicians, as do the ACOs (accountable care organizations) for hospital systems. While there is a small shift of financial incentives in these arrangements, they greatly increase the complexity of the system, creating huge areas of spending that did not previously exist (yes, I am talking about the EMR companies, with Epic at their head).
  • Changing who is in charge – either by privatizing Medicare and Medicaid or by going to a single-payor system.  If a ship is sinking, the priority is to fix the hole, not to change captains.

Warning!  This is where I get on my soap box.

For any solution to have a real effect, this core problem must be addressed.  The basic incentive has to change from sickness to health.  Doctors need to be rewarded for preventing disease and treating it early. Rewards for unnecessary tests, procedures, and medications need to be minimized or eliminated.  This can only happen if it is financially beneficial to doctors for their patients to be healthy.

What a coincidence!  That’s what my new practice does!  Who’d have thought it? The healthier my patients are, the less of me they need and the larger my patient panel can get.  I am motivated to keep problems small, to avoid complexity, and to think in terms of true prevention rather than the invention of diseases.

Obviously, the system still must address the inevitable/unpreventable medical problems that arise despite my best efforts to prevent them.  This is where the high-deductible plans come in: covering problems that the patient cannot afford.

Yet my job will always be to prevent patients from spending that deductible, wherever possible, avoiding unnecessary tests, medications, or ER visits. Why? Because in doing so I justify the monthly payment. It turns out that this is not very hard.

This is a win/win/win, as patients are healthier, I make more money, and insurance companies don’t have to pay for nearly as much.

The Bottom Line

Any significant change, whatever the means, won’t happen until there is an even more basic shift, a shift in the very center of health care: we must focus again on people. The patient (or the person trying to avoid becoming a patient) has moved from the center of the health care transaction and has become the raw material for what we call “health care.”

The doctor/hospital needs the patient to generate the codes necessary to be paid by the payor, which is the bottom-line reason for our problems. A system that has incentives to create disease and procedures will be satisfied (and even happy) with a lack of health. But a system which rewards health will be radically different.

Changing the focus of care to this is more than just emotional idealism, it is good business.  Care should not be about codes, procedures, medications, tests, or interventions, but instead about helping people live their lives with as few problems as possible.  We need an economy that thrives when the patient costs the system less.

Any attempt to reform without this change will ultimately fail.

Rob Lamberts, MD (@doc_rob) is a primary care physician practicing somewhere in the southeastern United States. He blogs regularly at More Musings (of a Distractible Kind), where this post first appeared. 

Related Posts

  • The Death of SGR Reform
  • Bigger Hospitals Mean Bigger Hospitals with Higher Prices. Not Better Care.
  • Really Meaningful Use

Eight (No, Nine!) Problems With Big Data – NY Times

Posted by timmreardon on 04/08/2014
Posted in: Big Data, Global Standards, Health Care Costs, Health Care Economics, Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Integrated Electronic Health Records, National Health IT System, Open Data, Patient Centered Medical Home, Quality Measures. Leave a comment

By GARY MARCUS and ERNEST DAVIS | April 6, 2014

Article link: http://www.nytimes.com/2014/04/07/opinion/eight-no-nine-problems-with-big-data.html?_r=0

BIG data is suddenly everywhere. Everyone seems to be collecting it, analyzing it, making money from it and celebrating (or fearing) its powers. Whether we’re talking about analyzing zillions of Google search queries to predict flu outbreaks, or zillions of phone records to detect signs of terrorist activity, or zillions of airline stats to find the best time to buy plane tickets, big data is on the case. By combining the power of modern computing with the plentiful data of the digital era, it promises to solve virtually any problem — crime, public health, the evolution of grammar, the perils of dating — just by crunching the numbers.

Or so its champions allege. “In the next two decades,” the journalist Patrick Tucker writes in the latest big data manifesto, “The Naked Future,” “we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference.” Statistical correlations have never sounded so good.

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. “Jeopardy!” champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.
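A minimal sketch of this point in Python (the numbers below are invented placeholders, not the actual murder-rate or browser-share figures): any two series that merely trend in the same direction over the same years will show a near-perfect Pearson correlation, and the statistic alone says nothing about whether one has anything to do with the other.

```python
# Hypothetical, steadily declining series standing in for the two trends
# described above; the values are illustrative only.
import numpy as np

murder_rate = np.array([5.8, 5.7, 5.4, 5.0, 4.8, 4.7])       # per 100,000, 2006-2011 (made up)
ie_share    = np.array([0.68, 0.62, 0.55, 0.48, 0.43, 0.39])  # browser market share (made up)

r = np.corrcoef(murder_rate, ie_share)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to +1.0 simply because both series fall over time
```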

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.

Third, many tools that are based on big data can be easily gamed. For example, big data programs for grading student essays often rely on measures like sentence length and word sophistication, which are found to correlate well with the scores given by human graders. But once students figure out how such a program works, they start writing long sentences and using obscure words, rather than learning how to actually formulate and write clear, coherent text. Even Google’s celebrated search engine, rightly seen as a big data success story, is not immune to “Google bombing” and “spamdexing,” wily techniques for artificially elevating website search placement.
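To make the gaming problem concrete, here is a toy scorer (not modeled on any real grading product) that rewards exactly the two surface features mentioned above, sentence length and word length; padding an essay with long, obscure words beats writing short, clear sentences.

```python
# Toy essay scorer: longer sentences and longer words earn higher scores,
# so deliberately inflated prose outranks clear prose.
def naive_score(essay: str) -> float:
    sentences = [s for s in essay.split(".") if s.strip()]
    words = essay.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)   # words per sentence
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return avg_sentence_len + 2 * avg_word_len

clear = "Big data finds patterns. It cannot judge which patterns matter."
gamed = ("Notwithstanding multitudinous perspicacious ratiocinations, voluminous "
         "heterogeneous informational aggregations perpetually necessitate "
         "supplementary hermeneutical contextualization")

print(naive_score(gamed) > naive_score(clear))  # True: padding wins
```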

Fourth, even when the results of a big data analysis aren’t intentionally gamed, they often turn out to be less robust than they initially seem. Consider Google Flu Trends, once the poster child for big data. In 2009, Google reported — to considerable fanfare — that by analyzing flu-related search queries, it had been able to detect the spread of the flu as accurately as, and more quickly than, the Centers for Disease Control and Prevention. A few years later, though, Google Flu Trends began to falter; for the last two years it has made more bad predictions than good ones.

As a recent article in the journal Science explained, one major contributing cause of the failures of Google Flu Trends may have been that the Google search engine itself constantly changes, such that patterns in data collected at one time do not necessarily apply to data collected at another time. As the statistician Kaiser Fung has noted, collections of big data that rely on web hits often merge data that was collected in different ways and with different purposes — sometimes to ill effect. It can be risky to draw conclusions from data sets of this kind.

A fifth concern might be called the echo-chamber effect, which also stems from the fact that much of big data comes from the web. Whenever the source of information for a big data analysis is itself a product of big data, opportunities for vicious cycles abound. Consider translation programs like Google Translate, which draw on many pairs of parallel texts from different languages — for example, the same Wikipedia entry in two different languages — to discern the patterns of translation between those languages. This is a perfectly reasonable strategy, except for the fact that with some of the less common languages, many of the Wikipedia articles themselves may have been written using Google Translate. In those cases, any initial errors in Google Translate infect Wikipedia, which is fed back into Google Translate, reinforcing the error.

A sixth worry is the risk of too many correlations. If you look 100 times for correlations between two variables, you risk finding, purely by chance, about five bogus correlations that appear statistically significant — even though there is no actual meaningful connection between the variables. Absent careful supervision, the magnitudes of big data can greatly amplify such errors.
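
To see the arithmetic, consider a minimal Python sketch (illustrative, not from the op-ed, using numpy and scipy) that runs 100 correlation tests on pure noise; at the conventional p < 0.05 threshold, roughly five of them will look "significant" by chance alone:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    false_positives = 0
    for _ in range(100):
        x = rng.normal(size=500)           # two unrelated random variables
        y = rng.normal(size=500)
        r, p_value = stats.pearsonr(x, y)  # test their correlation
        if p_value < 0.05:
            false_positives += 1

    print(false_positives)  # typically about 5 bogus "significant" results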

Seventh, big data is prone to giving scientific-sounding solutions to hopelessly imprecise questions. In the past few months, for instance, there have been two separate attempts to rank people in terms of their “historical importance” or “cultural contributions,” based on data drawn from Wikipedia. One is the book “Who’s Bigger? Where Historical Figures Really Rank,” by the computer scientist Steven Skiena and the engineer Charles Ward. The other is an M.I.T. Media Lab project called Pantheon.

Both efforts get many things right — Jesus, Lincoln and Shakespeare were surely important people — but both also make some egregious errors. “Who’s Bigger?” claims that Francis Scott Key was the 19th most important poet in history; Pantheon has claimed that Nostradamus was the 20th most important writer in history, well ahead of Jane Austen (78th) and George Eliot (380th). Worse, both projects suggest a misleading degree of scientific precision with evaluations that are inherently vague, or even meaningless. Big data can reduce anything to a single number, but you shouldn’t be fooled by the appearance of exactitude.

FINALLY, big data is at its best when analyzing things that are extremely common, but often falls short when analyzing things that are less common. For instance, programs that use big data to deal with text, such as search engines and translation programs, often rely heavily on something called trigrams: sequences of three words in a row (like “in a row”). Reliable statistical information can be compiled about common trigrams, precisely because they appear frequently. But no existing body of data will ever be large enough to include all the trigrams that people might use, because of the continuing inventiveness of language.


To select an example more or less at random, a book review that the actor Rob Lowe recently wrote for this newspaper contained nine trigrams such as “dumbed-down escapist fare” that had never before appeared anywhere in all the petabytes of text indexed by Google. To witness the limitations that big data can have with novelty, Google-translate “dumbed-down escapist fare” into German and then back into English: out comes the incoherent “scaled-flight fare.” That is a long way from what Mr. Lowe intended — and from big data’s aspirations for translation.

Wait, we almost forgot one last problem: the hype. Champions of big data promote it as a revolutionary advance. But even the examples that people give of the successes of big data, like Google Flu Trends, though useful, are small potatoes in the larger scheme of things. They are far less important than the great innovations of the 19th and 20th centuries, like antibiotics, automobiles and the airplane.

Big data is here to stay, as it should be. But let’s be realistic: It’s an important resource for anyone analyzing data, not a silver bullet.

Gary Marcus is a professor of psychology at New York University and an editor of the forthcoming book “The Future of the Brain.” Ernest Davis is a professor of computer science at New York University.

A version of this op-ed appears in print on April 7, 2014, on page A23 of the New York edition with the headline: Eight (No, Nine!) Problems With Big Data.

Why a Deterministic, not Probabilistic, Patient ID is needed – matching patient EHR data is not just a Consumer Catalog delivery mistake; it can be life-threatening

Posted by timmreardon on 04/08/2014
Posted in: Health IT adoption, Health Outcomes, Healthcare Delivery, Healthcare Informatics, Healthcare Security, Integrated Electronic Health Records, National Health IT System, Patient Centered Medical Home, Patient Portals. Tagged: Patient Identity Matching. Leave a comment

The Science and Art of Customer Matching for MDM

Article link: http://www.information-management.com/news/the-science-and-art-of-customer-matching-for-mdm-10025560-1.html?zkPrintable=1&nopagination=1

What do a consumer catalog in someone else’s name delivered to your door, a spelling error and mathematical set theory have in common? The answer is that all are factors affecting the quality of a customer master data management system. It should go without saying that a critical success factor of any business is to know its customer. The proliferation of enterprise software and the need to link data from different systems have placed a renewed spotlight on MDM solutions and, more specifically, customer MDM solutions.

The best customer MDM systems do not exist in a vacuum. They are continually updated with the latest and greatest data available, whether that be from a customer change request, an internal CRM system or a partner data feed. But in order for this data to be meaningful and accurate, it must be integrated with existing data so as not to create duplicates or apply updates to the wrong record. The challenge with customer MDM is that names are not unique. In addition, people may change their names and customers may change addresses. In the absence of a unique identifier (e.g., Social Security number), what mechanisms exist for matching this type of data? How can the savvy IT professional be certain that customer additions are genuine and not just a clever duplicate?

Fortunately, principles from mathematics and science have been applied to data matching. Data matching is one of those concepts the human brain finds astonishingly easy but that is actually quite difficult for a computer program to reproduce. Consider the records in Figure 1:

Are the values for name, address and ZIP Code exactly the same? Definitely not. But could these records be referring to the same customer? Most certainly. This is the essence of “fuzzy matching”: searching for record linkages when two values are not 100 percent identical.

Fuzzy Matching Toolbox

Many database systems and open source APIs offer a range of comparison tools to perform fuzzy matching. These tools compare two data elements and return a value (or score) that indicates the confidence or likelihood that the two values refer to the same entity. The best fuzzy matching comparison functions account for variances in spelling or form, so that minor differences are still rated as a likely match while major differences flag a low match probability.

Let us take a brief look at the major fuzzy matching algorithms:

Soundex / Metaphone — Soundex is a phonetic matching algorithm developed in the early 20th century and used in the United States Census to group together surnames that sound the same in English but are spelled differently. Soundex classifies words by the first letter of the word and assigns a three-digit code for the remaining consonants. For instance, “Smith,” “Smithe” and “Smyth” are all represented by the code “S530.” Soundex was a good first step at fuzzy matching of names, but it had its weaknesses. Later variations of phonetic matching, most notably Metaphone, sought to extend the capability of Soundex by accounting for more variations in English spellings, pronunciations and foreign words. Later versions of Metaphone have been used in spell check programs and search engine tools.
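
A simplified Soundex sketch in Python makes the grouping concrete (the handling of H and W is abbreviated relative to the full census rules):

    # Simplified Soundex: keep the first letter, map remaining consonants to
    # digits, drop vowels/H/W/Y, collapse adjacent duplicate codes, pad to 4 characters.
    CODES = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}

    def soundex(name: str) -> str:
        name = name.upper()
        coded = [CODES.get(c, "") for c in name]
        result, prev = name[0], coded[0]
        for code in coded[1:]:
            if code and code != prev:
                result += code
            prev = code
        return (result + "000")[:4]

    print(soundex("Smith"), soundex("Smithe"), soundex("Smyth"))  # S530 S530 S530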

Levenshtein — The Levenshtein distance algorithm, named after its creator, Vladimir Levenshtein, provides a mathematical representation of the similarity (or lack thereof) between two string values by computing the minimum number of single-character changes (insertions, deletions or substitutions) required to turn one value into the other. For example, the distance between the words “cat” and “hats” is 2: one character substitution (c for h) and one character addition (s).
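
A compact dynamic-programming sketch of the distance computation (a standard formulation, shown here in Python):

    # Levenshtein distance: minimum insertions, deletions and substitutions
    # needed to turn string a into string b, computed row by row.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("cat", "hats"))  # 2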

Jaccard — Often referred to as the Jaccard similarity coefficient, this mathematical equation computes the similarity of data sets by measuring the size of the intersection divided by the size of the union. A complementary measurement to the Jaccard similarity coefficient is the Jaccard distance, which computes the dissimilarity between two values (by subtracting the similarity coefficient from 1).
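
In code, both measures reduce to simple set arithmetic; a minimal Python sketch with illustrative token sets:

    # Jaccard similarity = |A intersect B| / |A union B|; Jaccard distance = 1 - similarity.
    def jaccard_similarity(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 1.0

    tokens_a = set("101 main street".split())
    tokens_b = set("101 main st".split())
    similarity = jaccard_similarity(tokens_a, tokens_b)
    print(similarity, 1 - similarity)  # 0.5 0.5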

Longest Common Subsequence — For a string value, a subsequence is a combination of characters (or character placeholders) that appears in left-to-right order. The characters do not have to appear in consecutive positions. Some subsequences of the word DATA are DAT, ATA, A_T and __T (with “_” marking a skipped position). The longest common subsequence of two string values is then the maximum-length subsequence they share.
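
A standard dynamic-programming sketch for the longest common subsequence length:

    # Longest common subsequence length between two strings.
    def lcs_length(a: str, b: str) -> int:
        table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, start=1):
            for j, cb in enumerate(b, start=1):
                if ca == cb:
                    table[i][j] = table[i - 1][j - 1] + 1
                else:
                    table[i][j] = max(table[i - 1][j], table[i][j - 1])
        return table[-1][-1]

    print(lcs_length("DeBray", "De Bray"))  # 6: all of "DeBray" appears in order in both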

Trigram — The trigram model is another tool for the fuzzy matching of string values. Trigram matching is based on the n-gram principle (where n = 1, 2, …) of dividing string values into three-character blocks and comparing the number of blocks shared by the values being compared. Trigram, unlike the Soundex or Metaphone functions, is not phonetic and therefore is language agnostic.
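
A minimal trigram comparison in Python (here the shared-block ratio is computed with the Jaccard formula from above; other normalizations are possible):

    # Split each value into overlapping three-character blocks and compare
    # how many blocks the two values share.
    def trigrams(value: str) -> set:
        v = value.lower()
        return {v[i:i + 3] for i in range(len(v) - 2)}

    def trigram_similarity(a: str, b: str) -> float:
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

    print(round(trigram_similarity("OConnor", "O'Connor"), 2))  # roughly 0.57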

Data Matching Process

Armed with the knowledge of these scientific algorithms, how does one go about implementing this into a coherent matching solution? A typical process flow is shown in Figure 2.

The first step is always to profile the data. Profiling will identify particular anomalies present in the data set as well as dictate the amount of data cleansing to be done. Data profiling will also determine which customer attributes will be used in the matching process. This is referred to as selecting the match string. Typical candidates for a match string in a customer database include:

  • Name (first, middle, last, suffix and/or organization name)
  • Address (address, city, state, postal code)
  • Phone

The next step is to cleanse and standardize the data. Data cleansing is essential to eliminate as much “noise” as possible from the elements being matched. This noise can take the form of extra whitespace (De Bray versus DeBray), punctuation (O Connor versus O’Connor), or common abbreviations (Corp versus Corporation). Functions can be written to remove or normalize these differences in name strings, thus increasing the likelihood of an accurate match.
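
A minimal cleansing sketch (the abbreviation list and rules are illustrative; production systems use far richer dictionaries):

    import re

    ABBREVIATIONS = {"corp": "corporation", "inc": "incorporated", "st": "street"}

    def cleanse(value: str) -> str:
        value = value.lower()
        value = re.sub(r"[^\w\s]", " ", value)      # strip punctuation: O'Connor -> o connor
        value = re.sub(r"\s+", " ", value).strip()  # collapse extra whitespace
        return " ".join(ABBREVIATIONS.get(w, w) for w in value.split())

    print(cleanse("O'Connor"), "|", cleanse("Acme Corp."))  # o connor | acme corporation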

Address information offers a great opportunity for data standardization. Cleansed addresses provide an organization with value in the form of better customer matching and increased operational efficiency through accurate customer visits and fewer mailing errors. Several data providers offer these services for relatively modest fees. Because of the many possible variances in address information, standardization is essential whenever location information is included as part of customer matching.

Once the match string has been determined and the data has been cleansed, the process of data matching can begin. Let us assume the match string elements are the following:

  • First name
  • Middle name
  • Last name
  • Suffix
  • Address line 1
  • ZIP Code

In order to begin matching, one must select the matching algorithm(s) to be used. The selection process should begin with a series of structured tests to determine which algorithms work best with the data at hand. The tests should be designed to verify the results on three groups of data samples: obvious matches, obvious non-matches, and data that could go either way. With enough samples and testing, it should become apparent which algorithm does the best job of identifying true matches and true non-matches as such. Figure 3 offers simple examples of matching algorithm comparisons.


In conjunction with selecting the matching algorithm, the data designer must also decide whether to assign weightings to the match string elements. Weightings are percentage values that give more or less emphasis to different elements of the match string; together they must add up to 100 percent. Determining the weightings depends on the data. While it is typical to assign a higher weight to name attributes over address details, this may not be the best solution if the customers to be matched are concentrated in a particular geography. Weighting is particularly useful for data elements such as middle name or suffix. These values may not always be present, but when they are, the accuracy of the match score improves markedly. Once weightings are set, they can be used as multipliers of the individual confidence scores in order to determine a final score.
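
A sketch of the weighted scoring step (the element names, weights and scores here are illustrative only):

    # Weightings sum to 100 percent and multiply each element's confidence score.
    WEIGHTS = {"first_name": 0.25, "last_name": 0.35, "address": 0.25, "zip": 0.15}

    def composite_score(element_scores: dict) -> float:
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must total 100%"
        return sum(w * element_scores.get(name, 0.0) for name, w in WEIGHTS.items())

    scores = {"first_name": 1.0, "last_name": 0.92, "address": 0.80, "zip": 1.0}
    print(round(composite_score(scores), 3))  # 0.922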

The last step in the process is to apply data stewardship procedures to the customer matching process. Since the matching process deals with nuanced data, it makes sense to introduce a human element to serve as a check on the preprogrammed routines. Most of the matching algorithms discussed here output a confidence score, so it is simply a matter of choosing a middle band of scores that should be reviewed by a data steward. For instance, let the system process match scores above 0.90 (high confidence) and below 0.70 (low confidence); anything in between should be reviewed to determine whether it is a genuine match or not. In this way, the data steward may gain insights into the data that can later be used to improve the matching process.
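
The routing rule itself is simple; a short sketch using the thresholds from the example above:

    # Auto-process high- and low-confidence scores; queue the middle band for review.
    def route(score: float) -> str:
        if score >= 0.90:
            return "auto-match"
        if score < 0.70:
            return "auto-reject"
        return "steward review"

    for s in (0.95, 0.82, 0.55):
        print(s, "->", route(s))  # auto-match, steward review, auto-reject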

This is where the art of data matching comes into play. There are many mathematical tools and levers that one can pull when comparing two pieces of data, but understanding how and when to pull those levers is the art of the procedure. The data designer must have a feel for the data needing to be matched and understand how different weightings can affect the overall outcome. Should more weight be given to a certain element of a match string? Would a different matching algorithm give a better result? All of these are factors that the data designer must consider. In the end, the due diligence of profiling, cleansing, matching and stewardship will pay dividends in the form of a reliable and accurate customer MDM system.

Bill Faley is a senior consultant at CBIG Consulting, a professional services firm focused on business intelligence, big data analytics, and data warehousing solutions. Bill is based in Chicago, IL and is a graduate of the University of Notre Dame and the Liautaud Graduate School of Business at the University of Illinois at Chicago.
