If stakeholders are to act on and pay for reported outcomes, those outcomes must be available, reliable and valid. Currently, however, there is an acknowledged lack of clear definitions, registration and handling procedures, and reporting guidelines. Data is often not gathered in a standardized manner, and there is no segregation of duties in data recording and reporting. Systems used for recording and reporting are typically unsophisticated and lack the double-entry facilities found in the general ledger of financial accounts.
Consequently, most publicly reported outcome data is still unreliable, especially when compared with the financial reporting of healthcare organizations, which is subject to strong internal and external controls that assure the accuracy of data. In Transforming Healthcare: From Volume to Value, a KPMG in the US study into quality concluded: “There is no consistency and no assurance in the accuracy of information.”8 With few standards for registration, case-mix correction, data handling, indicator calculation and publication, and an absence of controls, any data published is not truly dependable.
In the rush to request data, governments, payers and regulators are often failing to question whether reports can be trusted. Indeed, there have been cases where data has been massaged to improve scores, such as in the Netherlands, where some hospitals’ reported breast cancer recurrence scores were lower than the numbers sent to the clinical registries. Such ‘gaming’ becomes noticeably more prevalent when professionals and providers question the relevance of particular reports.
KPMG in Australia’s Malcolm Lowe-Lauri feels that gaming is not the biggest concern: “The main problem is poor data and poor completion of records – with little or no punishment for such failings.”
Ekkehard Schuler, Head of Quality Management of Helios Kliniken, agrees: “In Germany, mortality figures are compiled by the government. Due to the missing data, however, nobody really uses this information.”
To counter such problems, regulators are carrying out independent, sometimes ad hoc checks on the reliability of reports. In the Netherlands, the Visible Care program (a government-run initiative to stimulate public reporting by healthcare providers) has created a system of red, orange and green flags to indicate whether reported scores are valid and reliable.
Internal and external auditors are also frequently asked to assess the accuracy and completeness of reporting, drawing on their extensive experience with financial reports. Their efforts are aided by the rapid growth in literature on quality reporting, with regulation in Canada, the UK, Portugal, the US and elsewhere creating new requirements for data assurance.
In the UK, all providers must publish an annual set of public Quality Accounts that is independently checked, with a director’s statement confirming balance and accuracy. To meet international auditing standards, such a confirmation requires auditors to review the design of data systems, walk through the operations, identify and check audit trails, verify the existence of proper internal controls, and perform sample tests to assure accuracy.
As Neil Thomas, an audit partner with KPMG in the UK, comments: “This involves a deep dive into the surrounding data and reports, to ask questions such as: what was reported? Were all serious patient complaints and harms included in the report? Were in-depth investigations conducted into why the processes or outcomes of care failed?”
To satisfy the regulators’ scrutiny, the board and internal auditors should be engaged early to help ensure that the report content has passed through sufficient reviews to reflect all aspects of performance before being subject to the external audit. Although quality data assurance is on the agenda in the US, Anthony Monaco, an advisory partner with KPMG in the US, says: “As yet, there is no standardized approach or clear external audit role.”
Providers will have to balance the need for assured data reliability with the resources required to achieve such a goal. One way to achieve greater efficiency is to concentrate on those outcome measures that matter most to patients. Smart use of IT can also help, with certified software making data gathering and reporting both faster and more accurate and reliable, enabling checks of calculation methods and inclusion and exclusion rules. Smart IT can thus help with the second and third steps in reliable reporting: calculating indicators and publishing them. The first step, the moment of data entry itself, then becomes the remaining key step where reliability is at stake, and further assurance may be required.
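Making inclusion and exclusion rules explicit in code is one reason certified software can be checked and trusted. A minimal, hypothetical sketch follows; the field names, rules and metric are illustrative assumptions, not any certified system’s actual logic:

```python
# Hypothetical sketch: a quality indicator computed with explicit
# inclusion and exclusion rules, so the calculation method itself
# can be reviewed and tested. All field names and rules are
# illustrative assumptions.

def calculate_indicator(records):
    """Return (numerator, denominator, rate) for a readmission-style metric."""
    # Inclusion rule: only discharged, adult patients enter the denominator.
    included = [r for r in records
                if r["status"] == "discharged" and r["age"] >= 18]
    # Exclusion rule: planned readmissions do not count toward the numerator.
    numerator = sum(1 for r in included
                    if r["readmitted_30d"] and not r["planned_readmission"])
    denominator = len(included)
    rate = numerator / denominator if denominator else 0.0
    return numerator, denominator, rate
```

Because the rules are written out rather than buried in spreadsheets, an auditor can test the calculation against sample records and confirm that every provider applies the same inclusion and exclusion logic.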
The BMJ Informatica Contract+ tool is used by UK general practitioners (GPs) to score quality points, which determine their pay-for-performance. The system signals when actions such as tests and other activities have to be undertaken, and quality ‘points’ can be earned by improving the quality of care. The system registers the points, adds the information to the electronic patient record, and generates internal reporting data, such as points totals, and guidelines on improving scores. With one click of a button, the points earned are submitted to, and in principle accepted by, the NHS.
'Meaningful Use' is a US incentive program to stimulate adoption of electronic health records (EHR). Providers receive funds when they prove they meaningfully use the EHR. This involves maintaining an active medication list for every patient; recording essential data items in a standardized way; keeping data secure; and calculating and submitting certain quality metrics in a standard manner. The software must pass standardized and partly automated stress tests, after which the certified software is included in a national register, releasing the incentive payments. Such certification helps to assure reliability of particular quality metrics, as they are all calculated and submitted in the same manner.
The most vital moment in data assurance is the point of data entry. Ultimately, the professional or administrator entering the diagnosis, procedure code, or other piece of clinical information has to register this data reliably. Professionals or administrators can test reliability through a variety of methods: looking for unexpected statistical patterns; checking how many co-morbidities are registered (too few suggests improper coding); checking audit trails; enforcing separation of registration and reporting duties; and comparing data with information entered elsewhere. By making adequate data entry a priority, organizations have a better chance of both producing meaningful outcomes to drive decision-making and satisfying regulators.
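Several of the entry-point checks listed above can be partly automated at the moment of registration. A minimal sketch follows; the record fields, co-morbidity threshold and cross-checked systems are illustrative assumptions:

```python
# Hypothetical sketch of automated plausibility checks at data entry.
# Field names and the co-morbidity threshold are illustrative
# assumptions, not a standard or a real system's rules.

def check_record(record, min_comorbidities=1):
    """Return a list of warnings for one clinical record."""
    warnings = []
    # Too few registered co-morbidities can signal improper coding.
    if len(record.get("comorbidities", [])) < min_comorbidities:
        warnings.append("suspiciously few co-morbidities registered")
    # Separation of duties: the reporter should not be the registrar.
    if record.get("registered_by") == record.get("reported_by"):
        warnings.append("no separation of registration and reporting duties")
    # Cross-check against data entered elsewhere (e.g. the billing system).
    if record.get("diagnosis_code") != record.get("billing_diagnosis_code"):
        warnings.append("diagnosis differs from code entered elsewhere")
    return warnings
```

Checks like these do not replace audit trails or sample testing, but they catch the most common entry errors while the person who made them is still at the keyboard.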