Select Committee on Health Written Evidence


Evidence submitted by the Foundation for Information Policy Research (EPR 61)

  The Foundation for Information Policy Research is an independent body that studies the interaction between information technology and society. Its goals are to identify technical developments with significant social impact, to commission and undertake research into public policy alternatives, and to promote public understanding and dialogue between technologists and policy-makers in the UK and Europe.

  We welcome the opportunity to comment on the electronic patient record.

THE USES OF CLINICAL RECORDS

  A useful starting point is "What information is already held?" Doctors maintain medical records for the primary purpose of treating their patients better. There are many secondary uses, such as scientific research and drug development, and not all patients support all of them. Patient surveys have consistently shown that while most patients support access to their records by staff directly involved in their care, most oppose access to individually identifiable records by outsiders.[37][38] As such access has become pervasive over the past fifteen years, a gap has opened up between actual practice on the one hand, and the expectations and views of patients on the other.

  The Department of Health used copies of invoices from hospitals to health authorities to amass a database (now the NHS Secondary Uses Service, or SUS) containing summaries of all secondary care episodes. As many of the most sensitive procedures—from terminations of pregnancy to HIV tests—are performed in hospitals or hospital labs, SUS already contains a lot of highly sensitive material, and it is used for many tasks, from scientific research to answering parliamentary questions. There are further central databases for matters such as patient registrations and drug prescriptions. These are also very sensitive; prescriptions for antiretroviral drugs or even antidepressants can be stigmatising, while the Helen Wilkinson case was sparked by an erroneous record that Mrs Wilkinson was a patient of an alcohol abuse service.

  By the mid-1990s, secondary uses had got out of hand. The Caldicott Committee found dozens of illegal information flows in the NHS. For example, data on HIV/AIDS sufferers that had been supplied anonymously by hospices and other carers, but with patients' postcodes and dates of birth, were collated and re-identified by the Department. The government's response was the Health and Social Care Act 2001, which empowers the Secretary of State to regulate secondary uses of personal health information, a task now devolved to the Patient Information Advisory Group. This was presented to Parliament as a stopgap mechanism to keep the NHS legal while proper consent and anonymity mechanisms were developed;[39] they have not been, and instead the powers have become a convenient catch-all for ever more information sharing. Many more people have access to medical records than most patients realise, and information-sharing practices remain open to challenge under European law, according to which secondary uses of sensitive personal information require either consent or statutory provision—but not just any kind of statutory provision. Rather, European law requires that any statutory provisions allowing for such data processing must not be over-broad; people confiding in their doctor must be able to foresee what will happen to their data, and to object to (and in all but a few cases, thereby prevent) secondary uses of those data with which they disagree. Patients must be informed of any secondary uses so that these rights can be exercised effectively.[40] Legal challenges are likely as more people become aware of what is happening.

  Ever more contentious secondary uses are regularly introduced or proposed, ranging from the use of NHS registrations to track illegal immigrants to police monitoring of opiate prescriptions (which was introduced in 1996 but failed to detect Dr Shipman). The Home Office wants to use health information to identify children at risk of becoming delinquent.[41] There have already been serious abuses; in 2003, the Real IRA infiltrated the Royal Victoria Hospital in Belfast and used its records to target policemen and politicians for murder.[42]

  Tussles over access to medical records are not unique to the UK. In the USA, widespread medical privacy abuses led to the passage in 1996 of the Health Insurance Portability and Accountability Act; this was helped through Congress by a case in which a convicted child rapist, working as a technician in a Massachusetts hospital, phoned over 900 women and girls, using knowledge of their health records to gain their confidence and ask for sex.[43] In Iceland, a proposal to create a national medical and genetic database resulted in 11% of the population opting out. Eventually the Icelandic Supreme Court found that it had to be opt-in rather than opt-out, and the database now contains the records of only about half the Icelandic population.

  The UK has so far failed to develop a robust political and legal mechanism for balancing patients' privacy interests against the many requests by others for access to their data. There has also been a settled view in the Department of Health that more sharing is better, while public opinion and the popular media have not yet engaged with the issues (unlike in the USA and Iceland). There are signs that public engagement is now starting. Thus, if the sharing of medical data becomes entrenched as CfH wishes, serious disruption is likely when patients say "enough".

  CfH assumes that all medical records will be kept on central machines, and this is not just a matter of hospital and practice archives. Anyone who attends hospital and needs an X-ray will have the image stored at the Local Service Provider (LSP)—and there is simply no provision for treating a patient who does not consent to people outside the hospital having access to her data. There are no longer mechanisms, whether physical film X-rays or local data storage, for medical images to be taken and used locally.

  While a few dissidents might be accommodated by the Department paying for them to be treated privately, or by ad-hoc mechanisms such as pseudonyms, an opt-out on a large scale—or, equivalently, a European-law case—will present an acute challenge.

ELECTRONIC PATIENT RECORD ARCHITECTURE

  Against the background of a struggle for access to and control of medical records, the electronic aspects are far from being value-neutral. The movement over the past 15 years to market the "electronic patient record" has generally seen its role as one of centralising records that are at present maintained by different health service providers.

  In the USA and continental Europe, initial enthusiasm for the EPR has calmed into a realisation that, in practice, individual records have to be maintained by the organisations that provide the care, and links provided to support appropriate information flows. This is what we already had in the UK: electronic patient records are kept by GPs, and separately by hospitals. The GP holds the lifetime record, which contains references (referral letters and discharge notes) linking to records of hospital care. This architecture naturally follows NHS working practices. It is less convenient for secondary uses; a researcher interested in a particular group of patients, or a civil servant researching the answer to a parliamentary question, may have to chase around numerous hospital and surgery systems. From their viewpoint, centralising everything would be convenient. But the main purpose of medical records is to support care rather than research; optimising access for researchers rather than carers is bad engineering and a waste of money.
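
  To make the contrast concrete, the federated approach can be sketched in a few lines of Python: each provider keeps its own records, and the GP-held lifetime record links to hospital episodes by reference. Every class, identifier and value below is a hypothetical illustration of ours, not any NHS interface.

    from dataclasses import dataclass, field

    class ProviderStore:
        """One care provider's own record system (a GP practice or a hospital)."""
        def __init__(self, provider_id: str):
            self.provider_id = provider_id
            self._records: dict[str, str] = {}

        def add(self, record_id: str, content: str) -> None:
            self._records[record_id] = content

        def lookup(self, record_id: str) -> str:
            return self._records[record_id]

    @dataclass
    class EpisodeReference:
        """A link (referral letter or discharge note) to a record held elsewhere."""
        provider_id: str
        record_id: str
        summary: str

    @dataclass
    class LifetimeRecord:
        """The GP-held lifetime record: local notes plus references to episodes."""
        patient_id: str
        notes: list[str] = field(default_factory=list)
        refs: list[EpisodeReference] = field(default_factory=list)

        def fetch(self, ref: EpisodeReference, providers: dict[str, ProviderStore]) -> str:
            # The detailed record stays with the provider that created it, and
            # is fetched over an interoperable interface only when needed.
            return providers[ref.provider_id].lookup(ref.record_id)

    # Hypothetical usage: the hospital keeps the X-ray report; the GP-held
    # record holds only the discharge note and a pointer.
    hospital = ProviderStore("st-elsewhere")
    hospital.add("xr-001", "Chest X-ray: no abnormality detected")
    record = LifetimeRecord("patient42", refs=[
        EpisodeReference("st-elsewhere", "xr-001", "Discharge note: X-ray taken")])
    print(record.fetch(record.refs[0], {"st-elsewhere": hospital}))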

  Unfortunately, the idea that everyone should have a single EPR, to be opened in the IVF clinic and archived on autopsy, was adopted in the 1990s as the vision for the NHS Information Management and Technology Strategy. NHS computing strategists have clung to this vision even as the rest of the world has moved on. The vision must be abandoned.

    1.  Clinicians want to use the best system for them—be it the best GP system, the best cardiology system, or the best diabetes management system. In a globalised competitive world, this means assembling systems from different suppliers and connecting them up, rather than a single one-system-fits-all approach.

    2.  Clinical systems have historically worked best when their functionality is driven directly by the demands of their users. A crude way of putting this is that "systems bought by doctors generally work and systems bought by civil servants generally don't." A GP who wants new functionality on (say) his practice system can call the sales engineer, or go along to the user-group conference and give a talk. Even so, GP system developers are perceived to have become less responsive over the past 10 years, as the Department of Health has interfered more and more in their design; clinical features have been crowded out by mechanisms sought for reporting and targets. There is wide concern about the consequences of having CfH doing all the purchasing in future.

    3.  Trust and control follow architecture. So long as patient records sit on PCs in doctors' surgeries, GPs retain some autonomy and patients some privacy. The move since 2004 towards "hosted systems"—where the records are removed from the surgery and placed on a server in a hosting centre—undermines this. No matter that the GP remains the data controller for now; once the data are "hosted" it is a simple matter for the Department to make the Chief Medical Officer the data controller. For this reason, back in 1996, the BMA adopted the principle that trust structures in e-health should reflect those in existing professional practice. (We'll discuss privacy in more detail in the next section.)

  The alternative to centralisation is interoperability; this is the approach being taken in Europe and North America, as shown in a recent study by FIPR for the National Audit Office.[44] The UK is an outlier, and is likely at some stage to be compelled to follow the rest of the developed world. (CfH is watched with appalled fascination by colleagues overseas.) The NHS has a long, sad history of failed attempts at autarky in IT; an example is the attempt in 1994-96 to standardise the NHS-wide network on X.400 email protocols rather than on SMTP, which had already achieved dominance on the Internet. The effect was to delay by several years the uptake of email by GPs and (especially) hospitals. Similarly, we believe that one of the main reasons the Care Records Service is years late is that it is simply the wrong system to build. It is not how the rest of the world works, and it does not correspond to the actual requirements of healthcare providers.

PRIVACY, CONFIDENTIALITY AND SECURITY

  These terms are often (deliberately) confused. Privacy refers to the right patients have under data-protection and human-rights law to control information about themselves; confidentiality refers to a duty that people owe to others as a result of an employment or other contract; and security refers to mechanisms that enforce particular policies on information flow. In an ideal world, the patient's right to privacy is upheld by the doctor's duty of confidentiality, and both are supported by information security mechanisms. The Committee should be very wary if questions about patient privacy are answered by reference to information security mechanisms, such as using smartcards to stop outside "hackers" getting access to information. The threat to privacy comes from insiders. Someone wanting access to health information will typically phone up, pretend to be an insider, and find someone careless enough to disclose it.

  In 1996, an experiment was conducted to train staff at the North Yorkshire Health Authority (NYHA) to deal with this problem, by logging calls and calling back on numbers obtained from the phone book rather than from the caller. This uncovered 30 false-pretext calls per week.[45] The BMA asked the Department of Health to introduce such operational-security measures throughout the NHS; the Department's response was to order the NYHA to stop. Smartcards are not the answer to this problem.
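
  The procedure itself is simple enough to sketch. The fragment below is our reconstruction of the idea, with invented names and numbers, not the NYHA's actual implementation: the request is logged, and the call is returned on a number taken from a trusted directory, never on a number the caller supplies.

    # Directory numbers would come from the phone book or an internal
    # directory; this entry is invented for illustration.
    TRUSTED_DIRECTORY = {
        "st elsewhere's hospital": "01904 496000",
    }

    call_log: list[dict] = []

    def handle_information_request(claimed_org: str, number_given: str) -> str:
        """Log a telephone enquiry and decide how to respond to it."""
        call_log.append({"org": claimed_org, "number_given": number_given})
        directory_number = TRUSTED_DIRECTORY.get(claimed_org.lower())
        if directory_number is None:
            return "refuse: caller's organisation not found in the directory"
        # Never ring back on the number the caller offers; a false-pretext
        # caller will simply answer it. Use the directory number instead.
        return f"call back on {directory_number} before disclosing anything"

    print(handle_information_request("St Elsewhere's Hospital", "07700 900123"))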

  Various other technical security policies have been proposed from time to time by the Department of Health for a national EPR. Not one of them has been convincing. In 1995 the proposal was for a multi-level system of the type commonly used in the civil service, in which prescriptions would in effect be classified "Restricted", ordinary medical records at "Confidential" and HIV/AIDS data at "Secret". The BMA pointed out at the time that this was unserviceable: what about a prescription for AZT, which would sit at the lowest level yet reveal the patient's HIV status, supposedly protected at the highest? The BMA's counterproposal was a compartmented security policy, with records kept by healthcare providers rather than centrally, and rules based on existing practice to regulate flows between them.[46] This has been implemented in a number of hospital systems, but the Department opposed it as inimical to its centralising vision.
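
  The difference between the two policies can be shown in a few lines. In the sketch below (a simplification of the BMA policy with invented names, not a full rendering of it), each record carries its own access list—the clinicians currently caring for that patient—and access is a question of membership in that compartment, not of clearance level.

    from dataclasses import dataclass, field

    @dataclass
    class ClinicalRecord:
        patient_id: str
        content: str
        # The compartment: the clinicians currently caring for this patient.
        access_list: set[str] = field(default_factory=set)

    def may_read(clinician_id: str, record: ClinicalRecord) -> bool:
        # Membership, not clearance: being allowed to see one patient's HIV
        # record confers no right to see any other patient's HIV record.
        return clinician_id in record.access_list

    record = ClinicalRecord("patient42", "HIV test result", {"dr_jones"})
    assert may_read("dr_jones", record)
    assert not may_read("dr_smith", record)   # treats other patients, not this one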

  The Department's current computer security proposals are little changed. Multilevel security has been replaced by role-based access control, and there is a promise that access by care-delivery staff will be restricted on the basis of legitimate patient relationships. But it is not clear how this will be implemented. The Department's historic lack of concern with patient privacy (as seen in the NYHA incident) does not give confidence that effective controls will be developed; neither does the Cabinet Office's "e-Government framework for Information Assurance",[47] which seeks to reduce the protection given to personal information in the public sector.[48] Yet the fact remains that aggregating large quantities of sensitive information, to which more and more people then need access in order to do their jobs, simultaneously increases the value of the target and the number of people through whose carelessness or disloyalty it can be compromised.
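
  In outline, such a scheme would have to combine two independent checks, as in the sketch below; since no concrete design has been published, every role, table and name here is an assumption of ours, made purely for illustration.

    # Which actions each role may perform (invented roles and actions).
    ROLE_PERMISSIONS = {
        "gp": {"read_record", "write_record"},
        "receptionist": {"read_demographics"},
    }

    # (patient_id, staff_id) pairs recording a current care relationship.
    legitimate_relationships = {("patient42", "dr_jones")}

    def may_access(staff_id: str, role: str, patient_id: str, action: str) -> bool:
        # Both tests must pass: the role must permit the action, AND the
        # staff member must have a recorded relationship with this patient.
        return (action in ROLE_PERMISSIONS.get(role, set())
                and (patient_id, staff_id) in legitimate_relationships)

    assert may_access("dr_jones", "gp", "patient42", "read_record")
    assert not may_access("dr_smith", "gp", "patient42", "read_record")

  Even in this toy form the difficulty is plain: the second check is only as good as the relationship table, which would have to be kept accurate across the whole NHS.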

  The Department has also proposed "Sealed Envelopes" as a mechanism for patients to restrict access to particularly sensitive information. There are several problems. First, although the envelopes would bar access to some clinical staff (who could override them if thought necessary), they will not bar access to the secondary users—the very people whom most patients believe should have no access. Second, sealed-envelope systems have not been built, and it is not clear that they can be. A recent report from the Department's own consultants concluded that sealed envelopes would not work as well as local data storage.[49] It is disturbing that CfH proposes to roll out a nationwide medical-records system without proper confidentiality safeguards—merely an assurance that some safeguards will be developed eventually.
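
  The first problem can be made concrete with a sketch. The model below is our own illustration of the criticism, not any published CfH design: routine clinical access respects the seal, overrides are audited, but the extract for secondary uses ignores the seal entirely.

    from dataclasses import dataclass

    @dataclass
    class SealedItem:
        content: str
        sealed: bool = True

    audit_log: list[str] = []

    def clinical_read(staff_id: str, item: SealedItem, override: bool = False):
        if not item.sealed:
            return item.content
        if override:
            # Overriding is possible "if thought necessary", but leaves a trace.
            audit_log.append(f"{staff_id} overrode a seal")
            return item.content
        return None   # sealed against routine clinical access

    def secondary_uses_extract(items: list[SealedItem]) -> list[str]:
        # The flow most patients object to: the seal does not bar secondary users.
        return [item.content for item in items]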

  Local, or compartmented, systems are widely used for data that governments really do want to keep confidential, such as defence and intelligence information. Exactly the same considerations apply whether disclosure would harm private individuals or would harm soldiers or ministers. It is a principle of security engineering that we can build systems with functionality, scale or security—or indeed with any two of these attributes, but not all three. Secure and highly functional systems have to be local, or compartmented.

  There are grave problems with records of especially sensitive matters, such as sexually transmitted diseases and psychiatric care.[50] In the world of paper records, a psychiatrist simply keeps his files locked up, but if there is to be a single womb-to-tomb record, then his notes must be kept there—and what assurance can be given that the large numbers of secondary users of health information will not leak information and cause harm? No-one knows how to write a security policy for a lifelong EPR—there are just too many complications (these were explored in some detail when the BMA developed its security policy in 1996).

  Finally, the Committee asks how medical data should be used for research. The short answer is "within the law". There are researchers who believe they should have a right to know everything about everyone with the disease in which they are interested (cancer specialists argued this during the passage of the Health and Social Care Act 2001). But that is not consistent with European law or with patients' views. Where records can be effectively anonymised, well and good; but in the many cases where they cannot, consent matters. The Committee should consider two important precedents: how the cavalier attitude to laboratory-animal welfare in the 1970s spawned the Animal Liberation Front; and how disdain for consent in the handling of human tissue led to the damaging Alder Hey scandal. If contempt for medical privacy causes widespread withdrawal of consent, research will suffer even worse damage.

RISKS OF IT-DRIVEN BUSINESS CHANGE

  A further driver for centralisation is the perception that it is needed to "modernise" the NHS. It may be argued that central GP records make it simpler to discontinue an inefficient general practice and award the business instead to, say, a drop-in centre run by a supermarket chain. But if the Secretary of State wishes to move patient records from an unsatisfactory practice to a new one, she can already regulate for this. More generally, when organisations try to change the way they operate by imposing IT systems, rather than by finding or constructing proper incentives to drive the change, poor outcomes are common. In the public sector there have been some spectacular failures, such as the London Ambulance Service disaster. There is a real risk of an even bigger CfH disaster.

CONCLUSIONS

  Electronic medical records already bring many benefits, via faster communications, better record availability, and reduced errors. However, the Committee should not confuse these benefits with the centralisation agenda.

  Centralisation is principally about power and control in the management of the health service. It is driven by the conflict between administrative convenience and professional autonomy, which the Department seeks to resolve by controlling all information systems. The inevitable side-effects—mediocre systems and the destruction of patient privacy—would be severe: patient trust in the medical profession would be undermined, and the Department would be vulnerable to challenges under European law. In any case, the strategy is not working, and is not likely to. It is time for it to be abandoned, and for CfH to return to providing the standards and infrastructure for interoperable systems, as its equivalents do elsewhere.

Professor Ross Anderson

Dr Ian Brown

Dr Fleur Fisher

Professor Douwe Korff

Foundation for Information Policy Research

15 March 2007

37   Hassey GA, Wells M, "Clinical Systems Security: Implementing the BMA Policy and Guidelines", in Personal Medical Information: Security, Engineering and Ethics, Springer, 1997, pp 79-94.

38   Singleton P, ERDIP Evaluation Project N5: Patient Consent and Confidentiality Study Report, NHS Information Authority, May 2002.

39   FIPR Response to the Consultation on Proposals to use Section 60 Powers, 30 January 2004, at http://www.fipr.org/040130s60.html

40   Some of the provisions in the Data Protection Act 1998 (in particular, Schedule 3, para 8) expressly override these principles, but in a way that contravenes the EC Directive on data protection and the European Convention on Human Rights, and thus the Human Rights Act.

41   Children's Databases: Safety and Privacy, FIPR report for the Information Commissioner, November 2006, at http://www.fipr.org

42   "Dissident Operation Uncovered", BBC News, 3 July 2003, at http://news.bbc.co.uk/1/low/northern-ireland/3038852.stm

43   Brelis M, "Patients' Files Allegedly Used for Obscene Calls", Boston Globe, 11 April 1995; in comp.risks vol 17 no 7.

44   Healthcare IT in Europe and North America, FIPR report for the National Audit Office, 2006.

45   Anderson RJ, Security Engineering: A Guide to Building Dependable Distributed Systems, chapter 8, Wiley, 2001; at www.cl.cam.ac.uk/~rja14/book.html

46   Anderson RJ, Security in Clinical Information Systems, BMA, 1996.

47   Draft 5.1, December 2006, at http://www.cabinetoffice.gov.uk/csia/consultation/

48   Anderson RJ, Bohm N, Gladman G, Whitehouse P, Framework for Information Assurance, 13 March 2007, at www.fipr.org

49   Det Norske Veritas, Sealed Envelopes Risk Assessment Project, 29 September 2006, at www.nhsconfidentiality.org

50   NHS Confidentiality Consultation: FIPR Response, last updated 29 June 2005, at http://www.cl.cam.ac.uk/~rja14/fiprmedconf.html
