Work with federal information systems? Responsible for risk management, continuous monitoring, or FISMA compliance? Check out my book: FISMA and the Risk Management Framework.

Friday, April 30, 2010

Experience of European legislator shows philosophical differences on privacy and lack of trust in U.S. government

A story published this week in the New York Times highlights some of the key privacy concerns many Europeans have with U.S. data collection practices, particularly those justified as necessary to prevent terrorism. The article focuses on the experiences of European Parliament member Sophie In 't Veld, who became so frustrated at her inability to learn exactly what information U.S. government agencies were holding on her that she filed a lawsuit in federal court with the assistance of the Electronic Frontier Foundation. The lawsuit, naming both the Department of Homeland Security and the Department of Justice as defendants, was dismissed after DHS asserted that it had performed an adequate search in response to In 't Veld's request, as it is obligated to do under the Freedom of Information Act (FOIA), under which she sued. That outcome leaves the plaintiff believing (correctly or not) that U.S. federal agencies hold more data about her than has been disclosed, while the government does not necessarily disagree, but maintains that it provided enough information to satisfy the request.

This case serves as perhaps the highest-profile example of the practical impact of the different philosophical approaches in the U.S. and in Europe regarding the privacy of personal information. Such differences have led to the failure to reach agreements on financial information sharing intended to help combat terrorism by identifying its sources of funding. The collection and maintenance of airline passenger data for comparison to a variety of terrorist watchlists has historically been another sticking point between the U.S. and European countries, although the question at issue now is not so much that the data is being collected, but that individuals who can presumably demonstrate that they are not terrorists have little or no visibility into the data being stored about them. U.S. authorities have consistently defended their anti-terrorism efforts since 9/11 and before, but in keeping with the conventional "ask first" privacy practices that are the rule in Europe, Europeans believe that the U.S. should have to do more to prove that its data collection and use for anti-terrorism purposes are actually necessary, rather than individuals having to prove the practices cause them harm.

On its face, In 't Veld's desire to know what data the U.S. government has stored about her seems quite reasonable, not just because of her repeated experience of being selected for secondary security screening while traveling, but also because the ability of individuals to find out what information is stored about them and how it is used is one of the core privacy principles embedded in all of the major privacy frameworks. This principle of access was articulated as one of the five fair information practices included in a landmark 1973 report from the Department of Health, Education, and Welfare entitled "Records, Computers and the Rights of Citizens" and was later reflected in U.S. legislation, including the Privacy Act of 1974, and in international privacy frameworks such as the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. It should be noted that neither of these important privacy drivers is relevant to In 't Veld and her requests to U.S. government agencies, as the OECD Guidelines are just that — guidelines, without the force of law — and the Privacy Act's provisions for records on individuals apply only to U.S. citizens and permanent resident aliens (5 U.S.C. §552a(a)(2)).

Thursday, April 29, 2010

Grand jury indicts man allegedly responsible for Las Vegas University Medical Center breach

In a follow-up to a HIPAA breach at Las Vegas' University Medical Center reported last November, the FBI investigation into the matter has resulted in an indictment of the UMC employee allegedly responsible for selling data about medical center patients to personal injury lawyers. The criminal case is being brought by federal prosecutors under the authority of protected health information provisions in HIPAA, and in accordance with the penalties for such violations, the accused could be put in jail for up to five years and fined as much as $250,000. The fact that the investigation has come to this may be slightly less surprising following the announcement this week of the first criminal prosecution under HIPAA to result in jail time, a milestone achieved by federal prosecutors in California. When the Las Vegas matter was first made public, there was speculation in the local media that UMC had little reason to be concerned about the breach, given the rarity of significant penalties resulting from HIPAA violations. The indictment would seem to suggest that HIPAA enforcement is in fact getting stronger since the passage of the HITECH Act. It remains to be seen if the hospital will suffer direct consequences from this incident, presumably based in part on whether anyone can show that UMC was aware or should have been aware of the actions of its employees. Other stories about the investigation have suggested that at least one local physician (not a UMC employee) knew that personal information on patients was being leaked. Both before and after the specific situation under investigation, UMC has had problems with privacy lapses and loss or theft of protected health information. Under the strengthened HIPAA enforcement provisions in the HITECH Act, both federal and state prosecutors would be able to bring civil or criminal action against the hospital, either on behalf of individual patients who suffered some harm due to the breaches, or because of the pattern of HIPAA violations that has emerged since the hospital came under closer scrutiny.

Wednesday, April 28, 2010

FTC planning to create Internet privacy framework

In the wake of privacy concerns expressed by four U.S. senators about Facebook's decision to change the way it shares user data with third parties, the Federal Trade Commission (FTC) announced it plans to create a regulatory framework of Internet privacy guidelines that would constrain data sharing practices among many types of online businesses, including social networking sites. In a press release posted on his official website, Senator Charles Schumer of New York urged the FTC to provide guidance to social networking sites to prevent the sort of changes in handling of personal information recently implemented by Facebook with the launch of several new services. Schumer seems particularly upset that Facebook now makes public some data that users may have previously kept private through the site's privacy settings, and did so without users' consent (there is an opt-out provision, but by default the data is now disclosed, regardless of whatever privacy settings had been in place previously). There are of course very few existing privacy regulations that come into play for social networking sites — aside from those like COPPA that govern personal data collection from children under 13 — particularly since the companies don't typically have commercial transactional relationships with their users. Schumer wants the FTC to take a close look at the privacy practices employed by Facebook and similar sites under its statutory authority to enforce unfair and deceptive trade practice rules, but he and the other senators are also advocating the development of new privacy regulations that would apply specifically to social networking sites. He went so far as to say if the FTC believes it lacks the authority to specify and enforce privacy practices of social network operators, he would "support them in obtaining the tools and authority to do just that."

New HITECH-driven privacy rules forthcoming from HHS

The Department of Health and Human Services announced its plans to propose a new set of rules strengthening privacy and security protections for personal health information. The rules will implement various provisions of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which served to augment protections originally established under HIPAA. The forthcoming rules will make explicit several of the changes in the privacy portion of the HITECH Act (Subtitle D, §§ 13400-13410). The public notice announcing the intent to issue rules gives no details on what specific aspects of the law the rules will address, but based on a short note posted by HHS, the current focus seems to be on business associate liability for complying with HIPAA Privacy and Security Rule requirements; limits on the sale of protected health information; improved rights of access by individuals to their health data; and new restrictions on personal data disclosure. Rules have already been released related to some of the other privacy provisions in this same section of the law, covering health data breach notification and stronger enforcement of HIPAA Privacy Rule violations, including new authority for state attorneys general to bring actions on behalf of affected individuals. The legal action initiated by the Connecticut Attorney General against HealthNet after its data breach was made possible by the enforcement rules. Looking at the text of the law in these areas, the new rules appear likely to cover the following:
  • The change in liability for business associates (§13401), which under HIPAA had no direct accountability for violations of the Privacy or Security rules (instead, the covered entity with which the business associate had a contractual agreement was liable for its business associates' violations). Now business associates are directly accountable for violations, including being subject to the civil and criminal penalties, which were also strengthened in HITECH.
  • Restrictions on the sale of protected health information without explicit consent by the individual, subject to several exceptions (§13405(d)), notably including purposes of public health, research, treatment, health care operations, or situations such as providing an individual with a copy of his or her record (yes, they can charge you for that) or moving data between covered entities and business associates doing processing on behalf of the entity.
  • The requirement that individuals be able to get a copy of whatever data a covered entity has stored electronically about them, and/or to direct that information to a designated entity (like a new doctor) (§13405(e)). This one might seem counter-intuitive, as most people believe that they own their own health record data, but that's just a privacy principle, not a legal right. This provision in HITECH doesn't resolve the data ownership question, but it does give you the right to request your data, and obligates the entity to give it to you; it also says any fee you are charged can't be more than the entity's labor cost to give the record to you.
  • New rules limiting the amount of data disclosed about an individual (§13405(a)). This provision in the law has a couple of different aspects. First, there is a rule that says if you ask a covered entity (say, your doctor) not to disclose your personal health information, and you pay out of pocket for the services you receive from the entity, then the entity has to comply with your request not to disclose the data, unless the disclosure is for treatment. Under HIPAA, disclosure for treatment, payment, or the somewhat vaguely defined "health care operations" did not require the entity to get your consent or even to comply with your wishes about disclosure if you had expressed them. This rule changes that, except in cases of treatment. This section of the law also obligates an entity disclosing protected health information to limit the disclosure to the minimum necessary for the purpose for which the data was requested. This means, for example, that someone should not disclose your whole medical record to someone asking for information about payment for a specific service you received (see the sketch below). This part of the rules will be interesting to see, because the determination of "minimum necessary" is left up to the entity doing the disclosing, and there really are no standards or guidelines on what the minimum data is for any of the anticipated purposes for use in health information exchange.
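To make the minimum-necessary idea concrete, here is a minimal Python sketch of purpose-based filtering of a record before disclosure. The field names and the purpose-to-field mapping are hypothetical, invented purely for illustration; as noted above, the law leaves the actual determination of what counts as minimum necessary to the disclosing entity.

```python
# Hypothetical sketch: filter a health record down to the "minimum
# necessary" fields for a given purpose before disclosing it.
# The purposes and field lists here are invented for illustration.

FULL_RECORD = {
    "patient_id": "12345",
    "name": "Jane Doe",
    "diagnoses": ["hypertension"],
    "medications": ["lisinopril"],
    "billing": {"2010-03-14": {"service": "office visit", "amount": 125.00}},
}

# Each purpose maps to the smallest set of fields assumed sufficient for it.
MINIMUM_NECESSARY = {
    "payment": {"patient_id", "billing"},
    "treatment": {"patient_id", "diagnoses", "medications"},
}

def disclose(record: dict, purpose: str) -> dict:
    """Return only the fields deemed minimum necessary for the purpose."""
    allowed = MINIMUM_NECESSARY.get(purpose)
    if allowed is None:
        raise ValueError(f"No disclosure policy defined for purpose: {purpose}")
    return {field: value for field, value in record.items() if field in allowed}

# A payment-related request gets billing data, not the whole medical record.
print(disclose(FULL_RECORD, "payment"))
```

The hard part, of course, is not the filtering mechanics but deciding what belongs in each field list, which is exactly where standards and guidance are still missing.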

Friday, April 23, 2010

Patient-centric control over data is key to trust in EHRs, but managing consent is not that simple

Another interesting briefing coming out of the ONC Health IT Policy Committee meeting this week was a presentation from Privacy and Security Workgroup chair Deven McGraw, which highlighted the workgroup's current focus on privacy protections in health information exchanges, with particular emphasis on the question of how to handle consumer/patient preferences, consent, and control over use and disclosure of personal health data. While the workgroup is not ready to take a formal position on this issue, McGraw explained that they hope to present specific recommendations at the next Policy Committee meeting, currently scheduled for May 19. The workgroup's focus on privacy and security from a patient-centric perspective appears to complement the five essential elements the NHIN Workgroup has proposed to constitute a trust framework that includes sufficient security and privacy provisions, oversight and enforcement, and technical capabilities to serve as an enabler of health information exchange (HIE). The NHIN Workgroup is focusing on trust as a prerequisite for HIE participants to realize the value of exchanging data, while the Privacy and Security Workgroup is looking at building trust among individuals, providers and patients especially, as well as the public in general.

While everyone is in violent agreement that a better foundation of trust is needed before the grand vision for health information exchange can be achieved, it shouldn't be lost on anyone that it is exceedingly difficult to arrive at a common framework of trust when different stakeholders have different goals and priorities for adopting electronic health records and exchanging the data those records contain. Many of the anticipated benefits of interoperable electronic health records rely on widespread adoption of health information technology and universal participation among individuals, stemming from President Obama's January 2009 call for every American to have an electronic health record by 2014. For patients, the key challenge seems to be ensuring sufficient privacy and security protection to give individuals confidence in EHR systems and the use of their data, so that they are willing to have their health records in electronic form at all. Putting patients in control of their data and capturing and using patient consent and usage preferences seems to be the favored way to engender trust among individuals, but doing so may negatively impact the value of health information exchange in improving quality of care. If consent is enabled at a level of granularity that allows individuals to keep certain portions of their health records hidden, the result for anyone requesting access to those records through health information exchanges may be incomplete data. Depending on the nature of the data omitted from an ostensibly comprehensive view of a patient, the risk of clinical mistakes due to incomplete records goes up, threatening the improvements in quality of care and reduction in medical errors that electronic health records are intended to produce.

The importance of complete information in clinical care settings is well established. It's not by accident that data disclosure for the purpose of treatment is explicitly exempt from consent requirements that apply to some other uses of health data under the HIPAA Privacy Rule. The Health IT Policy Committee has among its members practicing physicians whose views illustrate the two sides of the granular consent debate: Dr. Charles Kennedy of Wellpoint shared an example in which one type of practitioner's access to health record data related to a different type of care (specifically, an internist seeing medications prescribed for a patient by a psychiatrist) upset the patient in question and fell short of the sort of privacy protections patients seem to want. In contrast, Dr. Michael Klag of Johns Hopkins objected to the idea of giving patients such granular control over their health records, even if patients are made aware of the potential dangers of withholding medical information. The approach of requiring data disclosure for treatment (that is, of exchanging data without seeking consent) might satisfy clinicians, but as we noted in this space last week, surveys suggest that absent some degree of control over health data disclosure, many patients may opt to withhold information from their doctors rather than have the information become part of their records. It is hard to imagine how better health care outcomes can result if individuals are able to selectively withhold data from medical providers.

Finding the right balance between patient privacy and consent on the one hand and the utility of data shared through health information exchange on the other is more a business and policy problem than a technical challenge, although the technical means of enabling granular consent in EHR or supporting health IT systems are far from trivial. Managing consent on the basis of the purpose for which health data is requested might be a more suitable starting point for finding a workable solution to this issue (a simple sketch of the idea follows below). Such an approach has the advantage of following the requirements of all the major federal privacy laws and being consistent with the Nationwide Privacy and Security Framework, which includes the core privacy principles upon which the Privacy Act and other legislation are based and which privacy advocates argue should be directly reflected in health IT initiatives like the NHIN and in health IT adoption programs like meaningful use.
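A purpose-based consent model is straightforward to sketch, even if the policy questions are not. The following minimal Python illustration is entirely hypothetical — the purpose categories and default rules are assumptions for this sketch, not anything specified by ONC or HIPAA — but it shows the basic shape: each disclosure request carries a purpose of use, and the patient's recorded preferences are consulted only for purposes where consent is required.

```python
from dataclasses import dataclass, field

# Hypothetical purpose-of-use categories; a real taxonomy would come from
# policy work like ONC's, not from this sketch.
CONSENT_REQUIRED = {"research", "marketing"}  # assumed to need opt-in
CONSENT_EXEMPT = {"treatment"}                # assumed exempt, mirroring HIPAA's treatment carve-out

@dataclass
class PatientConsent:
    patient_id: str
    opted_in_purposes: set = field(default_factory=set)

def may_disclose(consent: PatientConsent, purpose: str) -> bool:
    """Decide disclosure based on the purpose of the request, not the data content."""
    if purpose in CONSENT_EXEMPT:
        return True
    if purpose in CONSENT_REQUIRED:
        return purpose in consent.opted_in_purposes
    return False  # unknown purposes are denied by default

consent = PatientConsent("12345", opted_in_purposes={"research"})
assert may_disclose(consent, "treatment")      # exempt purpose
assert may_disclose(consent, "research")       # patient opted in
assert not may_disclose(consent, "marketing")  # no opt-in recorded
```

One appeal of keying decisions to purpose rather than to data content is that it sidesteps, at least initially, the much harder problem of segmenting individual records.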

Thursday, April 22, 2010

Accountability and enforcement, not just policy, needed to produce trust framework for health information exchange

At yesterday's monthly meeting of the Health IT Policy Committee, a briefing provided by the leads of the Committee's NHIN Workgroup described the need for a health information exchange (HIE) trust framework and spelled out five components the workgroup members consider essential to overcome some of the barriers to greater HIE adoption. Notable among these essential elements is "accountability and enforcement," which to the NHIN Workgroup means "each participant must accept responsibility for its exchange activities and answer for adverse consequences." While it may sound obvious, the inclusion of an enforcement mechanism is a significant departure — and in our opinion, a welcome and necessary one — from the trust models articulated for health IT in the past and more broadly for healthcare security and privacy requirements in general. More typical is the sort of voluntary compliance model used for HIPAA enforcement: investigations against alleged violators of the HIPAA Privacy Rule or Security Rule are launched in response to complaints filed by patients or other healthcare stakeholders, but not as the result of direct monitoring of covered entity actions. There are no proactive HIPAA audits performed by the government; while the HHS Office for Civil Rights (OCR) has the authority to conduct "compliance reviews" of covered entities at any time, as a general rule OCR initiates such reviews only after receiving complaints about an entity. This lack of direct monitoring or proactive enforcement is one key reason why there have been so few criminal prosecutions under HIPAA, and such a voluntary violation reporting model does little to instill confidence that the legal obligations and constraints HIE participants agree to when they sign data sharing agreements will be followed. To date, the Nationwide Health Information Network (NHIN) governance model has relied on a legal agreement — the Data Use and Reciprocal Support Agreement (DURSA) — that obligates its signatories to be monitored, but no regular monitoring capability is yet in place, and even when implemented, such monitoring will not extend to the individual participants' own security and privacy practices.

Against this historical backdrop, the notion, even within an as-yet conceptual framework, of specifying security and privacy requirements for HIE participants coupled with enforcement is a positive step forward. It remains to be seen what form this enforcement might take, and similarly whether any sort of technical enforcement or automated compliance monitoring might be sought. The NHIN Workgroup briefing suggests that self-certification and entity self-assertion of compliance may be among the valid means of enforcement, but also implies that organizational monitoring may be employed, whether by government, other HIE participants, third-party authorities, or some combination of these. Absent such objective enforcement, it is hard to see how HIE participants can have sufficient confidence that others will live up to their legal obligations. The operational prerequisites for establishing trust frameworks among disparate entities — especially those with different goals and potentially misaligned business objectives — are a compelling subject for further research.

Tuesday, April 20, 2010

Oral arguments in Quon Supreme Court case suggest narrow ruling is likely

The Supreme Court heard oral arguments in City of Ontario v. Quon yesterday, and initial reactions in legal circles to the way the plaintiff's counsel argued the case and the questions raised by the justices seem to suggest a narrow ruling is likely in this case, rather than one setting a significant precedent or establishing doctrine on employee expectations of privacy in the workplace. While we suggested such an outcome in advance of the session, the transcript released by the Court shows that the justices spent relatively little time on the distinction between the City's official written policy regarding personal use of city-owned computing resources and the oral policy in place between Quon and other members of the SWAT team and their immediate supervisor. Plaintiff's counsel offered two arguments to the Court: first, that contrary to the 9th Circuit's finding in the case, Quon had no reasonable expectation of privacy for the content of text messages sent using his city-owned and -issued pager; and second, that even if he had such a reasonable expectation, the city's inspection of his text messages was a reasonable search. Both Chief Justice Roberts and Justice Ginsburg suggested that if you take as a given that the wireless provider with which the city contracted for the pager service, Arch Wireless, violated the Stored Communications Act (SCA) by turning over transcripts of the messages sent using the pager, then the SCA would help bolster Quon's claim of a reasonable expectation of privacy and would call into question the legality of the city subsequently looking at those transcripts, even if the city was not itself in violation of the law. To his credit, plaintiff's counsel seemed well prepared for that line of questioning, citing recent precedents in which the Supreme Court has held that the fact a law was violated is insufficient to produce a reasonable expectation of privacy.

Parties arguing in defense of Quon did address the issues with the city's official policy, suggesting that the best way for an employer to eliminate reasonable expectations of privacy among employees is to make it clear through comprehensive and explicit policies that no such expectation exists. The Court did not appear willing to accept this approach, noting the standard established in O'Connor v. Ortega that "Given the great variety of work environments in the public sector, the question whether an employee has a reasonable expectation of privacy must be addressed on a case-by-case basis." This line of thinking also echoes the reasoning of the New Jersey state Supreme Court in its recent Stengart ruling that even a carefully crafted and explicit policy cannot invalidate all potential employee claims to privacy of personal communications.

Counsel for the defense (Quon) did zero in on the understanding Quon and his co-workers had with their supervisor, arguing that that alone was sufficient to constitute a reasonable expectation of privacy, even if it was contrary to the official city policy. The Court, especially Justice Breyer, pressed defendant's counsel to explain why reading the text messages wasn't a logical, reasonable way to satisfy the city's desire to know how much of Quon's pager messaging was personal and how much was work related. Chief Justice Roberts seemed amenable to some of the alternative methods the city could have used that didn't involve actually inspecting the content of the messages, but Breyer and other justices seemed unsatisfied with defense counsel's responses. Plaintiff's counsel opted to reserve his last three minutes to rebut the defense, which he tried to do by suggesting that there was not in fact any difference in terms of privacy expectations between the official policy and the informal one, with the point once again being that no reasonable expectation of privacy should be afforded Quon.

Regardless of the breadth (or lack thereof) of the final ruling, it seems that the defense may have the harder case to make, inasmuch as it has to convince the Court both that the expectation of privacy was reasonable and that, even if that expectation as found by the 9th Circuit is upheld, the actions by the city in looking at the text messages constituted an unreasonable search. For its part, the city may win reversal if the Court accepts its position on either the (absence of a) reasonable expectation of privacy or the reasonableness of the search.

Sunday, April 18, 2010

Additional federal legislation may be needed to protect data on students

Congressman John Kline, a Republican from Minnesota and the ranking minority member of the House Committee on Education and Labor, publicly expressed concerns last week about potential risks to personal information on students that is collected and maintained in state-level data warehouses. Kline spoke after an April 14 hearing on data used to track performance of K-12 school children, during which the Committee heard testimony from state and local education administrators as well as the lead author of a 2009 Fordham Law School study on children's educational records and privacy. Kline stressed the need for federal, state, and local measures that ensure student and family privacy rights are protected. While such sentiments may seem prosaic, when focusing on state or district-level databases maintained by authorities other than educational institutions, there does seem to be a significant gap in the coverage of current federal laws on the privacy of student information. Joel Reidenberg, Director of Fordham's Center on Law and Information Policy, reiterated in his testimony before the Committee that many state practices observed and reported in the course of the Center's study violate provisions in relevant federal laws, but without consequence, because the laws in question do not apply to state or local government actions.

The prevailing federal law on privacy of information in student records is the Family Educational Rights and Privacy Act (FERPA), which includes a variety of rights for adult students and parents of minor students as well as restrictions on the use and disclosure (without consent) of student records by educators, school administrators, and institutions in general. FERPA applies at federal, state, and local levels, but only covers schools receiving funding from a U.S. Department of Education program. Significantly, this exempts many private, parochial, and charter schools, although with respect to the state data warehouses about which Rep. Kline noted his concerns, it seems unlikely that data on non-public school students would be collected as regularly as would data on public school students. To the extent that state educational databases are maintained by state government agencies or similar authorities, rather than institutions themselves, FERPA's rules simply do not apply.

Although they give no specific attention to student records or education information, other federal laws constrain data collection from individuals, particularly children. The most general of these is the Privacy Act (5 U.S.C. §552a), which stipulates several prerequisites and conditions that must be met before personally identifiable information can be collected from any U.S. citizen. The Privacy Act reflects the fair information practices published in 1973 by the U.S. Department of Health, Education, and Welfare, notably including transparency (that is, databases should not be secret), notice of intended use, and prevention of additional uses without consent. The Fordham study suggests that many states fail to provide transparency about the data they collect and maintain, and that they impose few restrictions on purposes for use of their data, including new or additional uses distinct from the purposes for which the data was originally collected.

A much more narrowly defined set of privacy practices stems from the Children's Online Privacy Protection Act (COPPA), which lays out a number of requirements for online entities that collect personal information from children under age 13. COPPA applies to all personal information, but focuses only on data collected online from individuals, so it does not cover transfers of data between institutions, even for children under 13. The law also says nothing about data collection from minors older than 13. Despite the general lack of direct relevance to the state educational database situation, privacy advocacy organizations such as the Electronic Privacy Information Center (EPIC) have cited the Fordham study as an example of practices that violate the spirit, if not the letter, of COPPA by ignoring, in less narrowly defined contexts, the sort of privacy protections codified into the law.

The failure of most current federal legal requirements to apply to state or local government authorities is one possible explanation for the apparently common practice at the state level of ignoring well-established privacy principles that are codified into laws constraining the behavior of educational institutions like schools and school districts, and of federal agencies. One possible resolution for this problem would be to extend student record privacy protections to apply not only to institutions collecting and storing information on their students, but also to public and private sector entities that receive, aggregate, or make available student records or the data contained in them.

Thursday, April 15, 2010

FTC, Congress trying to make commercial privacy more manageable

In two separate developments this week we see efforts from both the executive and legislative branches intended to make it easier for financial institutions to comply with regulations on privacy practices required under the Gramm-Leach-Bliley Act (GLBA). For its part, the Federal Trade Commission (FTC) today announced the availability of an online form builder to help financial institutions draft the privacy notices that the institutions are required to provide to their customers. By following a simple (one-page) set of instructions, users seeking to create privacy notices are directed to one of four PDF templates, each of which is two pages long and has a set of highlighted areas where institutions can insert their own content to explain what they do with customer personal information. The four versions of the template correspond to the possible combinations of two attributes: whether or not an opt-out provision exists and whether or not affiliate marketing is included. More detailed guidance on content required to be included in each section of the form was published in the Federal Register on December 1, 2009. The provision to create and make available such an optional model form was included in GLBA (15 U.S.C. §6803(e)).
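The selection logic behind a form builder like this is simple enough to sketch in a few lines of Python. To be clear, the template file names below are invented for illustration, not the FTC's actual artifacts; the point is just that two yes/no attributes yield exactly four templates.

```python
# Hypothetical sketch of template selection driven by two binary attributes.
# Template file names are invented; only the 2 x 2 structure mirrors the
# FTC form builder described above.

def select_template(has_opt_out: bool, includes_affiliate_marketing: bool) -> str:
    templates = {
        (True, True): "notice_optout_affiliate.pdf",
        (True, False): "notice_optout.pdf",
        (False, True): "notice_affiliate.pdf",
        (False, False): "notice_basic.pdf",
    }
    return templates[(has_opt_out, includes_affiliate_marketing)]

# An institution that offers an opt-out but does no affiliate marketing:
print(select_template(has_opt_out=True, includes_affiliate_marketing=False))
```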

On the legislative side, yesterday the House of Representatives passed the Eliminate Privacy Notice Confusion Act (H.R. 3506), which if enacted into law would amend GLBA (15 U.S.C. §6803) to add an exception to the current requirement that privacy notices be provided annually to customers: no new notice would be required if the institution's information disclosure policies and practices haven't changed since the previous notice was provided, and if the provisions under which the institution discloses non-public personal information fall entirely within the statutory exceptions to prohibited disclosure already in the statute. The bill is notable in another respect tangential to its content: it was passed as a "stand-alone" bill for a single purpose, rather than as part of some larger, more complex piece of legislation. It is the second such bill sponsored by Rep. Erik Paulsen, a freshman Republican Congressman from Minnesota.

Supreme Court takes on expectations of privacy for personal text messages

As noted in a post here about a week and a half ago, the Supreme Court will hear arguments on April 19 in City of Ontario v. Quon, an appeal by the city of a 9th Circuit Court decision, issued when the case was known as Quon v. Arch Wireless, that went in Quon's favor by finding the city had violated his 4th Amendment rights when it examined the contents of personal text messages Quon had sent using a city-issued pager. There is a lot of attention focused on this case and the possible implications a ruling either way might have for employee expectations of privacy in the workplace, or outside the workplace when communicating with employer-owned devices. Public sector organizations in particular are concerned that if the 9th Circuit decision is affirmed, these organizations would be severely constrained in their ability to monitor electronic communications among law enforcement personnel, between teachers and students, or among employees in general.

Given the Court's past tendency to avoid establishing sweeping precedents extrapolating from the specific circumstances of a case before it, our expectation is for a ruling more narrowly focused on the atypical facts in this case. The Ontario police department (Quon's employer) had an explicit policy in place that clearly gave employees no expectation of privacy when using city-issued computers or resources, although to be fair there was no specific policy that addressed text messaging uses of pagers. Nonetheless, the official policy is not a point of contention; what is relevant is that Quon's immediate supervisor made a separate arrangement with Quon and some of his fellow officers that was in conflict with the city policy. It's not at all clear that Quon would have prevailed in his appeal had the conflict between formal and informal policies not been involved. While other courts have found that even unambiguous employer policies may not be able to override employee expectations of privacy for some types of content (such as communications between an employee and an attorney), there doesn't seem to be anything in the nature of the personal text messages in the Quon case that would demand special protection.

In advance of Monday's argument, the plaintiff's reply brief has yielded some observations by legal experts on the line of reasoning the city will use to plead its case. Orin Kerr noted his surprise at the attention focused on the Stored Communications Act (SCA) and Quon's earlier (successful) argument that the provisions of the SCA create a reasonable expectation of privacy, which was violated by the city when it read his personal text messages. Presumably to challenge the reasonable expectation of privacy argument under the 4th Amendment, the city feels it needs to challenge the 9th Circuit's interpretation of the SCA as well. This is an interesting tactic given that the Supreme Court granted cert. only on the 4th Amendment appeal by the city, but denied cert. on Arch Wireless's appeal of the ruling that it violated the SCA (Arch Wireless is not a party to the case before the Supreme Court). The primary challenge for the city in overturning Quon is convincing the Court that Quon's expectation of privacy was not reasonable, and that appears to be a tall order given the facts of the case that aren't in dispute. However, an affirmation of Quon could hardly be construed as an unequivocal victory for employee expectations of privacy. Instead, such a ruling would highlight the critical importance of writing explicit policies covering acceptable uses of employer-owned resources and, if personal use is to be allowed, of avoiding vague or subjective terms like "limited" or "occasional" and instead being clear on exactly what will be permitted and under what terms.

Wednesday, April 14, 2010

Sharing data in EHR systems without consumer preferences may lead to information withholding

An unintended, if not wholly unexpected, consequence of the health care industry's heavily incentivized move towards use of electronic health records and sharing data contained in those records appears to be a greater likelihood that individuals will withhold information from their doctors or other medical providers to prevent the possibility of the information being shared with others. According to the results of a study released on Monday by the California HealthCare Foundation on consumers and health information technology, about one in six respondents said they would conceal information from their doctors if their medical data was to be stored in an EHR system that enabled health data sharing with entities beyond the provider, and another third would consider concealing information. The study also found that when individuals begin using personal health records (PHR), they become more involved in their own health care and know more about their own health. National adoption rates for PHRs remain quite low overall, but have shown tremendous growth in the past couple of years, a trend which is generally expected to continue in parallel with the anticipated rise in health care industry penetration of EHR systems.

The lingering general concerns over patient privacy are hardly surprising, particularly given how little attention has been paid to date to enabling individuals to assert more control over the use and dissemination of their personal health information. Interestingly, the CHCF study found that while PHR users are concerned about the privacy and security of their data stored in PHR systems, they are not so worried about privacy as to believe that privacy issues should stand in the way of evaluating and realizing some of the anticipated benefits of health IT. Privacy advocates continue to stress the importance of adding support for consumer preferences and, especially, soliciting and managing consent for disclosure of personal health information based either on the content of the data or on the use for which it is being disclosed. Too much discretion with respect to health record disclosure can have the same result as withholding information from doctors: when electronic health records are consulted for use in clinical care settings, the providers doing the treatment may be making decisions based on incomplete medical information. This situation subverts key health information sharing objectives such as improving the quality of care and reducing negative outcomes due to errors. Health information exchange providers, including government entities, will need to continue to balance health care and public health outcomes with individual expectations of privacy and support for privacy principles such as individual choice and disclosure limitations.

Key government security initiatives making slower than anticipated progress

The Government Accountability Office released two reports, completed in March and released publicly on Monday, that highlight slower-than-expected progress on key government-wide information security initiatives. The first report focuses on the Federal Desktop Core Configuration (FDCC) program, which mandates minimum security baselines for client computers in federal agencies running Windows operating systems (so far it covers XP and Vista). The program is managed by the National Institute of Standards and Technology, which publishes the secure configuration specifications (akin to DISA security technical implementation guides, but somewhat less robust and covering fewer types of systems) and produced the XML-based Security Content Automation Protocol (SCAP) to facilitate the process of validating secure configuration settings using automated scanning tools. FDCC compliance has been required for federal agencies since February 1, 2008, with agencies obligated to test and validate appropriate security settings for computers in their environments as part of the continuous monitoring required under FISMA. The GAO report notes that as of last September, none of the 24 major executive agencies had adopted the required FDCC settings on all applicable workstations, and 6 of the 24 had yet to acquire a tool capable of monitoring configurations using SCAP. In the aggregate, it appears that agencies do not consider deviations from FDCC configuration settings to be a source of significant security risk, as many agencies that reported such deviations also indicated they had no plans to mitigate them. For its part, GAO's recommendations for FDCC seem largely focused on improving agency education and support for implementing FDCC monitoring and achieving compliance. The report also included agency-specific recommendations for 22 of the 24 agencies.
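To give a flavor of what SCAP-style automated checking does, here is a much-simplified Python sketch that compares a machine's settings against a required baseline and reports deviations. The setting names and values are invented for this illustration; real FDCC checks are expressed in SCAP's XML formats and evaluated by validated scanning tools, not ad hoc scripts.

```python
# Simplified illustration of baseline compliance checking in the spirit of
# FDCC/SCAP scanning. Setting names and values are invented for this sketch.

REQUIRED_BASELINE = {
    "password_min_length": 12,
    "screen_saver_timeout_secs": 900,
    "firewall_enabled": True,
}

def check_compliance(actual_settings: dict) -> list:
    """Return a list of (setting, expected, actual) deviations from the baseline."""
    deviations = []
    for setting, expected in REQUIRED_BASELINE.items():
        actual = actual_settings.get(setting)
        if actual != expected:
            deviations.append((setting, expected, actual))
    return deviations

workstation = {
    "password_min_length": 8,          # weaker than the baseline requires
    "screen_saver_timeout_secs": 900,
    "firewall_enabled": True,
}

for setting, expected, actual in check_compliance(workstation):
    print(f"DEVIATION: {setting}: expected {expected!r}, found {actual!r}")
```

What GAO found, in effect, is that many agencies run the equivalent of this check, see the deviations, and then file them away without remediation plans.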

The second report addresses progress to date on the government's initiative to secure agency connections to the Internet under the Trusted Internet Connections (TIC) program, which was announced in late 2007. Under TIC, agencies were instructed to drastically reduce the number of connections between their networks and the Internet, with an overall program goal of reducing total government Internet connections to 100 or fewer. Agencies were required, as part of their IT infrastructure planning, to propose a target number of Internet connections (and the locations of those connections) to OMB by the end of June 2008. As an interim milestone, agencies were supposed to reduce the total number of existing Internet connections to approximately 1,500 by the end of the 2009 fiscal year, but GAO found that overall the government was running about 15% short of that goal. Of the 19 agencies reporting the status of their connection consolidation efforts, none had reached an 80-100% level of progress, and just about every agency still has far more Internet connections than it "should" based on the plans of action produced more than 18 months ago. While it's possible that the overall targets and timeline for the TIC initiative were overly ambitious or unrealistic, there remains a sense of urgency regarding TIC because of its relation to one of the key national critical infrastructure protection programs, the Department of Homeland Security's Einstein intrusion monitoring system, which itself is in a pilot phase with aggressive plans for wide-scale roll-out. The Einstein program predates TIC by several years, but comprehensive monitoring of government network traffic becomes practical only if the number of sites being monitored is reduced to a manageable number.

The vision for Einstein, as depicted in a graphic included in the GAO report, calls for placing intrusion detection sensors at every TIC-approved point of connection to the Internet, although in practice the number of monitoring sites may be smaller than TIC connection counts suggest if DHS succeeds in deploying the Einstein system within the environments of major network service providers to the government.

Monday, April 12, 2010

Mistakes in UK NHS organ donor consent data show importance of validating data from outside sources

As reported in the Daily Telegraph, the British National Health Service (NHS) is blaming an IT error for inaccurate records of the organ donation wishes of about 800,000 people. As a result of the incorrect consent information, organs were harvested from as many as 20 people who had not in fact consented prior to their death. The information about organ donation preferences was initially captured — as it often is in the U.S. — as part of the process of obtaining a driver's license. Data from the Driver and Vehicle Licensing Agency was transferred to NHS Blood and Transplant over 10 years ago. The errors were apparently introduced at the initial time of data transfer, but they weren't discovered until NHS contacted people it thought had consented to organ donation, as part of a routine written communication program with people in the NHS donor registry.

This incident shows the importance of verifying the accuracy and integrity of data integrated with, retrieved from, or transferred from external sources, especially given the need to instill trust among individuals whose data will potentially be made available to or used by government and private sector organizations under widespread use of electronic health records. This sort of potential for error arises in many data aggregation contexts, notably including U.S. credit reporting bureaus, but when medical conditions or patient preferences are involved, the potential consequences of acting on incorrect information are obviously more serious than a denial of credit.
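The general defense is unglamorous: validate externally sourced records before acting on them. Below is a minimal Python sketch of the kind of checks that could catch a bulk-transfer error like the DVLA-to-NHS one. The field names, consent codes, and manifest convention are all hypothetical, invented for illustration; the point is only that rejecting records with unexpected values and reconciling counts against the source's manifest surfaces corruption at load time rather than years later.

```python
# Hypothetical sketch of validating a bulk transfer of consent records from
# an external source before loading them. Field names, codes, and the
# manifest convention are invented for illustration.

VALID_CONSENT_CODES = {"ALL_ORGANS", "SOME_ORGANS", "NO_CONSENT"}

def validate_transfer(records: list, expected_count: int) -> list:
    """Return a list of problems found; an empty list means the batch passes."""
    problems = []
    if len(records) != expected_count:
        problems.append(
            f"count mismatch: source manifest says {expected_count}, got {len(records)}"
        )
    for i, rec in enumerate(records):
        if not rec.get("person_id"):
            problems.append(f"record {i}: missing person_id")
        if rec.get("consent_code") not in VALID_CONSENT_CODES:
            problems.append(f"record {i}: unknown consent code {rec.get('consent_code')!r}")
    return problems

batch = [
    {"person_id": "A1", "consent_code": "ALL_ORGANS"},
    {"person_id": "A2", "consent_code": "KIDNEYS_ONLY"},  # value garbled in transfer
]
for problem in validate_transfer(batch, expected_count=3):
    print("REJECTED:", problem)
```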

Because this situation revolves around a very specific form of consent, there are clear parallels to the problem of capturing, managing, and honoring consent in electronic health records and health information sharing for a variety of purposes (especially those for which consent is required under the law). Finding ways to manage consumer preferences in health care is an area the U.S. government has been working on since at least 2008, when consumer preferences were identified as a gap in the use cases prioritized for development by the American Health Information Community (AHIC). The Office of the National Coordinator (ONC) has since produced a Consumer Preferences Draft Requirements Document that is likely to serve as a key input should ONC move to add consumer preferences criteria to any of its adopted standards, potentially including meaningful use criteria.

The desire to be able to rely on data integrated through initiatives such as health information exchange (or, for that matter, the Information Sharing Environment (ISE) in a homeland security context), and the current inability to do so, remains a central issue for future intended uses of health IT such as clinical decision support. In the current state of the industry and the technology, there is no consistent mechanism for asserting integrity (to be fair, this problem exists for non-electronic records too), and integrity is only one of several issues that can be encountered with composite or virtual data sets. These other data challenges include incompleteness of data, errors due to omission (including those resulting from withholding consent to disclose), Byzantine failure, and the need to reconcile multiple or conflicting values for the same data element as represented in different sources about the same individual.
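That last challenge, reconciliation, is typically handled with an explicit precedence policy rather than left to chance. The Python sketch below, with invented sources and fields, shows one common approach, "most recently updated source wins," while flagging the conflict instead of silently discarding it; which policy is appropriate is itself a governance question, not a technical one.

```python
from datetime import date

# Hypothetical: the same data element about one person as reported by
# different sources, each tagged with when the source last updated it.
observations = [
    {"source": "hospital_a", "field": "blood_type", "value": "O+",
     "updated": date(2009, 6, 1)},
    {"source": "clinic_b", "field": "blood_type", "value": "A+",
     "updated": date(2010, 2, 15)},
]

def reconcile(obs: list):
    """Most-recently-updated source wins; conflicts are surfaced, not hidden."""
    latest = max(obs, key=lambda o: o["updated"])
    distinct_values = {o["value"] for o in obs}
    conflict = len(distinct_values) > 1
    return latest["value"], conflict

value, conflict = reconcile(observations)
print(f"blood_type = {value}" + ("  (CONFLICT across sources)" if conflict else ""))
```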

Saturday, April 10, 2010

Virginia enacts limited-scope medical information breach notification law

Last month, the Virginia General Assembly passed a new law, to take effect on January 1, 2011, that will implement new disclosure notification rules for breaches of medical information about Virginia residents. The new law appears intended to fill a fairly narrow perceived gap in information breach disclosure requirements already in place at both the state and federal level, including coverage for medical information specifically as well as personal information generally. The specific attention to medical information makes the new law complementary to several measures passed during the 2008 legislative session that strengthened existing statutes covering crimes involving fraud, adding protection against identity theft (§18.2-186.3) and requiring notification for breaches of personal information (§18.2-186.6). Many of the definitions and notification procedures included in the recently passed bill are the same as those found in the earlier code on breach of personal information notification, including the definitions for a "breach of the security of the system," what constitutes "notice," and the applicability of the notification rules even when breached information is encrypted, if the disclosure involves anyone who might have access to the encryption key.

One area where the medical information breach notification definitions differ from current statutory language is in what is considered an "entity" subject to the requirements in question. The rules for breach of personal information notification define an entity quite broadly as "corporations, business trusts, estates, partnerships, limited partnerships, limited liability partnerships, limited liability companies, associations, organizations, joint ventures, governments, governmental subdivisions, agencies, or instrumentalities or any other legal entity, whether for profit or not for profit." (§18.2-186.6(A)) In contrast, the medical information breach notification law says an entity is "any authority, board, bureau, commission, district or agency of the Commonwealth or of any political subdivision of the Commonwealth, including cities, towns and counties, municipal councils, governing bodies of counties, school boards and planning commissions; boards of visitors of public institutions of higher education; and other organizations, corporations, or agencies in the Commonwealth supported wholly or principally by public funds." (§32.1-127.1:05, emphasis added) This appears to limit the coverage of the new law to public sector organizations and to private organizations receiving significant public funding. It is not at all clear that a private sector company operating independently of Virginia funding would be subject to the new rules.

In terms of applicability in the health space, the Virginia law makes no attempt to preempt or augment federal health information breach disclosure requirements under HIPAA or HITECH. Instead, it specifically excludes from coverage any HIPAA-covered entities (defined under the law as health care plans, health care providers, or health care clearinghouses) and business associates of those entities (individuals or organizations that perform functions involving the use or disclosure of protected health information on behalf of a covered entity), and also excepts non-HIPAA-covered entities that are subject to the FTC's health data breach notification rules established under authority of the HITECH Act. Since we're talking about health data, presumably a large proportion of the wholly private organizations that seem to operate outside the coverage of the new Virginia law would already be subject to federal health data breach notification rules, but there appears to be a gap in coverage for third-party data stewards, handlers, or transmitters that do not process or transform the data, and therefore do not fall under the definition of health care clearinghouses. As state, regional, and national efforts to facilitate health information exchange continue to develop — whether to satisfy information sharing requirements under meaningful use or to facilitate local or wider scale interoperability among data sources — the Virginia General Assembly might want to consider how medical information breach notification rules can be extended to cover potential new market entrants in health information exchange.

Thursday, April 8, 2010

Public-private sector debate on health IT turns to whose security is weakest

Security concerns remain a major sticking point for electronic health records, health IT in general, and greater levels of health information exchange and interoperability among potential public and private sector participants in those exchanges. An article in the Wall Street Journal two weeks ago by Deborah Peel, privacy advocate and founder of the non-profit Patient Privacy Rights, argued that due to the lack of comprehensive privacy, individual consent, and information disclosure controls, medical records simply aren't secure. The opinion piece, which restates key themes that Peel has been expressing publicly for at least five years, served as fodder for a Fox News commentary suggesting Americans will be reluctant to put their medical data online not just because of the lack of consent and personal control over its disclosure, but because the government will have access to electronic health records. It's not entirely clear that this is a fair characterization of the position held by Peel and other privacy advocates, and it's certainly a more partisan take than what many insist is an issue that persists regardless of which party is in power.

The tone of the current debate raises another point of contention: whose security is really the most problematic when it comes to protecting health information online? The privacy debate is focused as much on maintaining confidentiality as on consent for disclosure or control of data dissemination, so the simple fact that the government's vision for electronic health records includes widespread interoperability and data exchange among health information systems logically produces an outcome where a given record is potentially accessible to many more parties, be they from government or industry. For all the conservative hand-wringing on this issue, there appears to be just as strong a concern among government agencies about data confidentiality and security measures, but the government's concerns are about private sector security practices.

Speaking at an AFCEA event on health IT in Maryland this week, Centers for Medicare and Medicaid Services (CMS) CIO Julie Boughn said that the security measures in place among some of the private sector organizations that seek to exchange data with CMS are so lacking in some cases as to be "almost embarrassing." CMS is in a position to notice such deficiencies going forward as well, as it will serve as the agency administering the meaningful use measures under which eligible health providers and professionals will seek to qualify for incentive funding to buy, implement, and use electronic health record technology. Boughn has long maintained that, due to the requirements they must follow under FISMA, federal agencies' information security is stronger than the equivalent security provisions under the HIPAA Security Rule or other security control standards applicable to private sector health care organizations. She echoed that position again this week, suggesting that organizations seeking to use nationwide health IT infrastructure and participate in health information exchange initiatives with government agencies will have to follow FISMA just as the agencies do.

It's hard to reconcile the idea of imposing FISMA on non-government organizations with the complete absence of any references to or incorporation of standards or processes from FISMA in either the Health Information Technology for Economic and Clinical Health (HITECH) Act or in the meaningful use measures, EHR certification criteria, and technical standards proposed to date. Instead, the focus for security and privacy in health IT has been strengthening and otherwise revising provisions in the Health Insurance Portability and Accountability Act (HIPAA), the federal regulations codified under which serve as the basis for the single meaningful use measure for security. The government leaders on health IT and the health care industry seem pretty closely aligned on using the provisions of both the HIPAA Security and Privacy Rules, so it will be interesting to see if more of the FISMA framework makes its way into the overall health IT picture.

Feds appear committed to cloud computing; potential cost savings outweigh security concerns

Federal Chief Information Officer Vivek Kundra stressed his belief that the federal government needs to get out of the business of building data centers and managing IT infrastructure, hardware, servers, and application software, and instead should embrace cloud computing and store agency data on servers hosted by companies that specialize in managing IT and securing hosted information. Kundra's remarks came at an event hosted by the Brookings Institution, held in part to highlight the release of Brookings' recently completed report, "Saving Money Through Cloud Computing," which suggests that agencies that have moved to adopt the cloud model have realized savings of 25 to 50 percent in IT operations costs. With overall federal IT spending in excess of $76 billion, and a quarter or more of that spent on hardware, software, and file servers, the potential savings across the government certainly seem significant. Another theme expressed at the Brookings event was the suggestion that fears about physical, network, and data security for cloud computing may be misplaced, given that not all federal agencies have stellar track records in protecting the systems and data they manage themselves, and that major cloud service providers have huge incentives to keep their government customers' data protected.
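To put rough numbers on that, here is a back-of-the-envelope calculation using only the figures cited above (the $76 billion total, the "quarter or more" infrastructure share, and the 25 to 50 percent savings range from the Brookings report); the cross-multiplication and the resulting range are our own illustration, not figures from the report:

```python
# Back-of-the-envelope estimate using the figures cited above; the
# multiplication below is illustrative, not a number from the Brookings report.
total_it_spend = 76e9                    # overall federal IT spending, in dollars
infrastructure_share = 0.25              # "a quarter or more" on hardware, software, servers
savings_low, savings_high = 0.25, 0.50   # reported range of cloud operations savings

addressable = total_it_spend * infrastructure_share
print(f"Addressable infrastructure spend: ${addressable / 1e9:.1f}B")
print(f"Potential annual savings: ${addressable * savings_low / 1e9:.2f}B "
      f"to ${addressable * savings_high / 1e9:.2f}B")
```

Even under these conservative assumptions, the potential savings work out to somewhere between roughly $4.75 billion and $9.5 billion per year.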

The cloud computing discussion seems to be ratcheting up a few notches this spring, as evidenced by the large crop of government and industry events focused on the topic. Later this month SYS-CON media will host a Cloud Computing Expo in New York City April 19-21, and the Interop show in Las Vegas at the end of the month will feature an Enterprise Cloud Summit on April 26. On the government-focused side, 1105 Government Information Group is hosting its 2010 Cloud Computing Conference and Exposition in Washington, DC May 3-4, and NIST will host a Cloud Summit on May 20 intended to accelerate the development of standards that will lead to initial specifications for federal cloud computing later in the summer.

Monday, April 5, 2010

Conspiracy theories alive and well on government's role in health IT

In an otherwise unremarkable meeting of the Health IT Standards Committee on March 24, Dr. David Blumenthal, HHS' National Coordinator for health IT, made public statements addressing and formally denying rumors that the Office of the National Coordinator's (ONC) plan to use the National Information Exchange Model (NIEM) was really intended to enable government monitoring and control of electronic health information. When Blumenthal's comments were first reported, it seemed like they were driven largely by fundamental misunderstandings of what NIEM is (and isn't), but given the wide circulation of these suspicions about underlying motives for health IT standards adoption, among the press, industry blogs, and advocacy groups, perhaps a few clarifications are in order.

The primary concerns seem to fall into two main areas, separate but related to each other. First, because ONC leads government efforts in electronic health records adoption and health information exchange, government agencies are certainly among the participants seeking greater access to health record data. This seems to have led to a presumption that one of the purposes of widespread government-sponsored health information exchange is to make medical data on individuals available to government agencies, and it's not just CMS we're talking about, but law enforcement and intelligence agencies like the Justice Department, the National Security Agency, and the CIA. Given the federal government's plan to implement monitoring and intrusion detection and prevention for all network traffic to and from government agencies through the Department of Homeland Security's Einstein program, it's not that hard to understand how some would make the leap to assume that the intelligence community will be looking at your health data. Second, the stated intention by ONC to document and publish health data exchange standards through NIEM -- an initiative originally started to facilitate information sharing in support of anti-terrorism activities by the Justice Department, DHS, and others -- seems to have led to an assumption that if health data exchange standards are managed through NIEM, information formatted using those standards will somehow be secretly captured by or routed to intelligence-gathering agencies.

There's no getting around the fact that once large quantities of health data are stored in electronic format and made available for access among organizations that have a legitimate need to use them, it will be easier for personal health information to end up in more places than it does now, with paper-based storage or stovepiped electronic medical record databases. Privacy advocates such as the Patient Privacy Rights Foundation have pressed ONC to adopt standards, rules, and procedures that would mandate individual consent before health record data is shared with entities beyond the organization that maintains the record itself; while these advocates have not openly challenged Blumenthal's assertion that data exchanged using ONC standards and services will not be shared with government agencies, they would prefer a legally binding requirement to a promise. This makes sense on its face, but it seems to imply that there is something new about the potential for disclosure-without-consent of health data to law enforcement or intelligence agencies, when such disclosures are explicitly allowed under the provisions of the Health Insurance Portability and Accountability Act (HIPAA, specifically 45 CFR §164.512(f) for law enforcement and §164.512(k)(2) for national security and intelligence activities). Such access under current law still requires appropriate and authorized use, so the only thing that would change under widespread adoption of electronic records is the ease of accessing the records online, rather than requesting them directly from the providers or other stewards holding the data now. These sorts of consent exceptions, including the core HIPAA purposes of treatment, payment, and health care operations, exist for both paper and electronic health records.

The second of these objections is the one that's really hard to fathom. NIEM is a collaborative initiative that produces domain-specific information exchange standards and makes the schemas and corresponding documentation available to anyone who wants to use them. The "M" in "NIEM" stands for model. It is not a system; it is not a record-keeping database; it does not store, transmit, or process any records or messages formatted according to its standards; and no one who uses NIEM standards to format their own data for exchange sends that data to NIEM. NIEM standards are distributed as files in .csv, Excel, Access, XML, and other formats, not as executable programs. Suggesting that NIEM is a "Trojan horse" that will surreptitiously send data to government agencies without users' knowledge demonstrates nothing so much as a basic misunderstanding of what NIEM does (or perhaps of what a data standard is).
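To make the distinction concrete, here is a minimal sketch of what "using NIEM" actually looks like in practice, under the assumption that an exchange specification has been downloaded as a plain XSD schema file; the file names below are hypothetical placeholders, not actual NIEM artifacts. The schema is inert data, the validation happens entirely on the user's own machine, and nothing is transmitted anywhere:

```python
# Minimal sketch: "using" a NIEM exchange specification amounts to local
# schema validation. File names below are hypothetical placeholders.
from lxml import etree

# A schema downloaded from the NIEM site -- an inert XSD file, not a program
schema = etree.XMLSchema(etree.parse("exchange-spec.xsd"))

# A message our own system produced, formatted per that specification
message = etree.parse("outbound-message.xml")

# Validation happens entirely on this machine; no data is sent to NIEM
# or to any government agency as part of this step.
if schema.validate(message):
    print("Message conforms to the exchange specification")
else:
    print(schema.error_log)
```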

None of the preceding discussion is intended to diminish the valid concerns over individual privacy protections and control of personal health information. There is a general (and perhaps justified) lack of trust between patients and providers, insurance companies, and state and federal health agencies, particularly as to whether any of these entities will take the necessary measures to protect personal health information shared among them. ONC has not been able to resolve this lack of trust or mitigate the concerns underlying it, not least because the current standards, processes, and services proposed for use by entities exchanging health information do not provide any mechanisms to ensure that use and disclosure of health data is always authorized and appropriate. Instead, legal agreements and frameworks have been put in place under which exchange participants agree in writing to access data only for permitted purposes, but in the absence of enforcement mechanisms, such agreements will do little to dispel the distrust many individuals feel toward the organizations holding their health data.

Sunday, April 4, 2010

Employee expectations of privacy in the workplace only improving in very specific contexts

With the current interest focused on revisiting the Electronic Communications Privacy Act (ECPA), including plans announced by members of both the House and Senate to initiate formal reviews of the 1986 law and the extent to which its provisions should be updated to reflect the modern state of communications technology, it seems like a good time to check on the state of privacy in the workplace. The baseline position is that companies have broad latitude when it comes to capturing and monitoring communication occurring in the workplace, especially when the communication uses company-owned or company-provided equipment and services. Assuming they follow the stipulations about electronic communications monitoring in ECPA, such as describing planned, potential, or actual monitoring activities and providing notification of them to employees, customers, partners, or others who will be subject to the monitoring, companies have the right to watch what's happening within their own environments. Many states require companies to obtain the consent of one or both (or all) parties to an electronic conversation before it can be monitored or recorded, but when it comes to employees, as long as notice of the monitoring activity is provided to employees as a standard operating practice, employees are assumed to have given consent by virtue of agreeing to work for the company. The standard by which non-employees can be considered to have given consent varies somewhat by jurisdiction and type of communication, but in general, if the intent to monitor is disclosed up front, continued participation by a party to the conversation is tantamount to consent. This is the primary driver behind the familiar recorded declaration, "This call may be monitored or recorded for quality assurance purposes." If you don't want your interaction with a company recorded, presumably you hang up and send an email or write a letter instead.

So the starting assumption for employees would seem to be: you have no reasonable expectation of privacy in the workplace. Some recent well-publicized court cases have suggested that this statement is too absolute, and that employees may in fact have some expectation of privacy for their personal communications, even when those communications take place using employer resources. Without trivializing these victories for personal privacy, the rulings address very specific sets of facts, and so may not be indicative of any significant retrenchment of employers' rights to monitor employee communications. The cases are also instructive to both companies and their employees in terms of what expectations of privacy are likely to be considered "reasonable," and they clearly spell out the need for companies to be very explicit in writing policies governing employee behavior, communications, use of company systems and services, and plans to monitor such behavior and enforce those policies. Perhaps the most remarkable implication of the cases recently argued, and of others cited as precedents within those decisions, is that the courts do not appear to hold individual employees accountable for having any knowledge of the functional or technical aspects of the electronic communication systems they use, whether that functionality is specific to their employer or a standard feature of widely used communications applications like email and text messaging.

In a case argued before the New Jersey Supreme Court in December and decided last week, employee Marina Stengart sued her former employer, Loving Care, for violating her right to privacy under attorney-client privilege when the company, using computer forensic analysis, recovered cached copies of emails between Stengart and her lawyer, who was helping Stengart in a lawsuit filed against Loving Care. The email communications used Stengart's personal, password-protected Yahoo! email account, which she accessed using her employer-issued laptop from within Loving Care's network environment. Stengart made no active attempt to store local copies of the emails; her intention seems to have been the opposite, and her low level of technical knowledge left her ignorant of the fact that web browsers routinely store copies of viewed web pages in a temporary cache on the computer running the browser. Because she didn't know about the temporary file cache, she made no effort to clear the cache before returning the laptop to her employer upon leaving the company. The company searched the computer it had issued her, looking specifically for information that could assist Loving Care in defending against Stengart's lawsuit. The core question in the case that made it to the NJ Supreme Court is: by using a company-issued computer to access her web-based personal email account, did Stengart waive her attorney-client privilege? The court said she did not, and remanded the case back to the trial court to determine an appropriate remedy, finding that the company, when it realized the emails were communications between Stengart and her lawyer, should have immediately notified her attorney and either returned or destroyed the emails, rather than examining their content. Essentially, the case only addresses employee expectations of privacy for personal emails exchanged with an attorney; it says little about the privacy of personal communications in general.
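To see how little "active" effort is required for such copies to exist, consider this rough sketch of the kind of scan a forensic examiner might start with: a naive substring search across a browser cache directory for traces of a webmail session. The cache path and search term are hypothetical examples, and a real examination would use far more thorough file-carving tools:

```python
# Rough illustration of why cached webmail pages are recoverable: even a
# naive substring scan of a browser cache directory can surface fragments.
# The cache path and search term are hypothetical; real forensic work
# uses far more thorough file-carving tools.
import os

cache_dir = os.path.expanduser("~/.cache/example-browser")  # hypothetical path
needle = b"mail.yahoo.com"                                  # trace of a webmail session

for root, _dirs, files in os.walk(cache_dir):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                if needle in f.read():
                    print("possible cached webmail content:", path)
        except OSError:
            continue  # skip files that can't be opened or read
```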

Another closely watched case is Quon v. Arch Wireless, the appeal of which the U.S. Supreme Court is scheduled to hear as City of Ontario v. Quon on April 19. In Quon, the key issue again is what right an employer has to monitor the content of personal communications (in this case, text messages sent with a pager rather than emails) made by employees using company-issued equipment. The 9th Circuit Court ruled in favor of the employee (Quon), finding that the city had violated Quon's 4th Amendment rights when it examined the content of his personal text messages. It also found that the pager service provider (Arch Wireless) had violated the Stored Communications Act by giving the contents of the text messages to the employer. There are some specific facts in the Quon case that may limit the scope to which the ruling applies, whichever way the decision goes, including the fact that while the messaging device Quon used was issued by his employer, none of the communications traffic flowed through communications systems or infrastructure owned by the employer, and that employees shared the usage cost for text messaging beyond a specified volume. The most directly relevant policy maintained by the employer also explicitly limited use of computers, email, and the Internet to official business, but the group with which Quon worked had a separately negotiated employee agreement under which employees could use the pagers for personal communication, although there is some contention as to whether Quon's use went beyond the limited amount considered acceptable under the agreement. Also, given the sexual nature of some of the content and some of the cases cited as informative by the panel in Stengart, Quon may face a bigger hurdle than Stengart in arguing his messages should have remained private, since their content seems to violate his employer's acceptable use policy. The employer in this case is a city police department, so the involvement of a government agency (even at a local level) also makes this case different from one involving a dispute between employees and a private employer. Among the issues the Court will consider is whether an employee can have a reasonable expectation of privacy for personal communications when no official privacy policy exists for the city-issued devices in question.

The ruling in Stengart is useful in a few areas beyond the narrow scope of the facts in this case (it's well worth reading the ruling itself; it's only about a dozen pages). Chief Justice Rabner, in describing the reasoning and legal precedents for the court's decision, cites a number of other cases that address secondary issues raised in Stengart, including the specificity required in company policies about personal use of company resources and monitoring of that use. Some of the cases cited involve (justifiable) company inspection of ostensibly private employee communications because of suspected criminal activity or violation of acceptable use policies, but neither of those situations applies to Stengart. Other cases also highlight the importance of addressing the extent to which the content associated with permitted Internet use will be monitored; while employees generally can claim no expectation of privacy when communicating using their employee email address and employer's email server or system, the same does not apply to email communication conducted outside the company environment using a personal, rather than company, email address. The court suggested that individual expectations of privacy, even when communicating with an attorney, are less justified when the employee uses a company email system for the communication. A 2006 state court decision from Massachusetts was cited not only as a precedent that the default browser behavior of storing local temporary copies of web-based emails viewed in the browser is not sufficient on its own to invalidate attorney-client privilege, but also to suggest that employee expectations of privacy, even when using a company-issued computer, are somewhat greater if the communication takes place from home or another non-company location, such as when personal email is sent or received using a company laptop connected to a home network and ISP. The court also specifically noted that no matter how specific Loving Care's policy might have been (in its actual form the court considered it ambiguous on how the company treated personal communications), no policy can override the compelling public policy interests served by maintaining the privilege attached to attorney-client communications. This is another reason it is hard to generalize the findings in Stengart to other personal communication contexts: presumably, similar findings in favor of individual privacy rights would only be made where the subject matter of the communication was an explicitly legally protected type of content.

As Stengart aptly illustrates, not all such cases raise 4th Amendment issues, although many court cases and criminal investigations show how the existence of probable cause in an investigation can and will override individual privacy protections, irrespective of company policies or legal requirements governing the treatment of certain types of personal information. There is of course a presumption in such 4th Amendment matters that the parties conducting the investigation are acting appropriately in seeking to search for information and are in fact pursuing legitimate lines of investigation. A recent decision by the 11th Circuit Court illustrates one of the more egregious violations of this presumption: an individual acting as a whistleblower on his employer was subjected to a search of his personal email by a local prosecutor who allegedly conspired with the employer, obtained a subpoena for the individual's email records under false pretenses, and then used that information to falsify evidence in order to charge the whistleblower with burglary and assault, neither of which actually occurred. Despite the fact that the prosecutor's actions are not in dispute, the 11th Circuit Court ruled that the individual's 4th Amendment rights protecting against unreasonable search and seizure had not been violated. Last week the Electronic Frontier Foundation joined counsel for the individual in asking the 11th Circuit panel to review several aspects of its ruling, which the EFF asserts did not follow the law.

While we can't offer the sort of expert legal analysis on any of these cases that you might find from privacy lawyers like Hunton & Williams, there are some practical implications for both employers and employees that come out of the Stengart ruling. Following the logic the justices used in Stengart, employers should:
  • Have explicit policies in place about whether personal use of company resources is permitted at all, and if it is, what limitations (if any) there are on such use
  • Also spell out, in explicit terms, the rights the employer asserts over the use of, and the data stored within, its computing and communications assets, network environment, and employer-provided services
  • If the employer does or intends to monitor employee communications, say so, and include in the scope of the statement all forms of media and types of communication that are subject to monitoring
  • Include statements about whether the contents of such communication will be examined and under what circumstances, recognizing that there may be certain types of content (attorney communications, including with internal counsel; health records; information about employees' children, etc.) that may be legally protected in ways that trump the employer's rights or desire to inspect the content
  • If, in the course of following the stated policy, content is identified that falls into one of the categories of information protected by state or federal privacy laws, stop reading and don't proceed further until checking with legal
  • If there are valid reasons to prevent employee use of personal email from work (such as data loss prevention), implement measures to block access to web-based email (a rough sketch of one such measure follows below, after this list)
  • Understand that the assertion of ownership or rights to monitor employee information does not apply the same way to communications conducted through third-party service providers, whether or not the employer pays for those third-party services
  • Make sure that the policies and procedures put in place comply with all relevant legal requirements, with special attention on regulations covering monitoring and interception of communications and rights to access stored content such as messages, call logs, or transaction records
The list above is far from exhaustive, but assuming a company wants to proactively minimize the reasonable expectation of individual privacy in the workplace, these practices would be constructive to that end. While all employers must balance employee productivity, convenience, and trust with restrictions on employee behavior in the furtherance of their business interests, it appears that employers can establish the clearest legal standing by completely prohibiting personal communication using company systems and resources.
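As one rough illustration of the web-based email blocking item in the list above, the sketch below generates hosts-file entries that keep a handful of webmail domains from resolving on a workstation. The domain list is a small hypothetical sample, and a real deployment would more likely rely on a filtering proxy or DNS policy pushed through configuration management:

```python
# Crude illustration of blocking web-based email: emit hosts-file entries
# that sink well-known webmail domains. The list is a hypothetical sample;
# real deployments would typically use a filtering proxy or DNS policy.
WEBMAIL_DOMAINS = [
    "mail.yahoo.com",
    "mail.google.com",
    "hotmail.com",
    "mail.aol.com",
]

def hosts_blocklist(domains):
    """Return hosts-file lines that prevent these names from resolving."""
    return "\n".join(f"0.0.0.0 {domain}" for domain in domains)

if __name__ == "__main__":
    # An administrator would append this output to each workstation's hosts
    # file, or distribute it via configuration management tooling.
    print(hosts_blocklist(WEBMAIL_DOMAINS))
```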

For their part, individual employees can also take steps to help ensure their personal communications remain private, and to minimize the chance of inadvertent personal information disclosure such as what happened to Stengart. These include:
  • Read, understand, and follow your employer's policies on personal use of employer-provided communications equipment and services, including definitions of acceptable use
  • For any communications deemed sensitive by the employee, try not to use employer-provided resources to conduct those communications, even if policy says that you can
  • If you do use your employer-provided devices for personal communications, try to conduct them when offsite, such as using your home ISP to connect devices to the Internet, so your communications traffic doesn't flow through your employer's services and network environment
  • Don't use employer-provided email for personal communications
  • Learn enough about the tools you use (web browsers, email clients, messaging services) to understand whether local copies are being made of your activities "out in the Internet," and if so, learn how to prevent local storage (such as using "private browsing" features) or remove the copies afterward (understanding that merely deleting a temporary file cache may not prevent the later retrieval of the cache's content by a forensic analyst)
  • If you leave an employer, before you return your laptop, pager, PDA, smartphone, or other device, remove all personal data from the device and "wipe clean" the storage media, using a tool like Eraser on a computer or the equivalent comprehensive data destruction features available on many handheld devices (a rough overwrite-and-delete sketch follows this list)
  • The courts seem to believe that employee ignorance is no reason to diminish expectations of privacy, but this benefit only applies reliably for specially protected types of information, so don't rely on ignorance — instead be well informed and aware of the security and privacy implications of your environment
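For the cache-removal and device-wiping items above, the following is a best-effort sketch of a single-pass overwrite-and-delete over a cache directory. The cache path is a hypothetical example, and the same caveat applies as in the list: on journaling filesystems and flash media a simple overwrite is not a forensic guarantee, which is why purpose-built tools like Eraser exist:

```python
# Best-effort, single-pass overwrite-and-delete of a cache directory.
# Caveat: on journaling filesystems and flash media this is NOT a forensic
# guarantee; purpose-built tools such as Eraser are more thorough.
import os

def scrub_cache(cache_dir):
    for root, _dirs, files in os.walk(cache_dir, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            try:
                size = os.path.getsize(path)
                with open(path, "r+b") as f:
                    f.write(b"\x00" * size)  # zero-fill the file contents
                    f.flush()
                    os.fsync(f.fileno())     # push the overwrite to disk
                os.remove(path)
            except OSError:
                continue  # skip files we can't modify

if __name__ == "__main__":
    scrub_cache(os.path.expanduser("~/.cache/example-browser"))  # hypothetical path
```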

Thursday, April 1, 2010

NSA loses a round on warrantless wiretapping in federal court

A federal district court judge in San Francisco who has presided over several cases against the National Security Agency (NSA) and its now-defunct warrantless wiretapping program appears to have finally been presented with sufficient direct evidence of such wiretapping to rule in favor of the plaintiffs, and to hold the government liable for damages. The facts of this case were markedly different from those in previous, unsuccessful suits against the NSA, in that this time the plaintiffs were able to provide evidence that they specifically had been subjected to eavesdropping on their communications in a way that should have required a warrant, although none was obtained. Judge Vaughn Walker limited his ruling to the wiretaps used against the plaintiffs, and did not address the legality (or lack thereof) of the NSA's surveillance program overall. From a legal standpoint, the case may be more notable for Walker's refusal to accept the government's assertion that the case should be dismissed, without considering the merits of the plaintiffs' claims, in order to protect state secrets from potential disclosure in court. The state-secrets defense is a claim of executive power asserted aggressively by the Bush administration and again put forward by the current Justice Department legal team. The government might have had an early indication of its poor chance of succeeding with a state-secrets argument, since the suit had previously survived such a challenge after the plaintiffs received an inadvertently disclosed confidential call record from the NSA that revealed the government's eavesdropping on their communications. While the ruling probably represents a small victory for those who continue to argue that the NSA should be called to account for its activities, the specific details in this case are troubling on their own, inasmuch as they demonstrate the government's apparent ability to withhold relevant direct evidence by hiding it under a shield of national security.