
Tuesday, June 29, 2010

National Strategy for Trusted Identities embodies claims-based identity management

Last week, the White House released a draft of its new National Strategy for Trusted Identities in Cyberspace (NSTIC), which is intended to create a so-called "identity ecosystem" in which individuals, organizations, and other entities rely on authoritative sources of their digital identities to enable trust in online interactions. The document was published through the Department of Homeland Security, one of many agencies and industry participants that collaborated on the Strategy, and includes many key elements that were called out as future action items in the administration's Cyberspace Policy Review, titled "Assuring a Trusted and Resilient Communications Infrastructure."

Trust in the NSTIC context extends only to the identity of the parties to an online interaction. While confidence in the validity of an asserted identity may help the parties make decisions about whether to engage in the interaction in question, the identity ecosystem envisioned in the Strategy provides an insufficient basis for establishing the trustworthiness of the entities themselves, although it does allow different participants to establish the different sets of attributes about individuals that will be required to make authentication and authorization decisions.

With this in mind, it's important not to confuse trust in an entity's identity with trust in the entity itself. To engender trust in the entities, identity verification is necessary, but what is also needed is a clear explanation of the criteria that underlie the issuance of any credential presented to validate the identity, understanding that such criteria can and likely should vary depending on the context of the interaction. In the same vein, one of the things the Strategy makes clear is how important it is to separate the concepts (often spoken about in the same breath) of identification, authentication, and authorization. In general, an identity credential provider performs identity proofing (such as checking ID or other documentation if the identity proofing happens in person) and binds an individual identity to a digital representation, such as a certificate or other form of token, but often does not provide any information about what permissions the individual should have. These authorization decisions are entirely separate from identification and authentication, although identification and authentication are often prerequisites for granting authorization. This means that when considering authorization, an individual or entity evaluating the credentials presented should understand whether the issuance of those credentials took into account anything that informs the authorization decision. In the identity ecosystem as described, such consideration involves both the identity provider that establishes the digital identity, and the attribute provider that maintains and asserts characteristics or information associated with the identity.
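To keep these concepts distinct, it can help to see them as separate checks in code. The following minimal sketch is our own illustration, with entirely hypothetical names and data (nothing here is defined in the Strategy); it shows authentication of a credential and authorization of an action as independent steps:

    # Hypothetical sketch: authentication and authorization as separate steps.
    # An identity provider binds "alice" to a credential; permissions are the
    # relying party's own business, decided apart from the credential itself.
    CREDENTIALS = {"alice": "token-abc123"}
    PERMISSIONS = {"alice": {"read_records"}}

    def authenticate(user: str, token: str) -> bool:
        # Verify the presented credential matches the bound identity.
        return CREDENTIALS.get(user) == token

    def authorize(user: str, action: str) -> bool:
        # Authentication alone grants nothing; permissions are checked separately.
        return action in PERMISSIONS.get(user, set())

    user, token = "alice", "token-abc123"
    if authenticate(user, token) and authorize(user, "write_records"):
        print("allowed")
    else:
        print("denied")  # authenticated, but not authorized to write

Here "alice" authenticates successfully but is still denied, underscoring that a valid identity credential says nothing by itself about what its holder may do.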

This idea of entities requiring differing amounts of information (attributes) about each other depending on the context is one of several fundamental characteristics of claims-based identity management, a topic we've weighed in on before. The draft Strategy document embodies many of the principles of claims-based identity management, most importantly the user-centric focus of the approach: "The Identity Ecosystem protects anonymous parties by keeping their identity a secret and sharing only the information necessary to complete the transaction. For example, the Identity Ecosystem allows an individual to provide age without releasing birth date, name, address, or other identifying data. At the other end of the spectrum, the Identity Ecosystem supports transactions that require high assurance of a participant’s identity."
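To make the age-without-birth-date example concrete, here is a minimal sketch, in Python, of an attribute provider issuing a signed claim set containing only the attribute a relying party asks for. The provider key, claim names, and token format are all hypothetical illustrations of the pattern, not anything specified in the draft Strategy:

    import hmac, hashlib, json
    from datetime import date

    PROVIDER_KEY = b"hypothetical-signing-key"  # stands in for the attribute provider's key

    def issue_claims(birth_date: date, requested: list) -> dict:
        # The provider derives and signs only the claims the relying party
        # requested; the birth date itself never leaves the provider.
        available = {"over_21": (date.today() - birth_date).days >= 21 * 365.25}  # rough age check
        claims = {name: available[name] for name in requested if name in available}
        payload = json.dumps(claims, sort_keys=True).encode()
        return {"claims": claims,
                "sig": hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()}

    def verify_claims(token: dict) -> bool:
        # The relying party checks integrity of the claims without learning more.
        payload = json.dumps(token["claims"], sort_keys=True).encode()
        expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["sig"])

    token = issue_claims(date(1985, 6, 1), ["over_21"])
    print(token["claims"], verify_claims(token))  # {'over_21': True} True

A real identity ecosystem would use public-key signatures rather than a shared key so that anyone can verify a claim, but the data-minimization point is the same: the relying party learns that the individual is over 21, and nothing else.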

As a simple real-world example, when an individual presents a driver's license to a TSA agent at an airport security checkpoint, assuming the license itself is authentic, the agent can infer very little aside from the name on the license and that, at some time between the date it was issued and now, the person resided in the state that issued the license and was at that time a U.S. citizen or legal resident alien. This information is insufficient to determine with any real confidence whether the bearer of the license is a good person or a bad one, whether their intentions are benign or malicious, or generally whether the person is trustworthy. Context is important here too — validating an individual's identity in this manner is sufficient for the TSA's purposes, but would be wholly insufficient for, say, a bank to decide whether to give the person a car loan. Instead, performing a credit check in addition to verifying identity gives the loan officer more information about the financial standing of the individual, which is what the bank is most concerned about, but even with this additional information, it would be a mistake to say the individual has been shown to be trustworthy in any context other than the immediate one. The bank officer might now understand that the individual should have the resources to repay the loan, and have some confidence in his or her likelihood to honor commitments to repay debts, but the information presented cannot be used to assert trustworthiness of the individual in the sense of saying he or she won't take the new car and use it as the getaway vehicle in a robbery later that day (even of the same bank!).

It is good to see that the Strategy acknowledges the importance of accrediting identity providers, attribute providers, and relying parties, to give parties transacting with them some degree of confidence in the identity and authenticity of those entities. However, the explanations of the functions of the governing authority and accrediting authority in the Governance Layer section provide too little detail about the criteria that will be used to accredit entities for particular types of transactions or interactions. With a long history of data breaches resulting from authorized access incorrectly granted to entities or from unauthorized actions of entity employees (ChoicePoint, LexisNexis, etc.), it is essential that the accreditation process be sufficiently robust to guard against entities misrepresenting themselves in order to receive accreditation, and that accreditation criteria include validation (not self-assertion) of appropriate security and privacy practices. It is only with sufficient rigor supporting accreditation of identity and attribute providers that individuals and relying parties will be able to make some determination of the trustworthiness of entities with which they interact online.

Saturday, June 26, 2010

ONC focus on NHIN governance presents opportunity to establish acceptable security and privacy standards

According to an article yesterday from Government Health IT, the Office of the National Coordinator is getting ready to address a more complete set of rules of behavior and other requirements for participation in the Nationwide Health Information Network (NHIN), and to establish the governance processes and capabilities to manage, monitor, and enforce them. While current participation in the NHIN Exchange is limited to federal agencies and federal contractors or grant recipients, the long-term vision for participation includes a wide range of state, regional, and federal government entities, commercial health care enterprises, and potentially researchers and other relevant organizations. For NHIN Exchange, a NHIN Coordinating Committee serves in an oversight capacity, with representation from ONC as well as each active participant, but as the number of participants grows from its current single-digit level and the focus shifts from building the NHIN to managing an operational infrastructure, a different sort of model is likely to be needed. Formal governance procedures, not to mention a governing body (ONC personnel and documentation typically use the term “NHIN governing authority”) with fully specified roles and responsibilities, are needed initially to facilitate the participation of entities that aren’t necessarily bound by the legal requirements that apply to current participants, to evaluate whether applicants to participate should be able to do so, and to oversee the monitoring of the NHIN that is implied in the Data Use and Reciprocal Support Agreement (DURSA) and other participant agreements. A key topic area that governance rules must address is the set of security and privacy provisions NHIN participants must be able to support, including obvious security needs like secure communication, entity authentication and authorization, and audits, but also likely including practices like consent management.

The text of the DURSA provides several examples of expectations or obligations of its signatories that could be translated into an evaluation framework or standard set of criteria by which the security and privacy capabilities of prospective participants could be assessed. Under the DURSA, entities participating in the NHIN are responsible for maintaining the security of their own environments and for applying appropriate safeguards to protect the information the entity sends or receives using the NHIN. The point of reference for "appropriate safeguards" is the HIPAA Security Rule (45 CFR §160 and §164), which seems a sensible source for requirements, at least for participants who are HIPAA-covered entities or business associates. The challenge may be in producing a default, standard, or minimally acceptable specification of just what safeguards are appropriate, and in trying to apply such a standard uniformly to all organizations. The only category of security controls called out explicitly in the DURSA is protection against malware, a requirement which, much like the general security provisions, is focused on protecting the message contents that will be transmitted using the NHIN and on protecting entities against the possible introduction of security threats into their environments via the ostensibly trusted channel that the NHIN provides. A thornier problem may be the NHIN's approach (viewed through the DURSA) to handling access controls for end users — NHIN Exchange uses digital certificates for entity authentication and authorization (and other purposes), but the certificates are bound to the organizational identity, not to any individual users that might access NHIN-connected systems and initiate queries, requests, or data exchanges. Specifically, section 7 of the DURSA on System Access Policies stipulates, in its entirety, that:
Each Participant shall have policies and procedures in place that govern its Participant Users’ ability to access information on or through the Participant’s System and through the NHIN ("Participant Access Policies"). Each Participant acknowledges that Participant Access Policies will differ among them as a result of differing applicable law and business practices. Each Participant shall be responsible for determining whether and how to respond to a Message based on the application of its Participant Access Policies to the information contained in the assertions that accompany the Message as required by the NHIN Performance and Service Specifications. The Participants agree that each Participant shall comply with the applicable law, this agreement, and the NHIN Performance and Service Specifications in responding to messages.
One way to interpret this language is that participating entities are required to have access control policies and controls for their own systems, but that a given entity shouldn't expect the policies and controls of another participant to be comparable to its own. Nevertheless, when receiving a message via the NHIN, participants are expected to use their own access control policies to determine whether and how to respond — this despite the fact that it seems highly unlikely that any identifying information about a requester (other than his or her entity affiliation) can be of much use in making authorization decisions, since there is no reason to expect users of other participants' systems to be known to the entity that receives the request or other message. The assumption seems to be that authorization of participating entity users is implied by their employment or other relationship to the entity whose certificate validates the assertions in the messages sent via the NHIN. However, for a given participant to have the confidence it needs to accept the validity of the individual initiating a request, it would seem enormously helpful to have some idea both of what access control policies are applied and enforced by the requesting entity, and of how those controls and other security and privacy measures were evaluated by someone in authority with the NHIN in order to decide that the controls were adequate for the intended uses of the NHIN.
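As a concrete, and entirely hypothetical, illustration of that flow, the sketch below shows a receiving participant applying only its own access policy to the assertions that accompany an inbound message. The field names, policy rules, and message shape are our own assumptions, not taken from the NHIN Performance and Service Specifications:

    # Hypothetical sketch: a participant evaluates an inbound NHIN-style
    # message using only its own access policy plus the assertions that
    # accompany the message. Names and rules are illustrative only.
    LOCAL_POLICY = {
        # purpose-of-use values this participant will honor, per message type
        "document_query": {"TREATMENT", "PUBLIC_HEALTH"},
    }

    def respond_to_message(message: dict) -> str:
        assertions = message["assertions"]  # vouched for by the sender's entity certificate
        # The entity is authenticated via its organizational certificate, but the
        # individual user is known only through the sender's own assertions.
        if not assertions.get("entity_certificate_valid"):
            return "reject: entity not authenticated"
        allowed = LOCAL_POLICY.get(message["type"], set())
        if assertions.get("purpose_of_use") not in allowed:
            return "reject: purpose of use not permitted by local policy"
        return "respond: apply local policy to select releasable data"

    msg = {"type": "document_query",
           "assertions": {"entity_certificate_valid": True,
                          "purpose_of_use": "TREATMENT",
                          "user_role": "physician"}}
    print(respond_to_message(msg))

Note what the sketch cannot do: it has no way to check how the sending entity authenticated its own user, which is exactly the gap described above.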

ONC will engage the public and all interested stakeholders in the process of developing NHIN governance rules and capabilities, beginning with a request for information to be issued later this summer, and then through a comment period on a draft rule to be published early next year. From a practical standpoint, some of the areas that will need to be addressed in any governance framework are functions and processes already in place for NHIN Exchange, but for which formal criteria or standards have not yet been developed. For example, part of the "on-boarding" process for a new applicant is to apply for a digital certificate (this actually occurs twice, as a temporary cert is issued to be used for validation and testing, followed by a production version), something that is not supposed to happen until the prospective participant’s application has been received and the participant has been approved for membership in NHIN Exchange. The decision to approve a prospective NHIN participant is a core governance function, but to date this process has been handled on a case-by-case basis, so to scale to a production-capable process, formal governance rules and standards are certainly needed, not to mention decision criteria. There are a number of functional areas ONC is working to support, but most of these also presume the existence of some sort of governing authority. ONC went so far as to issue a request for proposals in late January to award a contract for NHIN Operational and Infrastructure Support, with a variety of tasking that either presumes or directly depends on the existence of a NHIN governing authority. These tasks included administering and operating the technical infrastructure supporting the NHIN (“infrastructure” in this context means the certificate authority, directories, and network infrastructure), implementing a support center to provide assistance to participating entities throughout the process of joining the NHIN and of participating once they are on board, and creating and maintaining the on-boarding process itself.

To move forward with a larger-scale NHIN that still leverages some of the core features of NHIN Exchange, it is essential for the governance processes and criteria associated with the NHIN (and with ONC, if ONC will own the governance function in the future) to be robust and transparent enough to give entities the confidence they need to participate. With the central governance model and single multi-party legal agreement used to date with the NHIN, participants theoretically have no need to trust each other, as long as they have confidence in the central authority that approves applicants for participation, and in the criteria used to make those approval decisions. This means that the key relationship for each NHIN participant is with the NHIN governing authority, since the NHIN asks participants to set aside their own judgment about other participants, and substitute the NHIN’s judgment instead. Even with a robust governance function in place, this task is likely to prove very challenging, but without effective governance in place, it’s not even feasible.

Friday, June 25, 2010

Agencies receive new guidance, privacy requirements on use of third-party websites

The Office of Management and Budget (OMB) today released a new memo to all heads of executive departments and agencies, "Guidance for Agency Use of Third-Party Websites and Applications," that lays out a set of general principles for the use of such non-agency sites and resources, and specifically sets new requirements for privacy with respect to these external sites. The memo acknowledges the potential value of social media, interactive online tools, and, by implication, Web 2.0 technologies in general, all of which support the spirit of "transparency, public participation, and collaboration" embodied in the administration's Open Government Directive.

The new memo applies to all federal agencies and their government or contractor use of third-party websites or applications to engage with the public. The general message is that agencies may use third-party sites and applications, but when they do so, they must comply with the new privacy requirements in the memo as well as any existing requirements. General guidance is offered in five areas:
  1. Third-Party Privacy Policies. Before an agency uses any third-party website or application to engage with the public, the agency should examine the third party’s privacy policy to evaluate the risks and determine whether the website or application is appropriate for the agency’s use. In addition, the agency should monitor any changes to the third party’s privacy policy and periodically reassess the risks.
  2. External Links. If an agency posts a link that leads to a third-party website or any other location that is not part of an official government domain, the agency should provide an alert to the visitor, such as a statement adjacent to the link or a “pop-up,” explaining that visitors are being directed to a non-government website that may have different privacy policies from those of the agency’s official website.
  3. Embedded Applications. If an agency incorporates or embeds a third-party application on its website or any other official government domain, the agency should take the necessary steps to disclose the third party’s involvement and describe the agency’s activities in its Privacy Policy.
  4. Agency Branding. In general, when an agency uses a third-party website or application that is not part of an official government domain, the agency should apply appropriate branding to distinguish the agency’s activities from those of non-government actors. For example, to the extent practicable, an agency should add its seal or emblem to its profile page on a social media website to indicate that it is an official agency presence.
  5. Information Collection. If information is collected through an agency’s use of a third-party website or application, the agency should collect only the information “necessary for the proper performance of agency functions and which has practical utility” (a requirement drawn from OMB Circular A-130). If personally identifiable information (PII) is collected, the agency should collect only the minimum necessary to accomplish a purpose required by statute, regulation, or executive order.
From a privacy perspective, the June 25 memo reminds agencies of their continuing obligations under the Privacy Act, and updates previous guidance issued to agencies on federal website privacy policies and on implementing the privacy provisions (largely in Title II, but including some portions of FISMA too) of the E-Government Act of 2002. Among the most significant new requirements are the need for agencies to perform an adapted Privacy Impact Assessment (PIA) for third-party websites; to update their privacy policies to make sure they provide information about the use of third-party sites and applications; and to post privacy notices on the third-party sites noting the agency's association with the site, while also clearly stating that the third-party site is not owned or controlled by the government.

Government first-movers looking to get a jump on continuous monitoring

With new federal agency FISMA reporting requirements taking effect in November, several agencies are taking steps now to get ahead of the requirements and anticipate some additional security metrics likely to be added in the near future. As reported by Federal News Radio, the Department of Veterans Affairs expects to have monitoring capabilities in place for all desktop computers by September 30, in addition to ongoing efforts to augment network, server, and systems monitoring capabilities. In a widely reported shift in policy and practice, NASA announced its intention to abandon conventional system re-authorization processes in favor of focusing on the new reporting requirements.  In addition, the Nuclear Regulatory Commission is evaluating its current tools and monitoring functions to try to determine how to meet the new monitoring requirements. As these and other agencies explore alternative methods and mechanisms for meeting new monitoring requirements, many look to the State Department's risk scorecard model, which draws data from vulnerability scans, configuration checks, and network management sensors to produce and frequently update an overall score for State's security posture.
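The scorecard pattern is straightforward to picture as computation: take findings from each data feed, weight them by how much risk they represent, and roll them up per host and per site so the score moves as soon as the feeds update. The sketch below is our own illustration of that pattern; the categories, weights, and scale are hypothetical and not the State Department's actual formula:

    # Hypothetical risk-scorecard roll-up in the style described above.
    # Findings come from vulnerability scans, configuration checks, and
    # network sensors; weights and severities are illustrative only.
    WEIGHTS = {"vulnerability": 6.0, "configuration": 3.0, "sensor": 1.0}

    findings = [  # (host, category, severity on a 1-10 scale)
        ("host-a", "vulnerability", 9),
        ("host-a", "configuration", 4),
        ("host-b", "sensor", 2),
    ]

    def host_scores(findings):
        scores = {}
        for host, category, severity in findings:
            scores[host] = scores.get(host, 0.0) + WEIGHTS[category] * severity
        return scores

    def site_score(findings):
        # Lower is better; frequent re-scoring is what makes monitoring "continuous".
        per_host = host_scores(findings)
        return sum(per_host.values()) / len(per_host)

    print(host_scores(findings))  # {'host-a': 66.0, 'host-b': 2.0}
    print(site_score(findings))   # 34.0

The interesting design choice in such a model is less the arithmetic than the weighting: scoring un-remediated vulnerabilities more heavily than stale configuration findings is what turns raw scan output into something managers can act on.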

Instructions in a memo sent on April 21 from OMB to all heads of executive departments and agencies gave notice of the new FISMA reporting approach, which in addition to requiring electronic submission of data feeds from agency FISMA tools to the government-wide Cyberscope online application, will also involve the establishment of government-wide benchmarks on security and agency-specific interviews with officials responsible for security management. Should the administration's Cybersecurity Coordinator be given budgetary approval authority over agency investments — as proposed in several pieces of security legislation introduced in Congress — these benchmarks may take center stage as agencies not only report on systems security, but also try to justify the effectiveness of their information security management programs. Continuous monitoring is among the many new provisions called for in the House of Representatives' proposed Federal Information Security Amendments (FISA) Act that were included via amendment in the defense authorization bill the House passed on May 28, and it is a core process in the revised Risk Management Framework and system certification and accreditation process detailed in NIST Special Publication 800-37 Rev. 1.

Tuesday, June 22, 2010

Clearer definitions and roles for intermediaries would facilitate policy decisions on directed exchange of health data

During the latest meeting of the Health IT Privacy and Security tiger team today, the bulk of the discussion centered on draft recommendations for message handling in what the group calls "directed exchange," in which health care providers send data on patients to other providers in a point-to-point communication. This type of exchange comes into play in many of the use cases intended to be satisfied by the NHIN Direct pilot project, so part of the discussion focused on the appropriate policy declarations that should be made about directed exchanges of health data, including among NHIN Direct participants. One of the primary concerns of the group is establishing the right policy and technical mechanisms to minimize the exposure of protected health information (PHI) sent as part of these directed exchanges.

To guide this discussion, the draft recommendations identify four categories of exchange, differentiated by the presence or absence of an "intermediary" handling the messages in the exchange and by the role of any such intermediary, with particular scrutiny on how much access the intermediary would have to PHI within the message contents. The four categories the tiger team is using are:
  1. No intermediary involved (exchange is direct from point A to point B).
  2. Intermediary only performs routing and has no access to unencrypted PHI (the message body is encrypted and the routing information does not identify the patient).
  3. Intermediary has access to unencrypted PHI (i.e., patient is identified) but does not change the message body, either the format or the data.
  4. Intermediary opens message and changes the message body (format and/or data).
Not far into this agenda item, the discussion devolved a bit into a debate over just what an "intermediary" is, and whether it would be good policy to state that no intermediary should have access to PHI at all in directed exchanges of health information. Clearly no consensus exists among the group on the definition of intermediary, but from the perspective of the solution elements proposed for NHIN Direct, a key question (not yet definitively answered) is whether a service used by a doctor, such as a hosted EHR or message packaging (including encryption) and delivery by a health information service provider (HISP), introduces an intermediary into the equation. It might simplify the discussion on the technology, policy, and legal compliance fronts alike to assume that all exchanges involve intermediaries, and to focus on whether existing legal requirements, such as those in the HIPAA Security and Privacy Rules, would apply to those intermediaries and therefore mitigate at least some of the concerns about the ability to see or process PHI.

For the sake of argument, it might make sense to say that when your data passes from one organizational entity to another, the involvement of any entity other than the sender and receiver means there is an intermediary. With respect to the basic categories of message handling for directed exchange, it seems to make little sense to include the first category ("no intermediary") at all if we are talking about data exchange over the Internet or other public or common carrier-furnished communications infrastructure. Perhaps the no-intermediary case would apply within a secure domain (in the IHE sense of the term) such as an organizational LAN or WAN or a company-owned private network. We would argue that neither server-to-server nor desktop-to-desktop secure communication channels (such as TLS) really remove intermediaries (ISPs, backbone infrastructure providers) from the communication, but with such a secure channel in use there should be no concerns about the intermediaries getting any access to the data — including PHI — that flows across the connection. If we can agree that for all intents and purposes there is always some sort of intermediary involved, then we can shift the discussion to where it ought to be — the extent to which intermediaries can access data in the messages, especially if they contain PHI.

The "B" option in the list above is a pretty standard use case for mutually-authenticated point-to-point communication channels such as TLS, but could also apply to unsecured channels where message (payload) encryption was involved. This distinction is important insofar as directed exchange using SMTP is intended to be an option. The "C" option is similar to the second one, but instead of encrypting or otherwise packaging the message at the point of origin, here the intermediary (such as a HISP) performs encryption and decryption processing on behalf of the parties to the exchange. This option is favored by some working on NHIN Direct to save providers from having to install and use encryption technology locally, and also to help simplify digital certificate management by using the HISPs as the boundary for public key infrastructure established to enable secure, authenticated exchange. In this third category it seems logical that the intermediary would fit the definition of a business associate under HIPAA, as the intermediary would be an"entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity." To be fair, nothing in the legal definition under HIPAA explicitly includes functions like encryption or message routing, but does include functions such as "data analysis, processing, or administration" and other generic services such as data aggregation, management, administrative, and accreditation services (45 CFR §160.103). Covered entities were already responsible for the compliance of their business associated with HIPAA safeguards, and under HITECH the HIPAA security and privacy rules apply directly to business associates, so even the temporary exposure of such intermediaries to the contents of messages (before the contents are encrypted for transmission) should not raise any special privacy concerns unless the parties believe that constraints applicable to business associates are insufficiently robust.

A similar logic applies to the fourth option, although in this case, because the intermediary is explicitly processing the contents of the message, the intermediary would be considered a health care clearinghouse and therefore a HIPAA-covered entity (potentially in addition to being a business associate of the parties to the exchange). This means it would be in the intermediary's own interest to guard against unauthorized disclosure, as the full set of HIPAA requirements applies when it accesses, changes, or otherwise discloses PHI. In recent weeks, the Health IT Policy Committee's Privacy and Security Policy Workgroup has recommended policies in other contexts (notably including consent) that would result in the need for no additional protective measures beyond what is required by current law. If the definition and role of "intermediary" in the various directed exchange patterns were more clearly specified, it would be easier to identify areas of security or privacy concern that are already addressed by current legal requirements, and also to highlight any gaps that might demand new policy statements.

Monday, June 21, 2010

Data encryption for HIE sounds obvious; not so simple to implement

One of the early themes to emerge from the initial discussions of the Office of the National Coordinator's privacy and security tiger team is the need for stronger protection of the confidentiality and privacy of health data exchanged between entities — whether in a point-to-point exchange model such as NHIN Direct's or a multiparty exchange environment such as NHIN Exchange — and the call for the use of content encryption to afford that protection. This near-consensus recommendation follows from the recent work of the Health IT Policy Committee and its Privacy and Security Policy Workgroup, which resulted in recommendations for encryption of patient data whenever it is exchanged. (Side note: The tiger team was organized as a workgroup under the Policy Committee, although its membership includes people from the Health IT Standards Panel and the National Committee on Vital and Health Statistics (NCVHS); it is co-chaired by Deven McGraw and Paul Egerman, both of whom are Policy Committee members who lead standing workgroups.) The emphasis in this recommendation, and something of a departure from past precedent, is on encryption of the contents or payload of messages exchanged using health information exchange (HIE), alone or in addition to the transport-level encryption (such as SSL or TLS) that is already specified for secure exchange among NHIN participants. During the May 19 Policy Committee meeting, McGraw presented a set of recommendations from the Privacy and Security Workgroup that were considered from the perspective of what a reasonable patient would expect. One area of emphasis on which the Privacy and Security Policy Workgroup is continuing discussion is patient consent, but the tiger team latched on to content encryption when it began discussing ways to maintain privacy when electronic health records or other protected health information are exchanged electronically.

The tiger team recognizes that even when health information exchange is limited to a transmission between two entities, there may be several ways of technically enabling the communication, many of which involve the use of intermediaries such as health information service providers (HISPs) that, depending on the nature of the exchange, may or may not have a need to examine the messages flowing through them on their way from sender to receiver. To the extent that such intermediaries may be performing functions outside the realm of what would put them under the HIPAA definition of business associate, the Health IT Policy Committee and the tiger team members are concerned that current legal requirements for the protection of health data may not apply to the intermediaries. One way to mitigate such concerns is to render the data unreadable to intermediaries, which in general means encrypting it. The discussion this month has been informed by the work on the NHIN Direct project (and participation in the tiger team meetings by some NHIN Direct team members), which has raised the issues of end-to-end content encryption and separating the potentially PHI-containing message content from the header data or other metadata needed to successfully route the message to its destination. There remains some debate as to whether such content encryption should be a mandatory requirement, or should remain "addressable" as it is under the HIPAA Security Rule.

One argument in favor of mandating the use of encryption is the technical feasibility of such an approach. By applying Web Services Security standards, particularly SOAP Message Security, solution developers and implementers have a lot of flexibility to separate message contents from message envelope information and protect each separately. The real challenge lies not in separating routing data from payload, or in enabling content (or full-message) encryption, but in choosing an encryption model that makes encryption workable without imposing barriers to interoperability. Perhaps obviously, there is no value in encrypting data in transit if the recipient cannot decrypt the message, but the sort of public key infrastructure used for NHIN Exchange is not necessarily a viable approach for a solution like NHIN Direct. The use of digital certificates for encryption in health information exchange has been recommended for NHIN Direct, but because NHIN Direct will not rely on a central certificate authority, there will need to be provisions for managing and evaluating certificates from multiple issuers potentially representing different "trust domains" to which a given exchange participant might belong.
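To illustrate what separating payload protection from envelope information looks like in practice, here is a minimal sketch of hybrid content encryption using the third-party Python cryptography package: the message body is encrypted under a per-message symmetric key, that key is wrapped with the recipient's public key, and the envelope fields stay readable so intermediaries can route the message. This is our own simplified pattern, assuming certificate distribution is handled elsewhere; it is not the WS-Security or NHIN Direct specification:

    # Hybrid content encryption sketch: intermediaries can read the envelope
    # needed for routing but not the payload. Requires the "cryptography" package.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Recipient key pair; in practice the public key would come from the
    # recipient's digital certificate.
    recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_public = recipient_private.public_key()

    def seal(payload: bytes, to_addr: str) -> dict:
        content_key = Fernet.generate_key()                    # per-message symmetric key
        body = Fernet(content_key).encrypt(payload)            # encrypt the payload
        wrapped = recipient_public.encrypt(content_key, OAEP)  # only the recipient can unwrap
        return {"to": to_addr, "wrapped_key": wrapped, "body": body}  # "to" stays in the clear

    def open_message(message: dict) -> bytes:
        content_key = recipient_private.decrypt(message["wrapped_key"], OAEP)
        return Fernet(content_key).decrypt(message["body"])

    msg = seal(b'{"patient": "..."}', "clinic-b.example.org")
    print(open_message(msg))  # b'{"patient": "..."}'

The same hybrid pattern underlies S/MIME enveloped messages and XML Encryption key wrapping; the hard part, as noted above, is not the cryptography but agreeing on how senders obtain and validate recipients' certificates.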

As the NHIN Direct members have discussed in the past, there are ways to do this without full-scale PKI and all of the key distribution and management overhead that comes with such an infrastructure. That potential aside, no one should underestimate the significance of establishing, managing, and overseeing the certificates and supporting services necessary to facilitate end-user encryption and decryption among health information exchange participants (to say nothing of integrating such capabilities into end-user electronic health record systems, transaction gateways, web services, SMTP clients, or other messaging tools). It certainly helps that multiple technical alternatives are incorporated within available open standards and that many health IT product vendors support these standards, but there is a great deal of additional processing and management required to accommodate pervasive use of content encryption. The complexity of such a solution may explain why, to date, only transport-level encryption is used for the NHIN, and the only cryptographic processing applied within the payload is the digital signing of SAML assertions included within the SOAP messages exchanged via the NHIN. The use cases envisioned for NHIN Direct are different from those for the NHIN in general, particularly with respect to transport encryption, which is required for the NHIN but may not be in place for all possible transport mechanisms that NHIN Direct might support.

Thursday, June 17, 2010

Supreme Court rules search of police officer's text messages legal, opts not to try to resolve reasonable expectation of privacy issue

The U.S. Supreme Court handed down a unanimous ruling in Ontario v. Quon, reversing the 9th Circuit Court of Appeals and finding that the City of Ontario (Calif.) Police Department (OPD) did not violate the 4th Amendment rights of one of its officers when it reviewed the contents of personal text messages he had sent using his city-issued pager. Despite anticipation before the case was argued that the Court would try to resolve the disputed issue of whether Quon had a reasonable expectation of privacy with respect to his text messages, the justices determined that they didn't need to resolve that issue to reach a conclusion in the case, and based their decision on a determination that, irrespective of the employee's expectation of privacy, the review of his text messages constituted a legal search under the 4th Amendment, relying in particular on the precedents from the plurality and concurring opinions in the 1987 case O'Connor v. Ortega. While prevailing 4th Amendment doctrine maintains that warrantless searches are unreasonable, under O'Connor the Court recognized that the "special needs" of the workplace justify an exception for non-investigatory, work-related purposes or for investigations of work-related misconduct. Interestingly, while Quon was allegedly disciplined as a consequence of the OPD's review of his text message transcripts, the city never suggested Quon's actions rose to the level of misconduct, and justified its search on the grounds that it sought to determine whether the volume limits on the text messaging pager subscriptions were too low and might be causing overage fees for work-related communications.

The Court tried to put its own constraints on the scope of its ruling in this case, apparently believing that the rapid pace of technological change makes it unwise to establish precedents based on a single type of device or communications medium. Instead, Justice Kennedy writes, "It is preferable to dispose of this case on narrower grounds." To limit its legal analysis to the reasonableness of the search that occurred when the OPD reviewed Quon's text message transcripts, the Court accepted three propositions for the sake of argument: 1) Quon had a reasonable expectation of privacy with respect to the text messages he sent; 2) his supervisors' review of the message contents constituted a search under the Fourth Amendment; and 3) the principles ordinarily applied (from O'Connor) to a government agency's search of its employees' physical offices also apply when the employer searches in an electronic environment.

Because the reasonableness principle stems from the O'Connor precedent, which says the reasonable expectation of privacy must be addressed on a case-by-case basis, the finding by the lower courts that Quon did have such an expectation (taken as an assumption by the Supreme Court for this case) cannot practically be considered to establish a general principle about text messages and privacy in government agency environments, much less workplace environments generally. In O'Connor, Justice Scalia offered a somewhat simpler standard for determining reasonableness in government-as-employer contexts, under which government workplaces would be covered by the 4th Amendment as a rule, but searches involving work-related materials or investigations of violations of workplace rules would be considered reasonable (as they are in private employer settings) and would therefore not violate the 4th Amendment. Either perspective would yield the same net conclusion in Ontario v. Quon, but in a separate concurring opinion, Scalia took the majority to task for including what is essentially a side discussion of the reasonable expectation of privacy question, since the Court notes repeatedly that resolving that issue was not necessary to decide the case before it. Scalia maintains his disagreement with the reasonableness approach the plurality proposed in O'Connor, saying "the proper threshold inquiry should be not whether the Fourth Amendment applies to messages on public employees’ employer-issued pagers, but whether it applies in general to such messages on employer-issued pagers."

Although it wasn't unexpected, the narrow ruling by the Court, limited to the particular facts of the situation and the reasonableness of the specific search involved, means we can only speculate about what conclusions might have been drawn had the Court followed through with some of the reasoning it described. The OPD had a computer usage, internet, and email policy in place, which explicitly stated that users should have no expectation of privacy or confidentiality when using the city's computers, and OPD personnel had repeatedly expressed their position that text messages were to be treated the same as emails. In arguing the original case, there was some debate as to whether the statements of Quon's supervisor that he did not intend to audit the text messages somehow overruled the official policy; the Court notes this disagreement without making any determination on the matter. Justice Kennedy's majority opinion also makes some important distinctions between text messages and emails, but doesn't say whether these differences would prevent the city from applying its formal written computer policy to text messages, which are not explicitly mentioned in the policy. The key difference is that while OPD emails are communicated using government servers, the text messages are not, passing instead through the communications infrastructure of the service provider (Arch Wireless). It might be interesting to see how the Court would have applied this line of reasoning had the city owned and operated the text messaging infrastructure, or if the communications at issue involved outsourced email services hosted by a third party.

Before hearing the case in April, the Court denied cert to Arch Wireless's appeal of the 9th Circuit's ruling that it had violated the Stored Communications Act by turning over the contents of the text messages to the city when asked to do so. Given that the city was the subscriber of record for all the wireless pager accounts, it might have been interesting to see how the Court viewed that argument, but the issue was not taken up, and was noted only in the context that legal precedent does not make the city's search unreasonable even if the transcripts should not have been provided to it.

Without diminishing the importance or potential future significance of any of the above issues, the big unanswered question remains what reasonable expectation of privacy public or private sector employees should have in their personal communications, particularly when using employer-provided means of communication. The majority opinion made mention of the disagreement over privacy expectations and then devoted nearly as much space to justifying why the Court opted not to address this issue in its ruling: "The Court must proceed with care when considering the whole concept of privacy expectations in communications made on electronic equipment owned by a government employer. The judiciary risks error by elaborating too fully on the Fourth Amendment implications of emerging technology before its role in society has become clear." Justice Scalia voiced concerns in his concurring opinion that future litigants would try to use Quon's case to justify claims of reasonable expectations of privacy, despite the explicit warning in the majority opinion: "Prudence counsels caution before the facts in the instant case are used to establish far-reaching premises that define the existence, and extent, of privacy expectations enjoyed by employees when using employer-provided communication devices."

Even assuming a reasonable expectation of privacy existed, which the court did for the sake of argument, the Court noted that given what Quon and his fellow officers had been told about the city's perspective that text messages were considered the same as email, Quon couldn't claim immunity from auditing in all circumstances. This seems to suggest that even where a legal expectation of privacy is established, such an expectation is not without limits. Justice Stevens, writing in a concurring opinion, said that Quon "should have understood that all of his work-related actions — including all of his communications on his official pager — were likely to be subject to public and legal scrutiny. He therefore had only a limited expectation of privacy in relation to this particular audit of his pager messages."

Wednesday, June 16, 2010

Prosecution of Maryland motorcyclist who recorded his traffic stop hinges on "reasonable expectation of privacy"

As reported in today's Washington Post, a Maryland motorcyclist who used his helmet-mounted video camera to record the state trooper who stopped him and ticketed him for speeding, and then posted the video on YouTube, now faces criminal charges under the state's wiretapping laws. Maryland is one of several states whose laws require that both (or all) parties consent before a conversation can legally be recorded, a stipulation that can only be waived when a non-consenting party is deemed to have no reasonable expectation of privacy. In this case, the key question would seem to be whether a state law enforcement officer, making a traffic stop in public, can reasonably expect that whatever he says will be private. According to the article, Maryland's wiretapping law does not cover video, so only the audio portion of the recording is at issue, but there seems to be a growing trend toward making it illegal to film police while on duty. This trend is troublesome on many levels, not the least of which is the power imbalance that exists between law enforcement personnel and members of the public, to which (as Bruce Schneier noted eloquently more than two years ago) an appropriate response would be to increase the transparency of government actions, not to put laws in place to shield them.

With the case of the Maryland motorcyclist, the treatment of the recording as an instance of illegal wiretapping raises the "reasonable expectation" principle in yet another context. In recent months this idea has been debated and argued in court cases involving employee expectations of privacy in the workplace, particularly where employees use employer-provided computers or communications equipment to transmit personal communications. Among the highest profile of these cases was City of Ontario v. Quon, argued before the Supreme Court in April, which involved personal messages sent by a police officer using his city-government-issued pager. The ruling in that case, issued this week, assumed that Quon did in fact have a reasonable expectation of privacy in the contents of his text messages, but did not decide the issue more broadly than in the specific case. If a government authority can argue that person-to-person communications made by an officer while on duty should not be presumed private, it is hard to see how verbal communication (allegedly including shouting) uttered on the side of an interstate highway could be considered any more private.

Tuesday, June 15, 2010

In letter to Congress, Google says wireless data collection wasn't the right thing to do, but didn't break any laws

In response to a request from Congressmen Henry Waxman, Joe Barton, and Edward Markey to Google CEO Eric Schmidt seeking information about the collection of wireless network traffic by the company during the operation of its Street View program, Google's Director of Public Policy Pablo Chavez sent the company's reply in a letter dated June 9. In the letter, Chavez repeats the company's assertions that it never intended to capture or use payload data in the wireless traffic it gathered from unsecured wireless hotspots, and apologizes for doing so. In response to a specific question posed to Google asking about the company's view of the applicability of consumer privacy laws to the situation, Chavez said that Google does not believe that collecting payload data from such networks violates any laws, because the wireless access points in question were not configured with any encryption or other privacy features and were therefore publicly accessible. This response seems to be indirectly referencing a provision in the Electronic Communications Privacy Act (ECPA) that offers an exception to the general prohibition on the interception of electronic communications, if the interception is "made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public" (18 U.S.C. §2511(2)(g)(1)). The law defines "readily accessible to the general public" (18 U.S.C. §2510(16)) with respect to radio communication to mean that the transmission is not scrambled, encrypted, or modulated in such a way as to preserve privacy, so it would seem to be a valid legal interpretation to assert that private citizens who deploy unsecured wireless access points in their homes are actually establishing public electronic communications services. The law also prohibits only intentional interception and disclosure of electronic communications, so even if Google were overruled on its characterization of wi-fi hotspots as public services, its repeated claim that it never intended to capture payload data might give it another escape clause from the ECPA.

There are, however, a couple of aspects of these interpretations that don't sit quite right. Among the most obvious is the fact that the ECPA was enacted before the advent of wireless networking — its passage predates the IEEE's 1997 release of the first 802.11 protocol by more than a decade. In recent months a wide range of technology firms, consumer advocacy groups, and members of Congress have argued that the ECPA is long overdue for revision to bring it more in line with modern communications technology. Google in its own public statements has emphasized the public accessibility of wireless networks, and if the data collection in question had been limited to packet captures on free municipal wireless networks or free wi-fi provided at cafes and coffee shops, there might be a lot less debate and a smaller number of lawsuits, both here and abroad. When the wireless interception involves traffic transmitted within a private home or business, however, the fact that the technical capability exists to receive the radio signal transmissions from outside the home or business may not be sufficient by itself to make the transmissions "public." A different portion of the U.S. code (18 U.S.C. §1029(a)(8)) makes it a crime to knowingly use or even possess a scanning receiver capable of intercepting electronic communications if the intent of such interception is to defraud. Various state laws also prohibit either eavesdropping alone, or eavesdropping and subsequent disclosure, of cordless or cellular telephone communications, despite the fact that the technology to listen in on such devices is widely available. In Google's case, the company maintains that it neither wanted the data it captured nor had any intended use for it, so there is little to suggest it intended to disclose anything other than the location of the wireless access points it found, and certainly no evidence that the company intended to defraud anyone. Still, the law is not so straightforward as some might suggest when it comes to the legality of wireless data interception, especially when state-level laws are considered, and it may take a formal court ruling to clarify exactly what, if any, constraints might be placed on the concept of "readily accessible to the general public."

Saturday, June 12, 2010

Contrasting trust models under development for NHIN and NHIN Direct

The Nationwide Health Information Network (NHIN), a government-sponsored initiative started in 2004 and re-emphasized in the Health Information Technology for Economic and Clinical Health (HITECH) Act, is no longer envisioned as a "network" at all (in the infrastructure sense), but instead as a collection of standards, services, and policies that collectively support the secure exchange of health information between participating entities. The original idea for the NHIN was that public and private sector organizations would benefit from adopting a common set of parameters governing their health data exchanges, and that once a few early adopters went into production using the NHIN, participation would grow rapidly. Instead, due in part to disagreements among different types of potential participants about how NHIN standards should be implemented, and also to concerns about policy incompatibilities between federal and commercial sector entities, there are currently very few organizations in production. The group of state and federal government agencies and the small number of commercial health care entities currently operating health information exchanges using the NHIN are collectively referred to as NHIN Exchange; this exchange is focused on the data exchange needs of federal agencies, to the degree that non-federal participants must join through a federally sponsored contract. The NHIN has in general been focused on enabling health information exchange between large organizations, but addressing the data exchange needs of small providers has received greater attention due to the recent focus on the meaningful use measures that eligible health care providers must satisfy in order to qualify for financial incentives to acquire and implement electronic health record technology. A core requirement for showing meaningful use is that providers' EHR technology must be implemented in a way that enables "electronic exchange of health information to improve the quality of health care" (Meaningful Use Notice of Proposed Rulemaking, 75 Fed. Reg. 1850 (January 13, 2010)). To enable secure health information exchange among smaller providers, the NHIN Direct project began earlier this year, specifically intended to use or expand upon NHIN standards and services to "allow organizations to deliver simple, direct, secure and scalable transport of health information over the Internet between known participants in support of Stage 1 meaningful use."

Without delving into the details of all the standards, services, and use cases that the NHIN and NHIN Direct are seeking to support, one very noticeable difference between the two initiatives is in the area of trust. Participants working on both initiatives agree that trust is an essential aspect of any solution, because health care entities — large or small — are not expected to participate in any health information exchange unless they feel they can trust the other participants and any third parties involved in operating, managing, or overseeing the exchange. While everyone seems to agree that such trust is important, the approach each initiative is taking with respect to trust is quite different. In particular, the basic trust model proposed for NHIN Direct is much more explicit than the trust framework being developed for the NHIN about what "trust" actually means in a health information exchange context, and about the extent to which participants involved in a multi-party exchange can agree on policies, standards, and controls intended to support trust. Both programs tend to use the word "trust" incorrectly: the results sought from their trust models and frameworks amount to confidence, reliability, assurance, or even surety, but don't really begin to address establishing the trustworthiness of a given entity in a way that would help another decide to accept the risk of engaging in an exchange based on expectations about how the trusted entity will behave. This may be due to implicit assumptions about the interests of different would-be participants in health information exchanges, or because insufficient weight is given to the manner in which participants can establish their trustworthiness, or perhaps because too little attention is focused on the very real distrust that exists between potential HIE participants.

To its credit, the NHIN Direct project candidly acknowledges that different policies and assumptions will apply to different participants in different contexts, so the NHIN Direct basic trust model limits the scope of what any assertion of trust actually covers, and allows for the possibility (even the expectation) that a given organization may participate in multiple exchanges governed by different sets of policies or rules. The NHIN Direct approach has no central authority to assert the trustworthiness of participants, and no trust-by-default among participants. NHIN Direct participants are expected (if not quite obligated) to make their own determinations about the relative trustworthiness of others. The NHIN Direct Security and Trust Workgroup's keys for consensus summary addresses "only the level of trust necessary to establish confidence that the transmitted message will faithfully be delivered to the recipient, not that the two parties trust or should trust each other; this definition of trust is to be defined by source and endpoint out of band, and may be facilitated by entities external to the NHIN Direct specifications."
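
To make the out-of-band trust establishment concrete, here is a minimal sketch, assuming (purely as an illustration, not something drawn from the NHIN Direct specifications) a certificate-based scheme in which each participant configures the trust anchors it is willing to accept, and a sender is "trusted" for delivery purposes only if its certificate was issued by one of those anchors:

```python
# Illustrative sketch only -- not drawn from the NHIN Direct specifications.
# Assumes a certificate-based scheme in which each participant maintains
# its own trust anchors, established out of band, and accepts a sender
# only if the sender's certificate was issued by one of those anchors.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def sender_is_trusted(sender_cert_pem: bytes, anchor_pems: list[bytes]) -> bool:
    """Single-level check (for brevity): does the sender's certificate
    chain directly to one of the locally configured trust anchors?"""
    cert = x509.load_pem_x509_certificate(sender_cert_pem)
    for anchor_pem in anchor_pems:
        anchor = x509.load_pem_x509_certificate(anchor_pem)
        if cert.issuer != anchor.subject:
            continue
        try:
            # Verify that the anchor's key actually signed the sender's
            # certificate (parameters assume an RSA anchor key).
            anchor.public_key().verify(
                cert.signature,
                cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                cert.signature_hash_algorithm,
            )
            return True
        except InvalidSignature:
            continue
    return False
```

The design point worth noticing, consistent with the workgroup's language, is that nothing in a check like this says anything about whether the sender should be trusted as an organization; it only establishes confidence in where a message came from, with the decision about which anchors to configure left entirely to each participant.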

By contrast, the NHIN Exchange in particular and the NHIN trust framework in general rely on a central (or root) authority that makes determinations of trustworthiness for all potential participants, and presumably only allows participation by trustworthy entities. There is not currently a standard set of criteria to serve as the basis for determining trustworthiness, but when and if such criteria exist, they are expected to address at least the minimum technical requirements a participant must satisfy, the assurance of participant identities, and the business, policy, legal, and regulatory requirements that apply to participants. The health information exchange trust framework recommended in April by the Health IT Policy Committee's NHIN Workgroup comprised five key components:
  1. Agreed Upon Business, Policy and Legal Requirements / Expectations
  2. Transparent Oversight
  3. Enforcement and Accountability
  4. Identity Assurance
  5. Minimum Technical Requirements
NHIN participants sign a legal document called the Data Use and Reciprocal Support Agreement (DURSA), which is intended to serve as a master trust agreement applying the same permissions, obligations, expectations, and constraints to all exchange participants in all of the information exchange contexts it covers (treatment, payment, health care operations, public health activities, reporting on clinical quality measures, and other uses authorized by the individuals to whom the data pertains). By executing the DURSA, participants do not actually agree to trust each other, but they do agree to acknowledge and accept that different participants may have different policies, practices, and security controls, such as system access policies. This means that a participant must rely on the determination of the NHIN governing authority (which approves applicants for participation) that the policies and controls used by an approved participant are sufficiently robust, and it gives participants no real ability to question the approach that another participant takes to things like security. The reliance on a legal contract (the DURSA) and a planned monitoring, oversight, and enforcement function strongly suggests that what the NHIN has produced is a distrust framework, rather than one based on trust. While that might not sound as nice, if the scope of participation for the NHIN continues to include many different types of participating entities, many of which may have conflicting organizational interests, a common level of trust may never be established, so an approach designed to achieve cooperation despite distrust may be precisely what's needed.

The intent to use a single overarching trust model for the NHIN is based on assumptions of feasibility: if NHIN participants someday number in the hundreds or even thousands, negotiating trust between pairs or among small subsets of all those participants just isn't practical (the back-of-the-envelope comparison below makes the point). By positioning a common, trusted authority in the center, all that should be required to achieve trust throughout the NHIN is for each participating entity to establish a trust relationship with the NHIN governing authority (which at present means the NHIN Coordinating Committee within the Office of the National Coordinator, although its governance role is considered interim pending the formalization of a permanent NHIN governing authority). It's not entirely clear how such bilateral trust agreements can accommodate the many different organizational interests represented by the different types of organizations (providers, insurers, researchers, agencies) that might seek to participate in the NHIN, to say nothing of the interests of the patients whose data would be exchanged by those entities. It does seem logical that working through a central agent, whether a vested organization like ONC or a neutral network facilitator, would have better success in negotiating trust than if all the participants tried to reach consensus on a multilateral agreement. However, given the significant time and energy that many people have put into thinking about and trying to resolve issues like harmonizing the security and privacy requirements that apply to federal and private sector entities, both categories of which may or may not be covered by HIPAA, it is also understandable why the NHIN Direct Security and Trust Workgroup declared that "real world evidence suggests that achieving global trust is not practical." While NHIN Direct is not primarily intended to effect changes in the approach or structure of the broader NHIN, it would be nice to see the development of the trust framework currently under consideration within the Health IT Policy Committee take some practical guidance on trust from NHIN Direct.
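
The arithmetic behind the feasibility argument is straightforward, and the hypothetical comparison below (illustrative only) shows how quickly pairwise agreements outpace the hub-and-spoke alternative:

```python
# Back-of-the-envelope comparison: pairwise trust agreements grow
# quadratically with the number of participants, while agreements with a
# single central authority grow linearly.
def pairwise_agreements(n: int) -> int:
    return n * (n - 1) // 2   # one agreement per pair of participants

def hub_and_spoke_agreements(n: int) -> int:
    return n                  # one agreement per participant, with the hub

for n in (10, 100, 1000):
    print(f"{n} participants: {pairwise_agreements(n):,} pairwise "
          f"vs. {hub_and_spoke_agreements(n):,} hub-and-spoke")
# 10 participants: 45 pairwise vs. 10 hub-and-spoke
# 100 participants: 4,950 pairwise vs. 100 hub-and-spoke
# 1000 participants: 499,500 pairwise vs. 1,000 hub-and-spoke
```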

Friday, June 11, 2010

Privacy settings do matter: subpoenas quashed for disclosure of social networking data

In a recent federal district court ruling noted and summarized by the always-astute privacy team at law firm Hunton & Williams, an individual user of Facebook, MySpace, and other less well known online communities, who is also a plaintiff in a copyright infringement lawsuit, successfully quashed subpoenas by the defendants in his case that sought to obtain private messages he had sent through the social networking sites. Lawyers for the plaintiff argued that the subpoenas were overbroad, that the information they sought was irrelevant to the case, and that the disclosure sought from the social networking companies is prohibited under the Stored Communications Act (18 U.S.C. ch. 121), which among other provisions says that "a person or entity providing an electronic communication service to the public shall not knowingly divulge to any person or entity the contents of a communication while in electronic storage by that service" (§2702(a)(1)). The magistrate judge who first considered the motion originally rejected the argument under the SCA (accepting only the claim that the subpoenas were overbroad, since they sought all of the plaintiff's communications on the sites). Not satisfied, the plaintiff moved for reconsideration of the magistrate judge's ruling on the motion to quash, and the district court accepted the plaintiff's argument that the private messaging capabilities provided by sites like Facebook and MySpace are in fact electronic communication services under the definition in the law, and quashed the portions of the subpoenas concerning disclosure of the messages the plaintiff sent through the sites.

Still unresolved is whether the plaintiff's comments and wall posts can similarly be considered private communications, since they are more or less intended to be public content, at least "public" within the context of the online sites in question. The Stored Communications Act prohibitions on disclosure do not apply to an "electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public," according to a clause in a different part of the Electronic Communications Privacy Act of 1986 (18 U.S.C. §2511(2)(g)(i)). The district court directed the parties to the suit to produce detailed information about the plaintiff's privacy settings, which should provide some indication of whether he intended his posts and comments to be publicly viewable. The implication is clear, at least from a personal privacy perspective: if you want any of your activity on social networking sites to be considered private in a legal context, you should configure the privacy settings the site makes available in a way that conveys your intent to limit the disclosure of the information. If you make your personal information public, even within the confines of a social networking community, then the courts may consider that decision contrary to any later assertion that you wanted the information to be private.

Thursday, June 10, 2010

Senators propose law banning anonymous pre-paid cell phones

In a move ostensibly intended to aid anti-terrorism efforts, Senators Charles Schumer and John Cornyn issued a joint press release two weeks ago announcing proposed legislation that would essentially end the anonymity of pre-paid cell phones by requiring buyers to present identification when purchasing one, and requiring phone companies to maintain a record of buyers' information. This is merely the latest strong reaction to the Times Square bombing attempt, whose perpetrator used a "disposable" cell phone, among other things, to call Pakistan prior to the attempt and to arrange the purchase of the vehicle in which he planted the explosives in his failed attempt to set off a car bomb in New York City. The proposed Senate legislation would be the first federal attempt to require registration of pre-paid cell phone purchasers, although several states are already considering such rules. Schumer and Cornyn acknowledge that the vast majority of pre-paid cell phones are used for law-abiding purposes, but the fact that they are popular among criminals is, in the senators' opinion, sufficient reason to prohibit anonymous use. This is an interesting line of thinking, as it's not at all clear how even a criminal's use of a cell phone would itself be an illegal act, and it seems a stretch to put a cell phone in the category of a weapon like a handgun, explosives, or other products already subject to buyer identification and purchase record-keeping requirements. Public reaction to the proposal, from all political perspectives, pretty unanimously points out the obvious infringement on civil liberties and individual privacy (a disregard that commentators such as Bob Barr describe as a defining characteristic of the 111th Congress).

This proposed action is consistent with a long history of precedents in which the government seeks information on a large body of individuals and their transactions or communications in the name of law enforcement (and, in this case, national security). Efforts by the U.S. government to restrict the strength of encryption used in exported products were generally ruled unconstitutional in 1997, but restrictions remain in place for exports of some product types to some countries, under a program administered by the Bureau of Industry and Security (BIS), part of the Department of Commerce. Encryption, and more specifically its use to protect the privacy of data and communications, is perhaps the most prevalent contemporary example of a technology that can be used just as effectively to hide criminal behavior as to protect legitimate users. Governments in many countries, not just the U.S., have struggled to find the right balance between individual and national interests, but in the post-9/11 era, both the former and current U.S. administrations seem quite willing to restrict the civil liberties of the many to avoid missing the threatening actions or intentions of a few. We touched on this sort of bias in the aftermath of another terrorist near-miss last Christmas; the desire to avoid a successful terrorist attack is certainly strong enough to motivate proposals like the one from Schumer and Cornyn, and may just be strong enough to override personal privacy considerations in the name of homeland security.

Tuesday, June 8, 2010

Recent anti-fraud success, health reform law provisions show government, insurers getting serious about health care fraud

Reports released within the past month by both government health authorities and health insurers highlight recent successes in combating health care fraud and saving or recovering substantial amounts of money. A Blue Cross and Blue Shield Association study reporting anti-fraud efforts in the past year found that investigations by the Association's member companies yielded over $500 million, a nearly 50 percent increase compared with 2008. For its part, the Department of Health and Human Services announced over $2.5 billion recovered through last year's health care fraud efforts, in addition to $441 million recovered from Medicaid through similar anti-fraud programs. Both of these were significant increases over prior year results, and future prospects appear even brighter, due to new provisions and additional funding for fraud prevention included in the Patient Protection and Affordable Care Act (the recently enacted health reform legislation). Both government and industry efforts to combat fraud are taking a multi-pronged approach, including more education and training for health care staff and individual citizens to make them more aware of scams and other potentially fraudulent activities. There also seems to be a significant emphasis on applying analytical tools and anti-fraud technologies, and on using those tools earlier in the health care claims process to catch fraud before payment is made (prevention works better than after-the-fact recovery). Overall, the attention to detecting and preventing fraud reflects a widespread industry shift in focus, away from a single-minded prioritization of efficient claims handling and toward a blended approach that incorporates anti-fraud activities into the core process. This change has been a long time coming, as many of the core process deficiencies that facilitate health care fraud have been publicized for years, perhaps best articulated by Harvard's Malcolm Sparrow in his authoritative work on the subject, License to Steal.
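
For flavor, here is a minimal sketch of the kind of pre-payment screening such analytical tools perform, flagging claims for review before money goes out the door; the field names, rules, and thresholds are entirely hypothetical:

```python
# Minimal, hypothetical sketch of pre-payment claim screening: flag
# suspicious claims for review before payment rather than chasing
# recoveries afterward. Field names and thresholds are illustrative only.
from collections import Counter

def screen_claims(claims: list[dict], amount_threshold: float = 10_000.0):
    """Yield (claim, reasons) for claims that merit review before payment."""
    seen = Counter()
    for claim in claims:
        reasons = []
        key = (claim["provider_id"], claim["patient_id"],
               claim["procedure_code"], claim["service_date"])
        seen[key] += 1
        if seen[key] > 1:
            reasons.append("possible duplicate billing")
        if claim["amount"] > amount_threshold:
            reasons.append("unusually high charge")
        if reasons:
            yield claim, reasons

claims = [
    {"provider_id": "P1", "patient_id": "A", "procedure_code": "99213",
     "service_date": "2010-06-01", "amount": 120.0},
    {"provider_id": "P1", "patient_id": "A", "procedure_code": "99213",
     "service_date": "2010-06-01", "amount": 120.0},  # duplicate submission
]
for claim, reasons in screen_claims(claims):
    print(reasons)  # ['possible duplicate billing']
```

Real systems use far richer models (peer comparisons, network analysis, and the like), but the structural point is the same: the check runs inside the claims process, before adjudication completes, not after payment.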

Monday, June 7, 2010

Privacy breach lawsuits repeatedly dismissed where harm cannot be proven

A recent ruling by the 9th Circuit Court of Appeals is the latest in a series of cases in which individuals whose personal information was involved in a data breach were unable to successfully pursue causes of action, due to the lack of actual harm suffered by the data breach victims. In this case, Ruiz v. Gap, Inc., the plaintiff had submitted personal information as part of an online employment application to Gap. Two laptops belonging to Vangent (a contractor providing job application processing services to Gap) were stolen from the contractor's offices. The laptops contained data on some 750,000 Gap job applicants, including Ruiz, and he filed a lawsuit in California against Gap and Vangent alleging negligence, breach of contract, and violations of various California statutes. The district court for the Northern District of California granted defendants' motion for summary judgment and rejected Ruiz's claims, noting that while the potential future harm he faced, such as increased risk of identity theft, was sufficient to give him standing to sue, the lack of proof of any actual injury resulting from the theft of his personal information meant the case failed to meet the standard of appreciable harm necessary to bring a cause of action for negligence under California law. The 9th Circuit affirmed the district court's ruling.

The ruling in Ruiz v. Gap follows a recent trend in personal privacy lawsuits in which the parties responsible for breaches of personal information are not subject to private rights of action unless the plaintiffs can prove that harm resulted from the breach. It should be noted that the fact that organizations escape potential civil liability in such cases does not mean they cannot be fined or even criminally prosecuted under state or federal privacy statutes, where such laws exist. A similar dichotomy exists in federal health data breach rules, where both liability and the requirement for organizations suffering breaches to disclose them hinge on a determination of harm due to the breach. Even where organizations assert that no risk of harm to individuals exists, they can still be held liable for violating provisions of the HIPAA Privacy Rule, and can even be subject to criminal prosecution if the breaches were the result of willful neglect. As the Ruiz ruling shows, the problem in these cases for individual plaintiffs is not the privacy laws per se, but the tort law requirements for negligence or other causes of action. The legal precedents in these cases (described in detailed case law citations in the Northern District court's order granting summary judgment) suggest that privacy regulations and data disclosure laws may not be the best legal avenue for plaintiffs suing Facebook over its privacy practices, or for the ever-rising number of plaintiffs filing lawsuits against Google over the wireless data collection activities it conducted in its Street View program. In the case of Google and Street View, plaintiffs seem to be focusing on the company's alleged violation of federal wiretapping laws, rather than asserting privacy violations or breaches of personal information.

Saturday, June 5, 2010

Alleged health data disclosure via Facebook raises legal and policy issues

Reports of potential breaches of patient privacy at Tri-City Medical Center in Oceanside, California have garnered the HIPAA-related attention you would expect, but they are also raising questions about the availability and use of social networking sites from hospitals and other health care facilities. It seems some Tri-City employees posted personal details about patients on Facebook, calling into question the extent to which medical facilities have policies in place about accessing social media and, if access is allowed, about appropriate use to avoid privacy violations under HIPAA. The California Department of Public Health confirmed this week that it has opened its own investigation into the alleged disclosures; the focus of the state-level inquiry appears to be compliance with or violation of the HIPAA Privacy Rule. This recent incident is far from an isolated occurrence, and as more hospitals move to enact social media policies, the examples set by policies published by major health industry companies like Kaiser Permanente suggest that health care organizations would be wise to err on the side of caution when it comes to patient information. Specifically, while many policy definitions of the protected health information that is the focus of HIPAA regulations enumerate specific attributes like name and date of birth, the Privacy Rule applies to all individually identifiable health information (45 CFR §160.103), and specific details about a patient communicated orally or in writing may fall into this category even when the patient's name is never mentioned. Simply put, this means that even a casual conversation between two hospital employees about a patient, if conducted within earshot of others not involved in the patient's care, likely constitutes a HIPAA violation, and the same logic certainly applies to holding such a discussion online.

Friday, June 4, 2010

Oregon complaint against Google Street View amended based on 2008 patent application

As Google continues to accede to demands from several European countries and U.S. courts to turn over copies of data it collected over unsecured wireless networks during its Street View program operations, plaintiffs in a class action lawsuit filed in Oregon are pointing to a 2008 patent application the company filed to challenge Google's assertions that the data collection was unintentional. The application, for "Wireless network-based location approximation," appears to emphasize an intent to determine the location of wireless access points fairly precisely, and the method proposed to make that determination clearly includes capturing packets transmitted from the access points being analyzed. Of course "packet capture" and even "packet analysis" do not necessarily equate to payload inspection, which is where the invasion-of-privacy claims lodged against Google seem to focus, but the patent application makes no distinction about the use of different parts of captured packets (e.g., header vs. payload contents), so there does not seem to be anything in it to back up Google's publicly stated claims that it was never interested in the payload data. The company's response to the latest allegations involving the patent application was to flatly deny any connection between the method for which the patent was sought and the Street View program.

It may be hard to prove intent on the part of Google merely by showing the absence of any explicit statement of what Google planned to do with data collected through the method it wanted to patent (or, more specifically, of exactly how wireless packets would be analyzed). Reading the patent application text, the prevailing purpose of the claims in the application is to identify the location of wireless transmission points, for the purpose of using that location information to provide (i.e., sell) location-based services. It certainly seems possible that location identification and traffic analysis for the purposes stated in the patent application could be performed using information in the packet headers alone (an approach that also might prove viable when analyzing encrypted traffic, depending on the encryption method in use). In hindsight, it might be nice for Google now if its application had said it intended to strip out packet contents and keep only the header data, but patent applications are rarely constrained to describing only those uses of an invention that comply with laws or regulations that might be relevant should the technology or method be put into use. It remains to be seen how legally viable the plaintiffs' arguments about the patent application will be and what they mean for the case, but Google's explanations in the matter so far haven't been very credible (save perhaps CEO Eric Schmidt's simple statement, "We screwed up."). If true, the current explanation offered by the company, which attributes the whole Street View data capture to an inadvertent oversight based on the work of one programmer working part-time on the project, raises its own set of concerns, such as wondering how many other "rogue" developers there might be among the employees of a technology giant with a stake in just about every major online business.
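
To illustrate the header-versus-payload distinction, here is a hedged sketch (not Google's actual method, and not derived from the patent application) showing that identifying wireless access points requires only 802.11 header fields, never the payload of captured frames:

```python
# Illustrative only -- not Google's actual collection method. Shows that
# cataloging access points needs only 802.11 *header* fields (the BSSID
# of the transmitting AP), with no payload inspection involved.
# Requires scapy and a wireless interface in monitor mode.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon

access_points = set()

def note_access_point(pkt):
    # Beacon frames advertise the AP; addr2 in the 802.11 header is the
    # transmitter address (the BSSID). The frame body is never read.
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2
        if bssid not in access_points:
            access_points.add(bssid)
            print("saw access point", bssid)

# "wlan0mon" is a hypothetical monitor-mode interface name.
sniff(iface="wlan0mon", prn=note_access_point, timeout=60)
```

A real survey would pair each sighting with GPS coordinates and signal strength to approximate the access point's location, still without reading payload bytes, which is exactly why header-only collection appears technically sufficient for the purposes stated in the application.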

Tuesday, June 1, 2010

NIST answers to questions on continuous monitoring suggest no drastic change in approach

In the wake of the release of its updated Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems, which among other things calls for federal agencies to continuously monitor the security controls associated with their information systems, the Computer Security Division of the National Institute of Standards and Technology (NIST) today published a set of frequently asked questions (and answers to those questions) on continuous monitoring. In contrast to some initial interpretations of pending changes in the application of federal certification and accreditation processes, this guidance makes it quite clear that NIST envisions continuous monitoring as an additional component of the security program procedures followed to authorize systems, not as a substitute for them. It would seem that NIST is positioning continuous monitoring simply as an additional, and very valuable, source of information for agencies making risk-based decisions about the security of their information systems. This positioning is consistent with language in a memorandum from OMB distributed to all department and agency heads in April, as is the point made in both documents that by performing continuous monitoring, agencies meet the periodic testing and evaluation requirement of the Federal Information Security Management Act (FISMA).

The consideration of continuous monitoring as an additive element to existing federal information security program practices inevitably raises the question of the agency resources needed to comply with expanded obligations. In arguing for the record that conventional certification and accreditation practices are expensive to follow and provide little value in terms of actually securing agency systems and environments, many federal agency officials have questioned the economic wisdom of continuing to authorize their systems using existing methods and approaches. While the Department of State has, so far, continued to conduct security authorization activities and produce accreditation package documentation in parallel with its relatively new automated risk-scoring approach to security posture assessment, other agencies appear to believe that current compliance approaches mandated under FISMA (and OMB Circular A-130) will be deprecated in favor of some other, yet-to-be-determined mechanism, whether by executive agency action or by act of Congress. These agencies, notably including NASA, have sought to re-allocate resources away from authorization tasks in favor of standing up continuous monitoring capabilities.
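
As a toy illustration of what automated risk scoring can look like (this is not the Department of State's actual methodology; the weights, fields, and aging rule are assumptions made for the sketch), a continuous monitoring feed of scan findings might be rolled up into per-host scores like this:

```python
# Toy illustration of automated risk scoring for continuous monitoring --
# not any agency's actual methodology. Each finding from a scan feed
# carries a severity weight; unresolved findings score higher as they age,
# and per-host scores roll up into a posture view leadership can track.
from collections import defaultdict

SEVERITY_WEIGHTS = {"low": 1, "moderate": 3, "high": 10}  # assumed weights

def score_hosts(findings: list[dict]) -> dict[str, int]:
    """findings: [{'host': ..., 'severity': ..., 'age_days': ...}, ...]"""
    scores = defaultdict(int)
    for f in findings:
        weight = SEVERITY_WEIGHTS[f["severity"]]
        # Each additional 30 days unresolved raises the score, so the
        # metric rewards fixing problems quickly, not just finding them.
        scores[f["host"]] += weight * (1 + f["age_days"] // 30)
    return dict(scores)

findings = [
    {"host": "web01", "severity": "high", "age_days": 45},
    {"host": "web01", "severity": "low", "age_days": 5},
    {"host": "db01", "severity": "moderate", "age_days": 90},
]
print(score_hosts(findings))  # {'web01': 21, 'db01': 12}
```

The appeal of this style of metric is that it measures an observed condition rather than the completeness of a document package, which is precisely the kind of shift the compliance-versus-security critique calls for.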

While it may be hard to see continuous monitoring as a negative, even if it does not represent a real shift away from compliance-driven processes, making the newer requirements simply additive, rather than revisionary, seems to be a lost opportunity to pursue real enhancements in agency security postures. The current emphasis for OMB is on moving the government toward more streamlined and more frequent security reporting, via the CyberScope online reporting solution. For its initial rollout, and perhaps until the information being reported is revised significantly, changing the submission mechanism and frequency of reporting doesn't get to the heart of the problem in federal security practices, which is too great a reliance on compliance exercises rather than real situational awareness. If agency CISOs and Congress all agree that compliance with security guidance does not equal actual improved security, then more frequent compliance checks cannot be the answer. The potential remains, depending on how continuous monitoring is implemented among agencies, for agencies to settle on a more appropriate set of security metrics than those currently required for reporting under FISMA. If, however, there is no change in the level of documentation and procedural requirements associated with system authorization, then many agencies may not have the resources within their security programs to make a genuine and sustained effort at continuous monitoring.