Tuesday, August 31, 2010

Congressionally legislated privacy may not consider benefits of information sharing

With the addition of yet another privacy bill to the slate of draft legislation pending in Congress, this time in the Senate in the form of the Data Security and Breach Notification Act of 2010 (S. 3742) introduced early this month by Democrats Mark Pryor and John Rockefeller, there clearly remains heightened interest in protecting personal information, even if none of the bills so far has made much progress toward becoming law. While significant attention has been drawn to privacy, especially the privacy of information in online contexts, the current legislation suggests that federal legislators are emphasizing individual privacy protections at the expense of the benefits of information sharing, both to consumers in some settings and to major initiatives such as health care reform (and data sharing through health information exchange), proposed financial regulatory reform, and ongoing priorities such as anti-terrorism efforts. In an article posted on the Hillicon Valley blog of The Hill, technology publisher Tim O'Reilly expresses concern that if privacy practices are legislated by Congress, there is a good chance any resulting regulations will err on the side of heavy-handedness and fail to acknowledge either the benefits of some forms of information disclosure or the fact that many individuals are quite willing to balance privacy against those benefits, particularly if they are afforded some level of control over what personal information is shared and how it is used. In a similar vein, Emory economics professor Paul Rubin offered a list of 10 common misconceptions about privacy in an opinion piece posted by the Wall Street Journal online. In the aggregate, Rubin's points amount to an argument against making information disclosure regulations overly restrictive and against pursuing stronger privacy protections without weighing the potential negative impacts of doing so.

Saturday, August 28, 2010

Major cloud computing privacy legal issues remain unresolved

As momentum continues to build for the use of cloud computing services, significant attention remains justifiably focused on addressing security concerns about the cloud. Valid questions about cloud security focus on whether cloud service providers will employ security mechanisms that match or exceed what potential cloud customers might implement in their own environments, and that will satisfy legal requirements for public or private sector entities subject to regulation of their security measures. It is against this backdrop that the media and industry point to achievements such as Google's successful certification and accreditation by the General Services Administration for its Google Apps for Government offering, which provides at least one data point on the nature and extent of security controls a major cloud service provider is using. For organizations that may not be obligated to adhere to specific security provisions but still want reassurance that cloud services are sufficiently well protected, another area of focus is what approach to take when contracting for services in the cloud, as eloquently explained by attorney Tanya Forsheit in an article published by the Bureau of National Affairs. The legal analyses by Forsheit and her Information Law Group colleagues have, over the past several months, included a series of posts on various legal issues associated with cloud computing, especially in the area of privacy.

From a legal standpoint, it appears that while many opinions exist on how privacy can be protected in the cloud, who should ultimately be responsible for that protection, and how law enforcement agencies and other government entities should treat cloud environments, there are more unresolved issues than there are settled ones. One significant area that serves as an example of the inability of legislation and jurisprudence to keep up with the rapid pace of technological evolution is the extent to which reasonable expectations of privacy will apply to data stored in the cloud. A large proportion of seemingly relevant jurisprudence has considered privacy protections only in the context of emails, text messages, and other online methods of communication, but no substantial case law exists that addresses general personal information stored in the cloud, which by its nature cannot necessarily be viewed analogously to data stored in file folders on hard drives owned or maintained by the parties to whom the data belongs. One of the more comprehensive treatments of this topic comes in the form of an article published in the Minnesota Law Review last year by David A. Couillard, then a third-year law student, that provides an analysis of privacy expectations in the cloud in the context of Fourth Amendment principles and case law. Couillard's article examines the reasoning applied by various federal courts in determining the reasonableness of privacy expectations associated with personal possessions, computers, and various forms of communication, and concludes with a set of recommendations on how courts might apply Fourth Amendment precedents to cloud computing.

Key legal principles gleaned from precedent rulings applicable to cloud computing environments include the intent of at least some users of cloud services to keep data stored in the cloud private (satisfying a requirement for establishing a reasonable expectation of privacy following Katz v. United States), the idea that online environments where information is stored receive legal protection as "virtual containers" (following United States v. D'Andrea), and the limited impact on reasonable expectations of privacy that occurs simply because information is placed with a third-party intermediary such as a cloud service provider (following reasoning the courts applied in both Katz and D'Andrea). In the year since Couillard's article was published, his opinions with respect to expectations of privacy for information stored with intermediaries have been bolstered by additional rulings, particularly that of the 9th Circuit in Quon v. Arch Wireless, which found that under the provisions of the Stored Communications Act (SCA) a provider of text messaging pager services violated the law by turning over copies of messages stored on its servers to the City of Ontario (Calif.) police department, even though the department paid for the pager subscriptions of its employees. (The subsequent Supreme Court ruling that reversed the primary finding in Quon did not contradict the 9th Circuit's reasoning with respect to the service provider's actions and the protections afforded by the SCA.)

Couillard argued in his article that courts should recognize society's reasonable expectation of privacy in the cloud as they have done previously with respect to other technologies and media of communication. He cites the increasing willingness of people and businesses to put their information in the cloud as evidence that there is some societal expectation that privacy can and will be protected there, and such societal expectations have been factored into prior judicial decisions about expectations of privacy as other forms of technology matured and became pervasive. He also recommends that courts treat online storage environments like web servers as equivalent to physical containers when considering their protection from searches, as the court did in D'Andrea, including recognizing concealment mechanisms like passwords and encryption as satisfying individual expectations that privacy will be maintained. Finally, he posits that courts should treat cloud service providers as "virtual landlords" and apply third-party doctrine narrowly to data stored in the cloud.

The amorphous nature of cloud environments raises a challenge to conventional legal procedures such as obtaining search warrants, since the scope of a warrant has to be specified, which in online contexts means the boundaries of virtual containers need to be established. Delineating such boundaries is further complicated by the fact that in networked environments data need not be uploaded to the cloud to be accessible via the cloud, yet clearer legal precedents apply to data stored by businesses or individuals on local computer hardware than to data stored online by a third party. These boundaries are potentially least clear when data from multiple parties is collocated in the same storage environment, but courts have previously held different user accounts or even different file folders to be separate "containers" for the purposes of defining search boundaries, and the same sort of reasoning would allow data belonging to different persons to be treated distinctly, even if it resided on a single hard drive.

With so much of the current privacy and Fourth Amendment debate centered on the privacy of electronic communications such as emails (including the storage of those emails after they have been sent and received), what remains to be seen is how general content stored in the cloud will be treated. The simple analogy applied to email communications is that the sender and receiver information in an email is much like the destination and return address on an envelope (to which no reasonable expectation of privacy applies), while the contents of the envelope are subject to expectations of privacy, even if no stronger protective mechanism exists than the adhesive seal. The courts' recent distinctions between transactional information and content are not always straightforward to apply in cloud computing contexts, especially given the potential to describe common user interactions with online data sources, such as searches, as transactional exchanges. In addition, because many of the underlying statutes were written at a time when current communications technology did not exist or was not widely used, some aspects of the nature of those technologies are still openly debated. For example, when the Justice Department filed a Section 2703(d) order against Yahoo to get the company to turn over the contents of email messages, the government argued that "previously opened email is not in 'electronic storage'" and therefore did not deserve the protection of the SCA. (This takes the email-postal mail analogy to its logical extreme, implying that the greater privacy protections afforded communications contents evaporate once the envelope is opened.) On this point no authoritative ruling will be made, since the Justice Department withdrew its request for the emails, opting not to pursue the matter, perhaps in part due to strong objections from both online service providers and consumer privacy advocates.

On balance, it seems entirely justified for current or prospective cloud service adopters to harbor concerns about the disposition of their data stored online, not just in the face of threats of data loss, theft, or corruption, but also with respect to keeping the data private from searches. Most major online service providers, including Microsoft and Google, have policies and procedures in place for making customer data available to law enforcement, at least when presented with a subpoena or other valid legal order, but perhaps more important is understanding whether and under what circumstances warrantless searches of cloud environments might be allowed. For their part, cloud providers could do their customers and prospects a service by making their practices and policies in this area explicit. As the Yahoo scenario shows, such policies may not prevent attempts by government agencies to gain access to data stored in the cloud or other online environments, but they would help cloud users know where their providers stand.

Wednesday, August 25, 2010

Proposed SEC rule on asset-backed securities calls for troubling amount of personal information disclosure

In the continuing aftermath of the financial industry meltdown, and the contribution to that failure of insufficient oversight of large portions of the securities markets, the Securities and Exchange Commission has proposed significant changes to its Regulation AB, which provides rules for registration, disclosure, and reporting requirements for asset-backed securities, including mortgage-backed securities issued by entities other than government-backed or government-sponsored agencies such as Fannie Mae, Ginnie Mae, and Freddie Mac. While the vast majority of mortgage-backed securities are issued through one of these agencies, the increase in data disclosure requirements is mirrored in some provisions applying to assets underlying government-backed securities as well. Valid concerns have been raised over the level of due diligence that goes into the securitization process, particularly given the recent problems with sub-prime lending and lenders' willingness to offer mortgages to borrowers with little or no documentation. The proposed rules amending Regulation AB would, among other provisions, greatly increase the amount of asset-specific information that must be disclosed by an issuer in support of its asset-backed securities. In the case of securities backed by residential mortgages, the rules would require 137 discrete pieces of information, most of which relate to individual mortgages rather than groups of mortgages pooled for securitization, and many of which are personal data about mortgage holders. For example, information required to be disclosed about each obligor (the person or persons responsible for repaying the mortgage) would include:
  1. Credit scores and types of scores.
  2. Wage and other income and a code that describes the level of verification.
  3. A code that describes the level of verification of assets.
  4. Length of employment, with an indication of self-employment and a code that describes the level of verification.
  5. The dollar amount of verified liquid/cash reserves after the closing of the mortgage loan.
  6. The total number of properties owned by the obligor that currently secure mortgages.
  7. The amount of the obligor’s other monthly debt.
  8. Debt to income ratio used by the originator to qualify the loan.
  9. A code that describes the type of payment used to qualify for the loan, such as the payment under the starting interest rate, the first year cap rate, the interest only amount, the fully indexed rate or the minimum payment.
  10. The percentage of down payment from obligor’s own funds other than any gift or borrowed funds.
  11. The number of obligors on the loan.
  12. Any other monthly payment due on the property other than principal and interest.
  13. The number of months since any obligor bankruptcy or foreclosure.
  14. The obligor and co-obligor’s wage income, other income and all income.
This information is in addition to details about the loan and data that must be provided about the property itself, including its location, purchase price, appraised value, and other attributes which, even if they are not explicitly attached to a named individual, make individual identification trivial. Numerous privacy advocacy organizations have decried the "unprecedented release of individual-level financial data" that would result should these rules take effect in their currently drafted form.
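
To make the re-identification concern concrete, consider a minimal sketch (all field names, values, and records below are invented for illustration, not taken from the proposed rule). Public county property-transfer records routinely include buyer names, so matching just a disclosed property's location and purchase price against them will often narrow the candidates to a single household, at which point every other disclosed attribute attaches to a named person:

```python
# Hypothetical illustration of re-identifying an "anonymous" loan-level
# disclosure; all names, values, and fields are invented for the example.

# A few fields from a single disclosed asset record (no name attached)
disclosed_loan = {
    "property_zip": "20147",
    "purchase_price": 412_500,
    "credit_score": 681,
    "monthly_debt": 3_250,
}

# Public property-transfer records, which do carry buyer names
public_transfers = [
    {"buyer": "J. Smith", "zip": "20147", "sale_price": 412_500},
    {"buyer": "A. Jones", "zip": "20147", "sale_price": 389_000},
    {"buyer": "B. Lee",   "zip": "20151", "sale_price": 412_500},
]

# Two quasi-identifiers are often enough to produce a unique match,
# linking the obligor's credit score and debts to a named individual.
matches = [
    rec for rec in public_transfers
    if rec["zip"] == disclosed_loan["property_zip"]
    and rec["sale_price"] == disclosed_loan["purchase_price"]
]
print(matches)  # one record: "J. Smith" is identified
```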

The intent of the proposed rules is clearly to increase the level and quality of information about the assets underlying asset-backed securities, particularly to provide more visibility into the financial soundness of the individual assets. Given the lessons learned in the past few years about the risks of not conducting rigorous evaluations of these assets, the desire to improve the transparency of securitized assets seems entirely appropriate, but the privacy concerns are equally valid. The information the SEC would require will presumably be available to a wide variety of entities, particularly investors of all types that might consider buying the asset-backed securities once they are offered. This practical consideration presents the SEC with a significant problem in terms of limiting the disclosure of personal information, presuming it has an interest in doing so.

Saturday, August 21, 2010

Seattle public schools extend off-campus speech policies to online activity

As reported by local Seattle media outlets, the Seattle School Board — with oversight for public schools in a district serving 46,000 students — voted last week to adopt an update to its student Code of Prohibited Conduct, which among other provisions now apparently applies to student-authored content posted online, such as on social networking sites. The intent appears to be to prevent students from posting messages or other information about other students or teachers that could result in a disruption to school operations. The newly enacted rules seem to extend those already in force related to off-campus behavior, notably including a provision declaring the "District will respond to off-campus student speech that causes or threatens to cause a substantial disruption on campus or interference with the right of students to be secure and obtain their education." In a Seattle Post-Intelligencer article calling the policy controversial, a school board representative is quoted emphasizing the school board's focus on student safety and the desire by the board to be able to respond to any disruptive behavior. The district's policy defines a substantial disruption as "significant interference with instruction, school operations or school activities, violent physical or verbal altercations between students, or a hostile environment that significantly interferes with a student's education."

Initial objections to reports of the policy's enactment for the coming school year have unsurprisingly questioned the rules in light of free speech protections under the First Amendment. The language in the school district's code of conduct, specifically its use of substantial disruption of school activities, would seem to be an explicit and intended reference to legal principles established by the Supreme Court in 1969 in Tinker v. Des Moines Independent Community School District, the foundational judicial precedent covering student expression. In Tinker, the Court ruled that student expression (including speech, although the "speech" in question in the case was actually wearing armbands to protest the Vietnam War) could not be censored unless it "materially disrupts classwork or involves substantial disorder or invasion of the rights of others." This broad endorsement of free speech rights on campus served for almost 20 years to protect student speech in many forms, notably including student-authored content in school publications such as student newspapers. In 1988, however, the Supreme Court chose to constrain student free speech rights (or more accurately, to extend school administrators' ability to censor student speech) in Hazelwood School District v. Kuhlmeier, which affirmed the right of school administrators to censor content in student newspapers. A key distinction in Hazelwood is whether the speech appears in a public forum, as opposed to a school-sponsored one such as a school newspaper or yearbook (and presumably a school website). Since sites like Facebook and MySpace are clearly non-school-sponsored and also generally available to the public, school administrators cannot claim the right to censor student speech in these environments. However, to the extent the speech is not just disagreeable to the school district but might actually be disruptive to school operations or constitute threats, hazing, or other proscribed speech or behavior under existing school policies, administrators would appear to be on solid legal ground if they choose to respond to student speech expressed outside of the school environment.

Wednesday, August 18, 2010

Court rules that continuous GPS monitoring infringes on reasonable expectations of privacy

The U.S. Court of Appeals for the District of Columbia Circuit offered some new judicial insight into reasonable expectations of privacy when it issued a ruling this month overturning the conviction of an alleged drug trafficker because the prosecution used evidence gathered via a global positioning system (GPS) device placed on the man's car and monitored over a month-long period. The police placed the GPS tracking device without first obtaining a warrant, following a legal precedent (from the 1983 Supreme Court decision in United States v. Knotts) that no warrant was required to use such a device to track a suspect on a single journey from an origin to a destination. The lawyers for Antoine Jones, the convicted man in this case, argued successfully that by "tracking his movements 24 hours a day for four weeks with a GPS device they had installed on his Jeep without a valid warrant" the police violated Jones' 4th Amendment protection against unreasonable search. On its face the ruling seems to contradict precedents from at least three other federal courts where warrantless use of a GPS device is concerned, at least in terms of putting a temporal constraint on the duration of the monitoring in question.

The underlying logic of the Knotts decision was that since a trip on public thoroughfares is by definition in plain view of the public, no one could reasonably claim an expectation of privacy applied to the trip, including the route taken or the destination. In contrast, where continuous monitoring was involved with Jones, the DC Circuit panel noted that "the whole of one's movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil." The prosecution relied on the aggregate information gathered over numerous trips (correlated with cell phone calls and other intercepted communications that were obtained with the use of warrants) to develop a pattern of Jones' movements that sufficed to convince the jury that he was engaged in cocaine trafficking. The prosecution had no direct evidence against Jones (such as possession of drugs), so the appellate court determined that without the GPS data the prosecution could not have secured a conviction, and reversed the conviction because the evidence was obtained in violation of the 4th Amendment.
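
The aggregation point at the heart of the ruling is easy to illustrate. A single GPS fix reveals little, but a month of fixes makes habitual locations and routines obvious. The sketch below (invented coordinates, and a simple grid-rounding helper that is this example's own assumption) shows how readily a pattern of life falls out of raw pings:

```python
from collections import Counter

# Hypothetical month of GPS fixes: (day, hour, latitude, longitude).
# Coordinates are invented; a real track would have thousands of points.
pings = [
    (1, 8,  38.9001, -77.0402),   # morning, same block every weekday
    (1, 13, 38.8977, -77.0365),   # midday, a second recurring location
    (1, 22, 38.9001, -77.0402),   # night, back at the first location
    (2, 8,  38.9003, -77.0399),
    (2, 22, 38.9002, -77.0401),
    # ... four more weeks of the same
]

def cell(lat, lon, precision=3):
    """Bucket coordinates into roughly 100m grid cells so nearby fixes group."""
    return (round(lat, precision), round(lon, precision))

# Count visits per cell, split by time of day
night_cells = Counter(cell(lat, lon) for d, h, lat, lon in pings if h >= 21 or h <= 6)
day_cells = Counter(cell(lat, lon) for d, h, lat, lon in pings if 9 <= h <= 17)

# The most frequent night cell is almost certainly "home"; the most
# frequent daytime cell is "work". Neither fact is visible in any
# single trip, only in the aggregate -- the court's core concern.
print("likely home:", night_cells.most_common(1))
print("likely work:", day_cells.most_common(1))
```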

Aside from the 4th Amendment implications of the ruling, the decision has raised a number of questions about the applicability of existing laws and judicial precedent to uses of new technology, particularly those that involve geo-location information. A judicial opinion that prolonged monitoring of a person's movements could constitute a search, and therefore, if performed by a government entity, fall under the provisions of the 4th Amendment, opens up speculation about the implications for social networking applications such as Foursquare, Twitter, and Facebook that can incorporate user locations based on information associated with the computers or mobile devices used to access them. On a different technological front, new concerns have been raised recently about the use of uniquely identifiable RFID transmitters, such as those used in many late model cars to communicate tire pressure to onboard automotive computers, and the potential for the RFID chips to be used to track user location and movements. The consistent theme in all these instances is the ability of technology to outpace legal, regulatory, and policy provisions about what uses of these technologies are acceptable, and how to guard against the technically feasible unintended or surreptitious activities these technologies enable.

Friday, August 6, 2010

Despite emphasis on risk analysis, health IT security won't change much under meaningful use

With all the talk about the need for effective security measures to protect personal health data stored in electronic health records and shared among organizations participating in health information exchanges, the decision about what security and privacy controls an organization actually puts in place remains highly subjective and therefore likely to vary greatly among health care entities. This is neither a new nor a particularly surprising problem in health information security given the structure of the laws and regulations that set requirements for security and privacy provisions, but in some ways the lack of more robust security requirements (and the complete absence of privacy requirements) in the administration's final rules on EHR incentives under "meaningful use" represents a lost opportunity. The security-related meaningful use measures and associated standards and certification criteria for EHR systems provide another instance of federal rules promulgated under the authority of the Health Information Technology for Economic and Clinical Health (HITECH) Act that, as implemented, fall somewhat short of the vision articulated in the law.

Where security and privacy laws are concerned, Congress has always shown reluctance to mandate specific security measures or technologies, in part to avoid favoring any particular technology, market sector, or vendor, and also because the authors of such legislation correctly assume that they may lack the technical expertise necessary to identify the most appropriate solutions, choosing instead to delegate that task to NIST or other authorities. The net result, however, is sets of "recommended" or "addressable" security safeguards or, in the case of explicitly required security controls, an endorsement of a risk-based approach to implementing security that allows organizations to choose not to put some controls in place, with appropriate justifications for those decisions. There is nothing inherently wrong with this approach; it embodies fundamental economic principles about security, particularly the idea that it doesn't make sense to allocate more resources to securing information and systems than those assets are worth. The problem lies in the reality that different health care organizations will value their information assets in different ways, will face different threats and corresponding risks to those assets, and will have different tolerances for risk that drive what is "acceptable" and what isn't, and similarly drive decisions about which security measures to implement and which to leave out.

From a practical standpoint, what might help build confidence in the security of health IT such as EHR systems would be a set of minimum security standards that all organizations would need to implement. The HIPAA Security Rule includes a large number of administrative, physical, and technical safeguards (45 CFR §§164.308, 164.310, and 164.312, respectively), but many of the "required" safeguards are described in sufficiently vague terms that compliance is possible with widely varying levels of actual security, and many of the most obviously helpful safeguards, like encryption, are "addressable" and therefore not required at all. Relatively few security standards and criteria were included for meaningful use stage 1, and most of the items that were included already appear somewhere in the HIPAA Security Rule, but what stands out about the standards and criteria is how little specificity they contain. The minor revisions to these security items in the final rules issued late last month should make it fairly easy for organizations to satisfy the measures, but will have little impact in terms of making EHR systems or the health care organizations that use them more secure. The only identifiable "standards" included are Federal Information Processing Standards (FIPS) publications covering cryptographic modules and encryption strength (FIPS 140-2) and secure hashing (FIPS 180-3), while everything else is described in functional terms that leave the details to the vendor providing the EHR system or the entity doing the implementation. Even the risk analysis requirement (the only explicit security measure in meaningful use) was reduced in scope between the interim and final versions of the rules: under meaningful use the required risk analysis need only address the certified EHR technology the organization implements, not the organization overall. This is markedly less than what is already required of HIPAA-covered entities (and, under HITECH, of business associates as well) under the risk analysis provision of the HIPAA Security Rule.
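
To see how little the named standards actually constrain, note that FIPS 180-3 specifies the SHA hash family and FIPS 140-2 governs validation of cryptographic modules; neither says where or how an EHR system must apply them. Below is a minimal sketch of the kind of integrity check the certification criteria describe only in functional terms, using Python's standard library (hashlib implements the FIPS 180-3 SHA-2 algorithms; treating HMAC-SHA-256 as the mechanism is this sketch's assumption, not anything the rule requires):

```python
import hashlib
import hmac

# FIPS 180-3 specifies the SHA family; SHA-256 is one approved choice.
def record_digest(record: bytes) -> str:
    """Hash an exported record so later alteration is detectable."""
    return hashlib.sha256(record).hexdigest()

# The criteria say only that systems must "verify that information has
# not been altered"; a keyed hash (HMAC-SHA-256) is one common way to
# meet that, and it also authenticates the sender, but nothing in the
# rule mandates this particular mechanism.
def record_mac(key: bytes, record: bytes) -> str:
    return hmac.new(key, record, hashlib.sha256).hexdigest()

exported = b'{"patient_id": "hypothetical", "rx": "..."}'
sender_mac = record_mac(b"shared-secret-key", exported)

# Receiver recomputes the MAC and compares in constant time
receiver_mac = record_mac(b"shared-secret-key", exported)
assert hmac.compare_digest(sender_mac, receiver_mac)
```

Two certified products could both satisfy the criteria with implementations as different as this sketch and a full encrypted channel provided by a FIPS 140-2 validated module, which is precisely the variability that the functional wording of the standards permits.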

Wednesday, August 4, 2010

Airline use of personal data on passengers likely not constrained by Privacy Act

A recent article on potentially troubling privacy practices by U.S. airlines posted on The Washington Post online highlights (unfortunately somewhat erroneously) some of the key differences in rules about personal data collection and use that apply to federal agencies versus those that cover commercial organizations like air carriers.  To comply with the information gathering requirements of the Transportation Security Administration's Secure Flight program, the airlines last fall began collecting the date of birth and gender of passengers, in addition to requiring that passenger name information on tickets exactly match the way names are represented on whatever official means of identification the passengers present for airport security screening.  In the Post article, the author speculated after receiving a birthday card from an airline on which he travels frequently that the airline was reusing the data it collected for Secure Flight on behalf of the government for marketing purposes.  In this case, it turns out that the airline had separately requested date of birth information from travelers through its frequent flier program, but the experience still prompted the question of just how the additional personal information being collected by the airlines could, or could not, be used for other purposes.

From a legal standpoint, the key issue is who collected the data from the passenger, and for what purpose (and under whose authority) it was originally collected. Generally speaking, federal agencies that collect personal information from U.S. citizens or legally resident aliens are required under the terms of the Privacy Act of 1974 to publicize the type of data to be collected and its intended use, and not to use the data for any other purpose beyond what was stated at the time of collection, unless they first obtain consent from the individuals whose information they hold. Commercial entities are not subject to the terms of the Privacy Act, unless the data collection they perform is done on behalf of the government. This means that in the case of Secure Flight, if the airlines only collect the information the TSA requires in order to give it to the government, the data collection falls under the Privacy Act and the airlines could not re-purpose the data for some other use, arguably even for customer service. However, if (as in the case of Southwest Airlines mentioned in the article) the airline already has the relevant information from its passengers, the Privacy Act does not apply and the company would be held accountable only for complying with the terms of its own privacy practices, as regulated by the Federal Trade Commission under the unfair and deceptive practices section of the FTC Act. For instance, several years ago, when it came to light that several airlines had provided actual passenger data to the government in association with a program developing an anti-terrorist passenger screening system, the actions by contractors working for the TSA, NASA, and other participating agencies were investigated as possible violations of the Privacy Act, but the legal complaints (ultimately dismissed) lodged against the airlines that provided the data charged only that they had acted contrary to their own published privacy practices.

The Post online article cites a security industry executive who suggests that irrespective of TSA's information gathering requirements for Secure Flight, the airlines are bound by FISMA, the Privacy Act, and other federal laws. This simply isn't true, as these laws apply only to federal government agencies, and "agency" in these laws is defined to mean only those that are part of the executive branch (e.g., Congress is not covered by FISMA). The actual accountability here depends very much on whether the airline is collecting data for its own purposes or on behalf of the TSA or some other government agency. If the former, then once the airlines have the data on hand they are legally permitted to use it in just about any way they wish (including selling it to third parties), although any anticipated uses of personal data on passengers should be included in their privacy policies.

Tuesday, August 3, 2010

HHS withdraws final health data breach notification rule for revision

The Department of Health and Human Services (HHS) announced last week that it has withdrawn the final version of its rule on Breach Notification for Unsecured Protected Health Information, which it had submitted to OMB for review in May. HHS gave no specific reason for wanting to reconsider the rule, other than to note the complexity of the issue. The Interim Final Rule for breach notifications that went into effect last September remains in force pending further action on the final rule.

HHS did note that it received over 100 comments during the interim final rule's 60-day comment period last fall, and there is some speculation that the decision to revise the rule again before finalizing it is due in particular to concerns over the provision that would allow entities suffering breaches to make their own subjective determination of whether the breach would result in "harm" to those whose personal data was disclosed. If an entity determined that no harm was likely to result, then the entity need not provide notification of the breach, either to HHS or publicly. Shortly after the IFR was published, objections to the harm provision were raised not only by patient privacy advocates but also by members of Congress, and unless the now-withdrawn final rule had been amended to strike that provision, it seems likely that additional efforts at either the federal or state level would have been undertaken to remove this notification exception.

There is an active debate over unauthorized data disclosures and potential or actual harm to the victims of such breaches beyond the health breach disclosure context. Lawsuits filed over breaches of personal information are routinely dismissed when the parties who bring the suits are unable to demonstrate that actual harm or injury has occurred, rather than merely that the potential for harm exists. The legal issue in these cases has little to do with privacy or, generally, with violations of breach notification laws, but with standards of civil procedure and tort liability requirements, which demand that plaintiffs be able to show actual harm in order to bring causes of action for negligence or for poor security or data handling practices. Having general or domain-specific breach notification laws on the books should in theory help overcome the negligence right of action issue, but at least in the case of federal health data breaches, that will only be true if organizations responsible for data breaches can't exempt themselves from notification because they believe (or have no evidence to the contrary) that the subjects of the breaches suffered no actual harm.