Work with federal information systems? Responsible for risk management, continuous monitoring, or FISMA compliance? Check out my book: FISMA and the Risk Management Framework.

Tuesday, March 30, 2010

Next law up for revision may be ECPA

Citing the drastic changes in the technological landscape since the law was first passed, a coalition of tech industry heavyweights has launched an effort to persuade Congress to update or revise the Electronic Communications Privacy Act (ECPA). The cooperative effort of the "Digital Due Process" coalition is notable for the inclusion of major privacy advocacy organizations as well, including the American Civil Liberties Union (ACLU), the Center for Democracy and Technology (CDT), and the Electronic Frontier Foundation (EFF). The ECPA is among the primary federal wiretapping statutes, as it prohibits interception and disclosure of "wire, oral, or electronic communications" both during communication activities and in storage. Despite the use of the phrase "electronic communications," the law has primarily been applied in the context of privacy protections for telephone and email, and one of the goals of the coalition is to extend the sort of protections in ECPA to a wider range of modern technologies, including mobile phones and the Internet. The technology vendors seem primarily interested in both simplifying the language in the law and extending its privacy protections to emerging information access and computing models like cloud computing and mobile devices. Among the primary objectives for the privacy advocates in the group are:
  • Privacy for communications and documents stored in the cloud
  • Protection against secret tracking of your location through mobile devices
  • Stronger protections against secret monitoring of communications over the telephone or the Internet
  • Limits on the amount of data the government can access for investigative purposes unless it relates to a specific criminal suspect
While no government endorsement of the coalition's aims has been made, Sen. Arlen Specter of Pennsylvania called publicly this week for extending federal wiretapping laws like ECPA to cover online photographic and video surveillance, such as the use of webcams. The primary driver behind Specter's statements is the ongoing investigation of the alleged incidents in the Lower Merion (PA) school district, where school network administrators remotely activated webcams in laptop computers issued to students and used the cameras to record students without their knowledge or consent (and without probable cause). The motivations are quite different but the message is the same: a law written nearly 25 years ago, before the advent of the Internet, cannot effectively be used to regulate communication using current technology unless the law is changed to keep pace with the technology.

Update 1:
It seems that when you get a coalition like this together, people in Washington take notice quickly. In a press release dated March 30, House Representatives John Conyers, Jerrold Nadler, and Robert Scott announced their intention to lead House consideration of reforms to ECPA, working through the Judiciary Committee, which Conyers chairs.

Update 2:
On Friday, April 2, Senate Judiciary Committee Chair Patrick Leahy announced that he will also take up consideration of ECPA in the Senate.

As cloud computing gains momentum, so does government attention to privacy and security

While still marked by more hype than tangible success, cloud computing remains an area widely viewed as inevitable in both commercial and public sector markets. Whether you accept the predictions of cloud service vendors or favor a more pragmatic take on this evolving market, the focus of the discussion has become "when" rather than "if" large-scale use of cloud services and technology will become pervasive. One of the factors reining in some of the enthusiasm about the cloud is concern over security and privacy, particularly the protection of data moved to the cloud. Against this backdrop there are calls from government leaders in both the United States and Europe to take proactive action to establish security and privacy requirements for cloud computing, and possibly even enact new legislation. In the U.S., the government-led Cloud Computing Advisory Council (sort of a re-focused IT Infrastructure Line of Business) has developed a cloud computing framework and this week announced its Federal Risk and Authorization Management Program (FedRAMP), which will develop a common set of security requirements in an effort to speed the adoption of cloud computing by federal agencies. This is just the most recent in a slew of workgroups, initiatives, and government and public/private collaborations on cloud computing, which neither separately nor collectively yet cover all aspects of what the government thinks it needs. In Europe, the biggest focus area is data security and privacy, to such a degree that some are now calling for a global data protection law. It remains to be seen whether privacy and security standards and requirements can be harmonized enough to make such an ambitious proposal a reality, but as industry groups such as the Cloud Security Alliance routinely point out, as long as the government approach to the cloud remains unclear — especially what the regulatory environment will look like — neither cloud service providers, technology vendors, nor government organizations (or even commercial enterprises) are going to be comfortable moving forward aggressively with cloud computing.

Saturday, March 27, 2010

Better access restrictions needed for medical information

A fair amount of attention is appropriately being focused on the need to maintain appropriate access controls on electronic health record systems and other sources containing personal health information. Among the HIPAA privacy provisions strengthened by the Health Information Technology for Economic and Clinical Health (HITECH) Act portion of the Recovery Act is the requirement that covered entities be able to provide an "accounting of disclosures" of personal health information to patients who request one. Prior to HITECH, the rules for recording disclosures included an exception for disclosures associated with routine uses such as treatment and payment, meaning for instance that a provider didn't have to record the fact that a patient's health record was being looked at in order to make a diagnosis or evaluate a treatment option, or to work out reimbursement details with an insurance provider covering the patient's care. HITECH removed these exceptions, so an accounting of disclosures must now include disclosures for all purposes. There remains some concern, however, that unless comprehensive record logging is used, instances where a record is merely accessed and read, rather than used in some type of transaction, may not be recorded. A big driver for concerns about incomplete tracking of accesses of patient data is the fear that personal information will be viewed by individuals other than the practitioners, billing administrators, or others who have a valid reason for accessing the records. Public opinion polls cited by health privacy advocates suggest that a majority of Americans are not confident that their health records will remain confidential if they are stored online.
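
To make the logging question concrete, here is a minimal sketch, in Python, of disclosure accounting that captures read-only views as well as transactional uses. The class and field names are invented for illustration and are not drawn from any EHR product or from the HITECH rule text.

```python
# Illustrative sketch only: a disclosure-accounting log that records every
# access to a patient record, including read-only views, so that an
# accounting of disclosures can cover viewing as well as transactions.
import datetime

class DisclosureLog:
    def __init__(self):
        self.entries = []

    def record(self, patient_id, user_id, purpose, action):
        self.entries.append({
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "patient_id": patient_id,
            "user_id": user_id,
            "purpose": purpose,   # e.g., "treatment", "payment", "operations"
            "action": action,     # "view" gets logged, not just "update"
        })

    def accounting_for(self, patient_id):
        # Everything recorded for one patient, for all purposes
        return [e for e in self.entries if e["patient_id"] == patient_id]

log = DisclosureLog()
log.record("P-1001", "dr.smith", "treatment", "view")    # read-only access still logged
log.record("P-1001", "billing07", "payment", "update")
print(log.accounting_for("P-1001"))
```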

What is lost in much of this discussion is that the problem of inappropriate access to personal health information is not limited to electronic record keeping, but is just as relevant to paper-based records. BBC News reported this week the results of a British National Health Service (NHS) inquiry made by the privacy and civil liberties advocacy group Big Brother Watch, which suggested that more than 100,000 non-medical staff currently have access to personal medical records stored by the NHS trusts in the U.K. The records involved include those in both paper and electronic form, but the British Department of Health implied in its response to the Big Brother Watch claims that the growing use of EHR systems will enable stricter access controls. It is a plausible argument, depending on the record-keeping environment in question, that by digitizing health records and applying access controls to the electronic systems, data can be better protected than if it is kept in paper form. For records maintained and used only in local provider environments, electronic access controls might be preferable to the physical security mechanisms used to secure paper records. However, once electronic records are put online or made available for health information exchange, the population of individuals potentially gaining access to the data in EHRs will far exceed the number of employees and other individuals who might feasibly gain physical access to paper records.

Friday, March 26, 2010

FTC settlement with Dave & Buster's shows broad range of security failures

In a notice published yesterday, the Federal Trade Commission (FTC) announced the terms of a settlement with entertainment chain Dave & Buster's, stemming from FTC charges that the company failed to adequately protect customer credit card information, allowing hackers to compromise the credit card data of over 130,000 customers and resulting in hundreds of thousands of dollars in fraudulent charges. The wording of the settlement statement faults Dave & Buster's for its alleged failure to make use of "readily available" security measures to protect its network from unauthorized access or to take "reasonable steps" to secure personal information collected from customers. These charges are the latest in a series of more than two dozen FTC cases involving faulty data security practices, in which the administrative complaints lodged by the FTC provide relevant examples of the legal principle of "due care." We touched earlier this week on the concepts of due care and legal defensibility, and FTC actions such as the one against Dave & Buster's follow the nearly 80-year-old federal legal precedent established by the decision in the T.J. Hooper case (60 F.2d 737 (1932)), specifically that failure to use available protective measures translates into legal liability for damages incurred.

Based on the FTC's allegations and the fact that the compromised data was credit card information, it is entirely likely that Dave & Buster's was also in violation of the Payment Card Industry Data Security Standard (PCI DSS), which includes specific requirements for cardholder data protection that must be followed by merchants accepting credit card transactions. The PCI Security Standards Council maintains the requirements framework for the DSS and other security standards, while compliance with and enforcement of the standards is typically handled by the payment card brands (Visa, MasterCard, Discover, American Express, etc.). Compliance (or the lack thereof) with PCI DSS or other security standards or regulations is outside the scope of FTC jurisdiction, so it remains to be seen whether Dave & Buster's will face any further sanctions. Under the terms of the settlement agreement, the company agreed not only to establish and maintain a security program to protect personal information, but also to biennial independent security audits for 10 years to monitor compliance with the settlement.

Thursday, March 25, 2010

Federal information security focus shifting to next-generation FISMA, continuous monitoring

While we have seen perennial efforts in Congress to revise or replace the Federal Information Security Management Act (FISMA) and shift government agencies' security focus away from compliance efforts and reporting mountains of paperwork on their information systems, momentum appears to be building in both the legislative and executive branches to define the next generation of federal information security. The common theme surfacing out of all this activity is the government's desire to move to a model of "continuous monitoring" as an improvement over the triennial point-in-time security evaluations that characterize federal agency security programs operating under FISMA.

Last month NIST released the final version of its revised Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, the latest product of a Joint Task Force initiative coordinated by NIST (representing civilian agencies) and involving the collaboration of the Department of Defense, the intelligence community, and the Committee on National Security Systems (CNSS). The change in title alone is noteworthy (as originally published in 2004, 800-37 was called Guide for the Security Certification and Accreditation of Federal Information Systems), as it is the largely documentation-based C&A process that has lost favor, despite the heavy emphasis on system accreditation in annual FISMA reporting. One of the fundamental changes in the revised 800-37 is the emphasis on continuous monitoring, which has always been an aspect of the C&A process, but which now gets a dedicated appendix describing monitoring strategy, selection of security controls for monitoring, and integration of continuous monitoring with security status reporting and overall risk management activities. NIST Computer Security Division director Ron Ross provided an overview of this and other current and planned changes to security guidance and recommended practices at a March 22 meeting of the ACT-IAC Information Security and Privacy SIG.
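
As a rough illustration of the conceptual shift, the sketch below contrasts the old point-in-time evaluation with recurring automated control checks. The check functions are hypothetical stand-ins (the control IDs SI-2 and AU-2 are borrowed from SP 800-53 purely as labels), and real continuous monitoring obviously involves far more than a polling loop.

```python
# A minimal sketch of a continuous-monitoring loop; the checks are
# hypothetical stand-ins, not NIST-specified tests.
import time

def check_patch_level():       # hypothetical check for SP 800-53 SI-2 (flaw remediation)
    return {"control": "SI-2", "status": "pass"}

def check_audit_logging():     # hypothetical check for SP 800-53 AU-2 (audit events)
    return {"control": "AU-2", "status": "fail"}

CHECKS = [check_patch_level, check_audit_logging]

def monitoring_cycle():
    """Run all control checks once; results feed security status reporting."""
    return [check() for check in CHECKS]

if __name__ == "__main__":
    for _ in range(3):                    # a triennial C&A runs once; this recurs
        findings = monitoring_cycle()
        failures = [f for f in findings if f["status"] == "fail"]
        print(f"{len(failures)} failing control(s):", failures)
        time.sleep(1)                     # in practice hours or days, not seconds
```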

For its part, OMB released the FY2009 FISMA Report to Congress, which provides the customary annual summary of federal agencies' aggregate progress in cybersecurity, security incidents, security metrics, and privacy performance. The forward-looking section of the report spotlights plans to implement new security metrics for 2010, intended to provide real-time indications of performance and to improve situational awareness among agencies. OMB is also focusing on several key administration initiatives with an eye to their impact on security, including transparency and Open Government, health IT, and cloud computing. Federal CIO Vivek Kundra highlighted the same theme of shifting emphasis under FISMA toward continuous monitoring in a radio interview this week, and reiterated his key points while testifying before the House Committee on Oversight and Government Reform's Subcommittee on Government Management, Organization and Procurement at a March 24 hearing on "Federal Information Security: Current Challenges and Future Policy Considerations." Others testifying included State Department CISO John Streufert, whose approach to security management beyond the requirements of FISMA is regularly held up as an example of where government agencies need to go, and several individuals who have been active in the development of the Consensus Audit Guidelines (CAG) and its 20 Critical Security Controls. The consensus at the hearing seemed to be that current government security laws are insufficient and that FISMA in particular is due for revision.

Separately, both the House and Senate moved forward with draft information security legislation. The revised version of the Senate's Cybersecurity Act of 2010 (S.773) was unanimously approved by the Senate Commerce, Science and Transportation Committee on Wednesday, while in the House, Rep. Diane Watson of California introduced the Federal Information Security Amendments Act of 2010 (H.R. 4900). The agency responsibilities enumerated in the House bill lead with continuous monitoring, penetration testing, and risk-based vulnerability mitigation, as part of information security programs that would be overseen and approved by the Director of the National Office for Cyberspace — a position created through another provision in the bill that would be a Presidential appointee subject to Senate confirmation.

Wednesday, March 24, 2010

How much security is enough, and is the answer the same in a courtroom?

One of the recurring questions in information security management is how much security is "enough"? For organizations that have adopted risk-based approaches to information assurance, the level of security protection they put in place is directly correlated to the value of the assets the measures are intended to protect, and to the anticipated impact (loss) to the organization if those assets are compromised. That's all well and good from a management perspective, but the right risk-based answer may not be the right legal answer in the sorts of highly publicized data breaches, cyber attacks, and other security events that lead to losses not just by the organizations that suffer these incidents, but also by their customers, partners, or other stakeholders. If an organization suffers a breach that puts its customers at risk, what does the organization have to do to demonstrate it had appropriate security measures in place, and thereby minimize its exposure to tort liability?

One answer to this question lies in the legal principle of due care (sometimes referred to as "reasonable care"), which is the effort a reasonable party would take to prevent harm, and which is a core tenet of tort law. The classic legal precedent for the standard of due care is the U.S. Appellate Court ruling from 1932 in the T.J. Hooper case, which held the Eastern Transportation Company liable for the loss of cargo being transported on a barge towed by the Hooper (a tugboat), because the crew of the Hooper failed to use a radio receiver that would have allowed them to hear locally broadcast weather reports warning of unfavorable conditions. The court ruled that the loss "was a direct consequence" of the failure to use available safety technology, even though at the time the use of such radios was far from pervasive. Bringing this precedent forward to the modern computing age, the standard of due care means that if an organization suffers a loss, and the means to prevent the loss were available, the organization can be held liable for the loss due to its failure to use the available protective measures.

So what's clear from a legal perspective is that organizations have to make appropriate efforts to secure their assets from harm. But once again, how much is sufficient to meet the standard of due care? We have no conclusive answer to this question, but we were very pleased to see a discussion of the "legal defensibility doctrine" from Ben Tomhave, which nicely integrates the related ideas of legal defensibility, reasonableness of security efforts, and practical acceptance of the inevitability of security incidents. It also picks up on a theme expressed by others that conventional risk management (at least as commonly practiced) may be insufficient to arrive at appropriate levels of security, and may therefore leave organizations more legally vulnerable than they would like to be.

Monday, March 22, 2010

ONC to survey public on attitudes about health information exchange

Providing further evidence that the HHS Office of the National Coordinator (ONC) is increasingly focused not only on addressing personal privacy concerns related to the use of health IT and health information exchange but also on balancing privacy with functionality, ONC announced plans to conduct a large-scale survey of consumers on individual attitudes towards health information exchange and controlling disclosure of their personal health information. In a notice published in the Federal Register, ONC explained that "little is known about individuals' attitudes toward electronic health information exchange and the extent to which they are interested in determining by whom and how their health information is exchanged." In conducting the survey, which once begun is intended to reach more than 25,000 U.S. households, ONC hopes "to better understand individuals' attitudes toward electronic health information exchange and its associated privacy and security aspects as well as inform policy and programmatic objectives."

DHS planning Einstein pilot with commercial ISP

The Department of Homeland Security is apparently ready to move forward with a pilot of capabilities to test its Einstein 3 intrusion detection and prevention system. The plan is to work with a commercial service provider that is a designated Access Provider under the Trusted Internet Connection program (the DHS acronym for such a provider is "TICAP") to route live network traffic through the Einstein system and validate its technical capabilities, as well as the ability to route traffic flows and provide alerts and other appropriate notifications. Given the sensitivity of the program and the well-established privacy concerns over the prospect of the NSA and other government analysts poring over the full content of Internet traffic flowing to or from government networks, DHS conducted a special Privacy Impact Assessment (PIA) just for the pilot program. In the PIA, DHS lays out the objectives for the pilot "exercise":
  1. The ability of a TICAP to redirect agency-specific Internet traffic through the Exercise technology. 
  2. The ability of US-CERT, utilizing the Exercise technology, to analyze redirected agency-specific traffic to detect cyber threats, and to respond appropriately to those threats.
  3. The ability of US-CERT to develop techniques for supporting future EINSTEIN capabilities.
  4. The ability of US-CERT to potentially share cybersecurity-related information with appropriate organizations in real-time to coordinate the cybersecurity activities of the federal government.
  5. The ability of a TICAP to deliver the traffic back to the particular participating agency in a timely and efficient fashion.
One notable aspect of the network configuration planned for the pilot is that DHS will identify traffic associated with a particular federal agency (using IP addresses allocated to that agency) and redirect that traffic to a secure monitoring environment where the Einstein system will be installed. Such a configuration effectively pulls the relevant traffic out of the service provider's network, performs whatever analysis the system can do, and then puts the traffic back on the network, where it presumably proceeds along whatever route it was heading down in the first place. This is a subtle yet significant deviation from a truly in-line deployment (which might be envisioned for the Einstein system in some future production implementation); it allows DHS to focus on traffic for one agency at a time and would seem to minimize the overall volume of traffic passing through the Einstein system. Looking ahead to some possible future scenarios, such a configuration might let DHS and the NSA optimize their detection and prevention operations based on whichever agency is the source or target of the network traffic being analyzed.
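
The redirection scheme is easier to see with a toy example. The sketch below uses made-up agency netblocks from the reserved TEST-NET ranges to show the kind of address-based classification that would decide which traffic gets pulled into the monitoring environment; the real routing decision happens inside the provider's network, not in application code.

```python
# Toy classifier: which agency's allocated netblock does a destination
# address fall in? Netblocks here are illustrative TEST-NET ranges.
import ipaddress

AGENCY_NETBLOCKS = {
    "Agency A": ipaddress.ip_network("192.0.2.0/24"),
    "Agency B": ipaddress.ip_network("198.51.100.0/24"),
}

def classify(destination):
    """Return the agency whose netblock contains this address, i.e. the
    traffic that would be redirected through the monitoring environment."""
    addr = ipaddress.ip_address(destination)
    for agency, net in AGENCY_NETBLOCKS.items():
        if addr in net:
            return agency
    return None   # everyone else's traffic stays on its normal path

print(classify("192.0.2.17"))    # "Agency A" -> redirect through Einstein
print(classify("203.0.113.5"))   # None -> pass through untouched
```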

Congressional systems facing cyber attacks on all fronts

With the tremendous rise in security events observed by security administrators for both the Senate and House of Representatives, Congressional leaders are facing the reality that they need to do more to secure systems, data, and computing devices. Senate Sergeant-at-Arms Terrance Gainer provided some insight into the magnitude of the security problem while requesting an increase in his operating budget, including an additional $1 million to strengthen security. The problems stem both from the high visibility that Senate and House systems offer to attackers and from a general lack of security awareness among Congressional members and staffers alike. The trend of rising security incidents is pervasive across the federal government, with the number of incidents reported to the U.S. Computer Emergency Readiness Team (US-CERT) more than tripling between 2006 and 2008, providing some counter-evidence to any suggestion that federal information security is improving under FISMA. Still, the rise in security events reported for the Senate was 20,000 percent between 2008 and 2009. Of course, a security "event" is different than an incident, and the vast majority of the activity seen by the Senate is handled by the security measures put in place to provide just that sort of protection. However, even if you accept the premise that legislative data is more interesting or valuable as an attack target, it is hard to fathom that there aren't some fundamental (if unknown) aspects of Congressional network security that make these networks such attractive targets.

Whatever you believe about the effectiveness of federal agency security regulations, guidelines, and standards promulgated by NIST under the authority delegated to it by FISMA, it is at least interesting to note that legislative offices, systems, and data are not subject to any of the obligations imposed on executive agencies by the very laws that Congress has enacted. Even with common standards and guidelines, the specifics of federal information security management practices vary significantly among agencies, not coincidentally because each agency is responsible for making its own risk-based determinations of which threats, vulnerabilities, and risks it faces would result in an impact significant enough to demand mitigation. Congressional systems and security administrators support a group of users (members of Congress and their staffs) as demanding as any in the government, and who have shown reluctance to adopt even basic security measures if they interfere with convenience. It's also hard to imagine another part of the federal government in which active distrust among co-workers is so pervasive: among members of different parties, across different committees, and even between the two houses of Congress. Given this active threat environment, perhaps those in the legislative branch should follow some of the same advice they've put into words in the legislation they've written, and take a more proactive approach to risk assessment, incident response, and evaluation of the effectiveness of security controls.

Saturday, March 20, 2010

Addressing privacy is a top priority for health IT, but should it trump improving care?

The HHS Office of the National Coordinator (ONC) seems to be putting privacy protections (along with security) high on its list of priorities as it works to make widespread adoption of health information technology a reality. In a publicly released draft of ONC's updated "Health IT Strategic Framework," privacy and security is one of four major "themes" (the others are meaningful use of health IT, policy and technical infrastructure, and the learning health system) characterizing ONC's federal strategy for health IT. ONC puts particular emphasis on adhering to the privacy principles enumerated in the "Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information," which it released in December 2008 with the endorsement of then-HHS Secretary Michael Leavitt. In general, this Framework brought forward and augmented the Fair Information Practices contained in a 1973 report from the Department of Health, Education, and Welfare, practices that formed the basis of the Privacy Act of 1974 and the OECD Privacy Principles. The 2008 Framework has eight core principles, which are essentially the same as what the OECD specifies, with the addition of principles of individual access and correction.

From a personal privacy standpoint, it's hard not to see the implied priority from ONC as a positive development, but given the ambitious goals for health information exchange the government has held since 2004 and re-emphasized in the HITECH Act, some serious balancing among priorities is likely to be needed. The Strategic Planning workgroup of the Health IT Policy Committee has taken up this debate with specific attention to realizing the goal of using health IT to "transform the current health care delivery system into a high performance learning system," in which greater access to information may improve the delivery and quality of health care. While protecting individual rights like patient privacy and honoring consumer preferences are seen as prerequisites for gaining acceptance of electronic medical records and data sharing through health information exchange, the workgroup seems to understand that some benefits of greater information sharing may be too compelling to forgo in the name of guaranteeing privacy. As workgroup member Don Detmer said at the group's March meeting, "We should not force privacy to be more important than health."

Another point of reference on the relative importance of privacy is the absence of any specific measures, criteria, or standards for privacy in the rules on meaningful use. The healthcare providers, professionals, and organizations eligible to seek the incentive funding to which the meaningful use determination applies are all HIPAA-covered entities, so there is an assumption that these entities' obligations under the HIPAA Privacy Rule make a separate meaningful use privacy requirement redundant. The language used in the Federal Register publication of the meaningful use Notice of Proposed Rulemaking included a recommendation that providers follow the principles in the Nationwide Privacy and Security Framework, but that direction is advisory rather than binding. The American Hospital Association, in detailed comments on the proposed rules, objected to references to the Nationwide Privacy and Security Framework principles, primarily because in some instances they exceed what is required of healthcare providers under HIPAA. For others, such as the Coalition for Patient Privacy, the lack of explicit privacy requirements for meaningful use is more problematic, particularly the lack of criteria to ensure that individuals (patients) can control the use or disclosure of the information in their electronic health records. The comment period on the meaningful use rules and criteria ended last Monday, so we should know in the next several weeks whether any changes are planned with respect to privacy requirements, but the strong emphasis so far on encouraging electronic medical record adoption and enabling exchange of information suggests that, to the extent meaningful use incentives are seen as a facilitator of health IT, the addition of privacy requirements that might constrain the progress sought by ONC is fairly unlikely.

Thursday, March 18, 2010

Whether you value privacy or not, the debate over online privacy is heating up

In honor of the 10-year anniversary this week of the International Association of Privacy Professionals (IAPP), it seems like a good time to take stock of the state of privacy — in general but especially online — and the active debate over whether privacy matters to people the way it once did. Depending on whom you listen to, no one cares about privacy anymore, or privacy has never been a more important concern and the fight to preserve and extend privacy protections is a significant undertaking. Regardless of where you might land on the continuum bounded by those two opinions, there is a constant struggle going on in many fields and industries right now to find the right balance between protecting privacy and letting organizations conduct their business. And whether or not you have a strong interest in protecting the privacy of your own information and that of others, it seems to be getting harder and harder to do.

On the social networking front, CNET columnist Declan McCullagh has now added to the positions espoused by Facebook CEO Mark Zuckerberg and Google CEO Eric Schmidt that the growth and enormous popularity of social networking sites is clear evidence that people just aren't concerned about the privacy of their personal information. McCullagh cites Google's new Buzz service and its quick rise in use as the latest evidence that no one cares about their privacy online, coming as it does in the face of well-publicized default configuration settings that many considered a critical privacy flaw — enough that the company quickly revised the questionable program behavior. The claims from the big tech execs are pretty remarkable given the indications each of them has personally given that they do at least care about their own privacy, even if they think none of their users have the same feelings. Regardless of the practical realities about how many people read and understand (or just ignore) privacy policies posted by online service providers, at the end of the day individual users who share lots of personal details online are doing so by choice, so it's not a big logical leap to say the erosion of privacy online is exactly what users want. In an effort to bring more credible opinions (less vested in getting people to share personal information) to the table, McCullagh and others have pointed to the words of federal appellate judge Richard Posner, who said in a 2008 interview that he thought privacy as a social good is "overrated" and that privacy is not "deeply ingrained in human nature." Upon fuller examination, what Posner believes actually appears to be more relevant to the contemporary discussions about the right trade-offs or balancing points between privacy and utility, efficiency, or convenience. This issue comes up almost daily in privacy discussions in healthcare and the move toward electronic health records, in press and other media access to government information, and of course, in social networking.

Of course, not everyone is drinking the privacy-doesn't-matter Kool-Aid. In a sideways response to his CNET colleague, Chris Matyszczyk first spelled out a lot of the claims from the no-privacy camp, but drew an important distinction between openly publishing trivial information and revealing really personal details, and ultimately concluded that privacy does matter to people, because people value having some things that they keep to themselves; even if that set of things varies from person to person, they all ascribe value to being able to exert some control over what gets shared and with whom. Privacy advocates have long argued that consumers really are empowered to make their preferences known — essentially to choose not to do business with companies that don't do a good job of protecting privacy — and it may be that companies in more traditional markets than social networking hold different perspectives about what customers expect. Honoring customer wishes to keep their information private does not seem to be something an online enterprise can afford to ignore if it hopes to have a successful future. Online movie rental powerhouse Netflix learned its lesson in this regard, after customers sued the company for violating its own privacy policies by allowing the movie rental preferences of some customers to be disclosed. For its part, the Federal Trade Commission seems to be giving fair notice to companies that consumer-focused changes in the way privacy protections are regulated may well be on the way, particularly given the general dissatisfaction with the notice-and-choice framework most companies currently rely on. We'll address in a forthcoming post the outcomes of the FTC's third (and last in the series) roundtable on exploring privacy, held on March 17.

Government agencies working to train their investigators to leverage data in social media

The Electronic Frontier Foundation (EFF) has published a set of information detailing some of the ways that U.S. federal agencies collect information from social networking and other online sites in the course of law enforcement investigations. The documents posted include training materials from the Justice Department explicitly on gathering evidence from social networking sites, preceded by a short memo that gives a little bit of context for the types of social networking behavior that might spark such an investigation. The documents that EFF obtained through a Freedom of Information Act filing are noteworthy in part for stipulations that government employees, including those doing the investigating, shouldn't use government computers to access the sites in question. While there may be a number of reasonable investigative justifications for using alternate-channel access, it calls to mind some of the other areas in which the use of government equipment or facilities is prohibited for certain activities (such as political activities covered under the Hatch Act), where government employees are more or less free to conduct these same activities on their own time using their own non-government resources.

Online investigation methods by law enforcement have received a lot of attention lately, especially in the wake of the publication of the Global Criminal Compliance Handbook leaked from Microsoft, which provides guidance and instructions to law enforcement authorities about the types of personal information Microsoft stores about users of its online services, how long it keeps that information, and how investigators can go about getting it. This was a particularly well-publicized example of the ways that major companies facilitate criminal investigations; laws in many countries require service providers in different industries to retain user information and make it available to authorized investigators when asked, and the U.S. government has also expressed an interest in establishing some of these requirements where they don't already exist.

Tuesday, March 16, 2010

Efforts to combat illegal music downloads again raise privacy issue over IP addresses

For quite some time we've been following the development of the legal debate in both the European Community and the United States over whether IP addresses can be considered personally identifiable information, and therefore handled under personal information privacy laws. In general, it seems that the American and European judicial systems are heading in opposite directions on this issue, with publicly stated opinions by both government officials and judges from some European countries that IP addresses should be considered personal information because, at least some of the time, they can be used to identify individual computer users. If we leave aside the legal rules of evidence (on either side of the Atlantic), we need not resolve the question of whether an individual who owns a computer can be held accountable for actions traced through an IP address linked to that computer. The point is that some authorities have held that you can track an individual through an IP address, and in the European regulatory structure, that puts the IP address within the scope of the Data Protection Directive (95/46/EC) and therefore severely restricts organizations from collecting or using this information.

This topic is back in the European spotlight this week due to a legal case in Ireland, which involves a settlement that a group of record companies worked out with Irish telecommunications leader Eircom in an effort to combat illegal music downloads by Eircom customers. Eircom agreed to identify customers using their IP addresses and disclose those identities to the record companies. The practice has yet to be implemented due to concerns over the potential privacy violation such a disclosure would constitute under national and European Community data protection laws, so now the record companies are seeking a legal ruling from the High Court (which has jurisdiction over all civil and criminal matters in Ireland, subordinate in authority only to the Irish Supreme Court) on the data protection issues involved. The issue at hand is not so much whether IP addresses do or do not constitute individually identifiable information — the IP addresses are being used specifically for the purpose of identifying Eircom users by name — but whether the evidence of wrongdoing by users who download music illegally outweighs the privacy protection. To the extent this argument implicitly accepts the personally identifiable nature of IP addresses, it represents another salvo in the European debate over this issue. Less than a month ago a French court ruled that an IP address cannot be used to positively identify an individual, a legal opinion that, if applied to the Irish case, would make the current request irrelevant, since it would seem to cast doubt on the "evidence" against individuals accused of illegal uploads or downloads.

Monday, March 15, 2010

Concerns over privacy, data anonymity, lead Netflix to abandon contest on improving movie recommendations

In a move reported by the Wall Street Journal online, responding to concerns raised by the Federal Trade Commission and in the wake of a settled lawsuit, online movie rental powerhouse Netflix announced that it is canceling a second planned contest intended to help the company improve its movie recommendations to members. As part of the first contest, which began in 2006 and which Netflix credits with improving its recommendation system by 10 percent, Netflix made available a database of member movie ratings, rental dates, and unique subscriber ID numbers, and had promised to add customer demographics such as age, gender, and zip code for the second iteration of the contest. The data were supposed to be sufficiently anonymized to protect Netflix member privacy, but University of Texas researchers Arvind Narayanan and Vitaly Shmatikov demonstrated that Netflix customers could be identified by comparing the member ratings in the Netflix-provided datasets with publicly posted ratings such as those on the Internet Movie Database website. Narayanan and Shmatikov published a paper describing the process they used to "re-identify" the anonymized Netflix customers in the datasets. One member, alleging that Netflix had caused her sexual orientation to become known, claimed in a class action lawsuit that Netflix had violated its own privacy policy with respect to guarding customers' personal information. Such a claim (when it has merit) is usually sufficient to get the FTC involved, inasmuch as violations of stated privacy policies can be considered unfair and deceptive trade practices, which are prohibited under Section 5 of the FTC Act. This case has broader implications beyond Netflix, of course, contributing as it does evidence in support of the argument that anonymization of personal records can be reversed through correlation with third-party data.
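
A toy version of the re-identification approach shows why the "anonymization" failed. The data and the scoring function below are invented for illustration; Narayanan and Shmatikov's actual method weights ratings of obscure movies more heavily, tolerates noise in ratings and dates, and quantifies the statistical confidence of each match.

```python
# Toy re-identification: match an "anonymized" subscriber's ratings
# against ratings posted publicly under real names.
anonymized_subscriber = {"Movie A": 5, "Movie B": 1, "Movie C": 4}

public_profiles = {                       # e.g., ratings posted on IMDb
    "alice": {"Movie A": 5, "Movie B": 1, "Movie C": 4, "Movie D": 3},
    "bob":   {"Movie A": 2, "Movie C": 5},
}

def overlap_score(anon, public):
    """Count movies rated identically; the real metric weights rare movies
    more heavily and allows small rating and date discrepancies."""
    return sum(1 for movie, rating in anon.items() if public.get(movie) == rating)

best_match = max(public_profiles,
                 key=lambda name: overlap_score(anonymized_subscriber,
                                                public_profiles[name]))
print(best_match)   # "alice" -- a likely re-identification despite the removed name
```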

Thursday, March 11, 2010

Health care entities need clear guidance on analyzing risk for meaningful use

There is but a single measure related to security and privacy in the "meaningful use" rules that will be used to determine the eligibility of health care providers to qualify for incentive payments for the adoption of electronic health record (EHR) technology. As currently stated in the Notice of Proposed Rulemaking published in the Federal Register in January, to demonstrate eligibility providers must "Conduct or review a security risk analysis per 45 CFR 164.308(a)(1) and implement security updates as necessary." The reference is to a regulatory requirement originally stated as one of the required administrative safeguards in the HIPAA Security Rule.

The fact that the privacy and security measure is already an obligation under HIPAA should in theory make this particular measure easy for HIPAA-covered entities to satisfy; the HIPAA Security Rule has been in force since April 2003, and the deadline for entities to fully comply with the rule elapsed in April 2006. Despite this requirement, however, not all healthcare organizations comply: the results of a 2009 security survey of 196 senior-level healthcare professionals conducted by the Healthcare Information and Management Systems Society (HIMSS) found that only 74 percent of these organizations actually perform risk analyses, and of those, just over half (55 percent) do so at least annually (that is, roughly 40 percent of all respondents conduct annual risk analyses).

If an organization does not conduct risk analyses, or does but is concerned that its process may not be sufficient to comply with meaningful use, what would be most helpful is guidance on just what is required and what should be covered in a risk analysis. The government tends to direct entities to guidance from NIST—specifically its Special Publication 800-66, An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule—and to CMS' Security Rule Education Paper Series, especially number 6 in the series, Basics of Risk Analysis and Risk Management. Both of these rely heavily on another NIST document, Special Publication 800-30, Risk Management Guide for Information Technology Systems, for the overall process to be followed.

For those preferring to seek guidance outside the U.S. federal standards, the ISO/IEC 27000 series of international standards covers risk assessment and risk management for information systems, particularly in ISO/IEC 27005, Information Security Risk Management, and the risk assessment section of ISO/IEC 27002, Code of Practice for Information Security Management. Anyone looking to follow this guidance on risk management or performing risk analyses should be aware that substantially all of it is written with a focus on risk assessments of individual information systems, not on organizations overall. This limitation is important because the risk analysis requirement under the HIPAA Security Rule is not limited to specific systems used by covered entities, but instead focuses on the protected health information itself. Organizations looking for more enterprise-level perspectives on assessing and managing risk can find relevant guidance in ISO 31000, Risk Management—Principles and Guidelines, within major IT governance frameworks such as ISACA's Risk IT Framework based on COBIT®, or in the Risk Management section of the Information Technology Infrastructure Library (ITIL®).
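
For readers who have never seen one, the heart of an SP 800-30-style risk analysis is a simple qualitative rating exercise. The sketch below uses illustrative three-level scales and made-up threats for a healthcare setting; SP 800-30 defines its own likelihood and impact levels, and nothing here should be read as a compliance checklist.

```python
# Qualitative risk rating in the spirit of NIST SP 800-30; scales and
# thresholds here are illustrative, not the publication's own tables.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_level(likelihood, impact):
    """Combine likelihood and impact ratings into an overall risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    return "moderate" if score >= 3 else "low"

threats = [  # (threat, likelihood, impact) -- invented examples
    ("stolen laptop with unencrypted PHI", "moderate", "high"),
    ("weak passwords on EHR accounts", "high", "moderate"),
    ("flood damage to paper records room", "low", "moderate"),
]

for threat, likelihood, impact in threats:
    print(f"{threat}: {risk_level(likelihood, impact)} risk")
```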

Wednesday, March 10, 2010

Recommended reading: clear analytical insights in a cluttered sea

With all the attention focused on privacy and security these days, any significant development or incident gets tremendous online coverage. This is at once a good thing and a terrible problem. We've noted before the difficulties in sorting through the sources of information available online, in particular the problems with determining the true state of events among conflicting published accounts, and also what can happen when misinformation propagates rapidly across the Internet. A notable recent example of this last issue was the widely circulated rumor of Supreme Court Chief Justice John Roberts' imminent resignation, a bit of misinformation apparently originating in a Georgetown University law professor's lecture, ironically on the subject of the reliability of anonymous sources.

In this environment it is therefore remarkable to find cogent, thoughtful, well-reasoned analysis about a high-profile event, incident, or trend. Today we have two to share, and we have Twitter to thank for bringing them to our attention. First, on the topic of the recent legal ruling in Italy finding three Google executives guilty of violating privacy laws: the public response to this case has been dominated by sentiments that the ruling represents a grave threat to freedom of expression on the Internet. In stark contrast comes an article from EPIC Executive Director Marc Rotenberg published through the Huffington Post (and brought to our attention by Bruce Schneier) that provides a clear and straightforward legal analysis of the law on which the decision was based, and highlights the logic of the legal arguments by comparing the Italian personal data protection law to the arguments that provided the basis for the earliest legal protections of the right to privacy in the U.S. In so doing, Rotenberg not only explains the completely rational legal basis for the ruling, but also shows all the virtual hand-wringing about implications for ISP liability to be largely irrelevant.

On another front, ever since Google's public disclosure about the attacks against it in China and the speculation and allegations as to whether the attacks were state-sponsored hacking, there has been a marked increase in attention on the concept of the advanced persistent threat (APT). Unfortunately, many of the people and organizations now talking about APT either don't seem to understand the concept, or diminish its significance by incorrectly likening it to everyday security breaches, or simply use the fear, uncertainty, and doubt surrounding this class of threat to market their products and services, whether or not those offerings have any bearing on the problem or its mitigation. Blogger and incident response expert Richard Bejtlich has been particularly vocal on this topic and especially incensed at its frequent mischaracterization, taking to Twitter to criticize or ridicule vendors or purported security experts who perpetuate these misconceptions. Against this backdrop comes a wonderfully accurate assessment of the whole APT issue from Sourcefire's Matt Olney (who tweets under the handle @kpyke), which came across our feed courtesy of Joel Esler, also of Sourcefire (creators of Snort and other intrusion detection and prevention tools). Olney's post on the Sourcefire VRT blog is well worth a read.

Monday, March 8, 2010

Senate sees exponential rise in computer attacks, might be time to rethink security posture, not just spend more to respond

In comments justifying a requested $15 million operating budget increase for fiscal 2011, the Senate Sergeant-at-Arms stressed the need to improve computer security in the face of an extraordinary rise in security "events," which reportedly went from 8 million per month in 2008 to 1.6 billion (yes, billion) per month in 2009, and are still climbing. The Senate security operations center apparently sees nearly 14 million attempted attacks or other events every day. Managing IT security for the Senate's computing and network infrastructure is among the responsibilities of the Sergeant-at-Arms, which also provides a variety of support services to U.S. senators and Senate and committee offices, such as printing, direct mail, audio and video recording studios, and wireless telecommunications services through Verizon, the Senate's preferred provider. With that kind of increase in attack activity directed at your environment, you'd want more resources too, but it might also be a good time to look at your environment to see if there are any architectural or design characteristics contributing to the volume of attacks coming in, particularly including the visibility of Senate network infrastructure to outsiders.

The core computing operations for the Senate Sergeant-at-Arms reside in the Postal Square building in the shadow of Union Station in northeast Washington, DC. From this central location, the Sergeant-at-Arms oversees a wide-area network providing connectivity not only to Senate offices on Capitol Hill, but also to all home-state Senate offices across the country. The computing infrastructure is segregated according to political party, at least since the 2004 incident when Republican Senate staffers allegedly took advantage of the fact that Democratic and Republican files were co-located on the same server to gain unauthorized access to Democratic files. The Senate, like many federal agencies both large and small, does not use network address translation (NAT) and instead assigns IP addresses to its servers from its allocated netblock. Both the primary public-facing Senate web servers (www.senate.gov) and its intranet servers (us.senate.gov) are hosted by the Senate Sergeant-at-Arms, in contrast to the House of Representatives, for example, whose network configuration directs users requesting www.house.gov to edge content servers hosted by Akamai. Even allowing for the absence of NAT, it is somewhat surprising that the primary IP address for the intranet appears in publicly accessible nameservers, including the sen-dmzp.senate.gov primary nameserver for the senate.gov domain. The simple fact that the intranet server's IP address is so publicly accessible makes it far more likely that network probes and attempted intrusions will be launched against the Senate's internal network.
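
The observation about public name resolution is easy to check. The snippet below simply asks public DNS for the two hostnames mentioned above; results will vary over time, and resolution alone demonstrates only that an address is published, not that the host is reachable from outside.

```python
# Check whether a hostname resolves in public DNS.
import socket

for host in ("www.senate.gov", "us.senate.gov"):
    try:
        print(host, "resolves publicly to", socket.gethostbyname(host))
    except socket.gaierror:
        print(host, "does not resolve in public DNS")
```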

None of these configuration or network characteristics is new, so they have little explanatory value in getting to the root of the 200-fold increase in a single year in potentially malicious network security activity. It seems likely that the change in administration and, specifically, the change in the political alignment of the Senate, coupled with the significance of some of the items it has taken up on its agenda, would serve to heighten its visibility and therefore make the Senate more attractive as a target, whether threats are intended to cause denial of service, disrupt operations, or just call attention to information security weaknesses. In light of the increased demands on security operations personnel, devoting a portion of what amounts to a budget increase of less than 7 percent seems unlikely to help the Sergeant-at-Arms really get a handle on its environment. It is possible that by distributing some of the perimeter infrastructure and network computing services, more attention could be focused on traffic filtering and intrusion detection and prevention, while also insulating the core support infrastructure for the Senate from potential disruption, data corruption, disclosure, or other loss.

Saturday, March 6, 2010

Is the recent focus on the "cyberwar" intended to build support for more government monitoring?

Homeland Security Secretary Janet Napolitano emphasized in her keynote speech at the RSA conference last week the need for greater collaboration between the government and the private sector in order to effectively address the cybersecurity challenges facing the U.S. In what amounted to an open call for participation by the private sector, Napolitano announced DHS' new National Cybersecurity Awareness Campaign Challenge, an initiative intended to come up with ideas on the best ways to raise security awareness not just among government agencies and private sector organizations, but among the public at large. The reiteration of what has become a consistent theme from administration officials comes amid an intensifying public debate about the state of information security in the U.S. and particularly the country's ability to protect its critical infrastructure from a major cyberattack. In recent days senior officials from both the current and previous administrations have taken sides on the issue of America's position in the "cyberwar." Outspoken former director of national intelligence Michael McConnell took to the op-ed pages of the Washington Post last weekend to argue both that our country is engaged in a cyberwar and that we're losing. Current administration cyber czar Howard Schmidt responded during an interview with Wired magazine during the RSA conference, declaring "There is no cyberwar." This debate was sparked to its current level of acrimony in part by the recently conducted Cyber Shock Wave exercise, some observers of which concluded that it exposed significant gaps in preparedness that called into question how effectively the government could respond to a large-scale incident if one occurred.

Leaving the semantic debate about the "cyberwar" aside, what seems unambiguous is the government's intention to do more to establish and maintain situational awareness of the nation's critical infrastructure. Given how much of that infrastructure is owned and managed by the private sector, there doesn't seem to be a feasible approach to improving overall cybersecurity without the private sector playing an integral role. In this context it also seems non-coincidental that the government is giving public notice of its intention to someday provide comprehensive monitoring of all critical infrastructure, not just government networks. The mechanism for this would presumably be the Einstein program, administered by DHS but operated by the National Security Agency (NSA), which has long alarmed privacy advocates concerned about the prospect of the government potentially reading the personal communications of private citizens. Some in the media are now suggesting that the cyber-hand-wringing by McConnell and others is really intended to garner public support for the expansion of telecommunications monitoring programs by the government. Whether or not you find this argument convincing, there is a pretty strong precedent, in the form of the USA PATRIOT Act, for the government using evidence of weaknesses in the national security posture to greatly extend government authority in the name of national security, at the expense of civil liberties and personal privacy rights.

    Hacking of high school grading system raises key security practice issues

    Although it is one of the top-ranked schools in high-performing Montgomery County, Maryland, in the past few months Winston Churchill High School has been more noteworthy for the alleged hacking by students of the school's grade reporting system, resulting in changes to as many as 54 grades. The investigation into the hacking incident is now a criminal one, and not all the details have been disclosed, but from what has been reported, several key issues emerge in terms of the security practices (or the lack thereof) that may have facilitated the intrusion. At a minimum these issues provide food for thought for other organizations reviewing their own security controls, and they offer useful points of reference for any organization assessing its own computing environment.

    The attack scenario described in published media reports suggests that up to 8 students were involved in first capturing teacher passwords to the grading system with the use of a keylogger or similar program contained on a USB drive attached to a school computer. Once the passwords were obtained, the students were able to gain access to the grading system on multiple occasions and make changes to grades. It seems that the students in question had routine authorized access to the computers used to access the grading system, and there is no mention of whether the grading system can be accessed remotely. Looking at the incident from a defense-in-depth perspective, there appear to have been exploitable vulnerabilities at multiple levels, including at least in the physical, platform, application, and user layers, and possibly the network layer as well.
    • Students had unsupervised physical access to school computers sufficient to allow the placement of the keylogging devices on the computers and, after passwords had been captured, to use the computers to access the grading system and make changes. Given the sensitivity of applications and corresponding data accessible from these computers, physical access should either be monitored more closely if valid reasons exist for students to use the computers, or better yet, access to these computers should be restricted to faculty and administrative staff only.
    • Without knowing what sort of network or system-level monitoring was in place at the school, it is hard to say whether the attachment of the USB drives containing the keylogging program went unrecorded, or was recorded but unnoticed. In either case, the fact that USB drives could be plugged into school computers without any sort of scanning or verification gave the hackers a critical opening. There is a big difference between a USB drive functioning purely as a file storage device and one from which a malicious application can run undetected, so assuming disabling the USB ports is not practical due to legitimate uses of USB devices, end-point device monitoring or even closer review of Windows security and event logs would presumably give technical administrators enough visibility into what's happening on the computers to close down this attack vector (a minimal sketch of this kind of removable-media monitoring appears just after this list).
    • The grading system would appear to provide user authentication and authorization based only on usernames and passwords, which may or may not be appropriate given the perceived risk to the school of an intrusion into this system. The use of a keylogger renders moot the question of password strength, although in the wake of the attack school administrators apparently did urge teachers to change their passwords immediately, and to do so again on a regular basis, suggesting that users were not required to change their passwords periodically.
    • On a positive note, it appears the grading system did log all record updates, including tracking which records (and grades within records) were changed and at what time, but unfortunately not by which user. This audit log gave the school some ability to reconstruct the unauthorized changes, although the school had to enlist its teachers, asking each of them to review their grades. It is not clear whether any sort of log inspection occurs or whether alerts are generated from the logs, potentially based on factors such as the number of times a single grade is changed, the time lag between changes (especially changes after the end of the grading period), or the number of grades changed in a single session for a given user. Automated log analysis of this sort would go a long way toward identifying suspicious grade changes more quickly; a sketch of such an analysis appears at the end of this post.
    • Despite the fact that transactions like grade changes are recorded, the unauthorized changes apparently came to light only because a teacher noticed discrepancies in his or her own grades. This is one of the hardest elements of the story to understand, as it implies that over a period of a semester or longer, individual teachers were not sufficiently detail-oriented to recognize grade changes in their own classes. It's not a stretch to think that most or all teachers keep some paper-based grading records to support the entry of course grades in the system, so the raw data should exist to help investigators as they examine the grade records of all students.
    • The level of security awareness among users may be somewhat less than it should be at the school. It may be unreasonable to assume that an average user would visually inspect the computer he or she was using, and it's entirely likely that the keylogger-containing USB drive was attached to a port on the back of the machine or other unobtrusive location. Organizational security awareness (or more generally, risk awareness) also seems sub-optimal, based on no other evidence than the permitted student use of faculty computers without supervision.
    • As noted previously, there is nothing in published reports to suggest that the grading system can be accessed remotely, whether over the Internet using a Web-based interface or perhaps after establishing a VPN session or other secure connection to the school's network. Many school districts run centralized computing resources, including administrative systems such as grade reporting and online classroom applications, so network-based access appears likely, and remote access is at least feasible. While the ability to access the system remotely might facilitate student hacking efforts (removing a risk of being caught while misusing a school computer), the use of additional network access credentials (such as a separate username and password for a VPN connection) would provide an additional layer of security for scenarios not involving student use of on-site workstations.
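    To make the end-point monitoring idea a bit more concrete, here is a minimal sketch of removable-media detection in Python, assuming the third-party psutil package and its reporting of Windows drive types; the alerting logic is purely illustrative, and a production deployment would more likely rely on native Windows device auditing or dedicated end-point protection software.

        import time

        import psutil  # assumed available; reports drive options such as "removable"

        def removable_mounts():
            # Collect the mount points whose options mark them as removable media.
            return {p.mountpoint for p in psutil.disk_partitions(all=False)
                    if "removable" in p.opts}

        def watch(poll_seconds=5):
            known = removable_mounts()
            while True:
                time.sleep(poll_seconds)
                current = removable_mounts()
                for mount in current - known:
                    # A real deployment would raise an alert or write to a
                    # central log rather than print to the console.
                    print("ALERT: removable volume attached at %s" % mount)
                known = current

        if __name__ == "__main__":
            watch()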
    The most positive aspect of this incident appears to be the simple fact that the unauthorized changes were discovered at all, although there is still some question as to how long the changes had been occurring. Subsequent news reports placed the number of teacher gradebooks involved in the unauthorized changes at 35, far more than originally reported. It may be that the student hackers were victims of their own ambition; had they changed fewer grades, they might have escaped notice, or at least delayed the discovery of the intrusion.
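    For readers who want something concrete, here is a minimal Python sketch of the kind of automated audit-log review suggested in the list above; the record layout, cutoff date, and thresholds are hypothetical, since the school's actual system has not been described in that level of detail.

        from collections import Counter
        from datetime import datetime, timedelta

        GRADING_PERIOD_END = datetime(2010, 1, 29)  # assumed end of the grading period
        MAX_CHANGES_PER_GRADE = 2                   # assumed tolerance before flagging

        def flag_suspicious(changes):
            """changes: iterable of dicts with 'student', 'course', 'changed_at' keys."""
            alerts = []
            # Flag any single grade changed an unusual number of times.
            change_counts = Counter((c["student"], c["course"]) for c in changes)
            for grade_key, count in change_counts.items():
                if count > MAX_CHANGES_PER_GRADE:
                    alerts.append("%s: grade changed %d times" % (grade_key, count))
            # Flag changes made well after the grading period closed.
            for c in changes:
                if c["changed_at"] > GRADING_PERIOD_END + timedelta(days=7):
                    alerts.append("%s: changed after the grading period closed"
                                  % ((c["student"], c["course"]),))
            return alerts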

    Friday, March 5, 2010

    Microsoft working with German government to implement claims-based ID cards

    While promoting the release of its Forefront Identity Manager product set during this week's RSA conference in San Francisco, Microsoft announced its support for a prototype national ID card system in Germany designed to let each citizen use a single ID card while precisely limiting the personal information disclosed to the minimum necessary to perform a given function or complete a specific transaction. This is a practical implementation of claims-based identity management principles, which Microsoft (among many others) has been advocating for several years. Even without going to the level of a national identity system, giving users the ability to manage all their identity attributes while limiting the disclosure of personal data to just what's needed is a promising approach within specific industry contexts such as healthcare. The U.S. federal government, through agency-specific initiatives as well as the efforts of the Identity, Credential, and Access Management (ICAM) Subcommittee of the Federal CIO Council, is pushing forward with federated identity management following a user-centric approach built on open identity standards, while continuing to work through some of the key security and privacy challenges associated with this approach.
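    For illustration, here is a minimal Python sketch of the minimum-disclosure idea at the heart of claims-based identity; the attribute names and the derived "is_over_18" claim are hypothetical examples, not drawn from any Microsoft or German government specification.

        from datetime import date

        # A citizen's full identity record, held by the card or identity provider.
        FULL_IDENTITY = {
            "name": "Erika Mustermann",
            "birth_date": date(1985, 6, 12),
            "address": "Heidestrasse 17, Koeln",
        }

        def release_claims(identity, requested):
            """Release only the claims a relying party actually asked for,
            preferring derived claims to raw personal data."""
            available = {
                "name": lambda i: i["name"],
                # Answers the age question without disclosing the birth date.
                "is_over_18": lambda i: (date.today() - i["birth_date"]).days
                                        >= 18 * 365.25,
            }
            return {claim: available[claim](identity)
                    for claim in requested if claim in available}

        # An online age check learns only that the holder is an adult:
        print(release_claims(FULL_IDENTITY, ["is_over_18"]))  # {'is_over_18': True}

    The relying party never sees the birth date or address, which is the minimum-disclosure property the German prototype is intended to demonstrate.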

    Wednesday, March 3, 2010

    Read-only computer security hardware device claims to be hack-proof

    Despite the dismissal that such claims often invite, security start-up vendor InZero Systems is marketing a sort of hardware proxy device that it claims is hack-proof.
    As featured in an article in the March 8, 2010 issue of Business Week, the device runs its operating system from read-only memory, leaving malware and other invasive threats no foothold. Users place the InZero device between their own computer and the Internet; it presents a protected outward-facing interface while passing safe content through to the user. The article likens using the device to pointing a webcam at another computer to insulate yourself from anything malicious that might be out there. The company cites a fairly impressive list of penetration testers and other expert security evaluators, none of whom has apparently been able to compromise the device.

    Tuesday, March 2, 2010

    German court overturns anti-terrorism data retention law

    Today the Federal Constitutional Court of Germany struck down a law requiring telecommunications companies to retain individual user data on phone and Internet usage in case it is needed by law enforcement authorities in criminal investigations. The law was created in response to a European Union data retention directive (2006/24/EC), which obligated member states to store telecommunications data on citizens for at least six months and to make the retained data available to law enforcement or other authorized officials. In its ruling, the German court decided that the interest in combating terrorism and protecting national security was outweighed by personal privacy and data protection rights, and concluded that the law is unconstitutional. The court's ruling was lauded not just by privacy advocates and the thousands of German citizens who had appealed to have the law overturned, but also by some German government officials, despite the fact that the ruling is a rebuke to a high-profile initiative implemented by the current administration. Peter Schaar, Germany's Federal Commissioner for Data Protection and Freedom of Information and a member of the European Commission's data protection working party, noted that despite the intention of the data retention directive, as implemented the German law resulted in keeping "massive amounts of data about German citizens who pose no threat and are not suspects."

    This ruling provides a stark contrast to the efforts by lawmakers and senior justice officials in the previous and current administrations to enact laws that would require Internet service providers and other telecommunications companies to retain customer data. Both the House and Senate have drafted versions of the so-called Internet SAFETY Act, which is focused on curbing child exploitation but which requires, among other provisions, that electronic communication service providers retain user information for at least two years, with the aim of facilitating criminal investigations by law enforcement. When first introduced, the SAFETY Act raised an outcry among both privacy advocates and computer users, because one possible interpretation of the law's definition of "electronic communication service provider" would subject any home user whose network configuration allowed more than one computer to connect to the Internet to the data retention requirement. That debate notwithstanding, customer data retention is now an issue on which companies like Google, Yahoo!, and Microsoft, all of which vigorously defend their practices of retaining Internet search data, IP addresses, and other user information, are pulled in opposite directions: the FTC and other regulators urge them to store less personal information about users for less time, while Congress and the Justice Department would seem to prefer that they collect and hold even more data for longer periods, just in case it could help in a future investigation. Addressing the RSA conference this week, FBI Director Robert Mueller echoed the theme of private sector organizations doing more to cooperate with the government.

    On a somewhat less publicized front, major service providers in the U.S. already have processes and procedures in place designed to assist law enforcement investigations. In the wake of the disclosure of the Google attacks in China in January, security guru Bruce Schneier suggested that the attacks were facilitated by backdoor access to Google's systems that is in place to allow eavesdropping by government officials. Less than two weeks ago, a minor stir erupted when an allegedly leaked "Global Criminal Compliance Handbook" was published online, detailing procedures by which law enforcement can obtain access to data Microsoft retains on the users of its online services, such as Hotmail, MSN, and Windows Live. The document also includes information about the specific data elements that are stored and the retention periods for those data. The document was posted online, withdrawn ostensibly at Microsoft's insistence, and then surfaced again, and it is now readily accessible to Internet searchers seeking it. Microsoft has noted in its public comments following the disclosure of the document that it has the same obligation as all service providers to support authorized requests for information from law enforcement and to facilitate criminal investigations, so while Microsoft's guidelines may be garnering the most attention at the moment, comparable policies and procedures are likely in place at most if not all online service providers.

    It's hard to determine system security requirements in the absence of solution architecture

    In the health IT arena, a lot of energy is currently focused on the measures, criteria, and standards with which health care providers and other entities can demonstrate "meaningful use" of electronic health record (EHR) systems and thereby qualify for reimbursement and other financial incentives for adopting EHR technology, under a Recovery Act-funded program administered by CMS. In an interim final rule (IFR) released on December 30 that took effect on February 12, the Office of the National Coordinator, working through the Health IT Standards Committee (an advisory body also created by a provision of the Recovery Act), published a set of functional criteria and associated standards to be used to certify that EHR modules and systems can support meaningful use. As expected, much of the commentary submitted to the Standards Committee on the security-specific criteria and standards in the IFR focuses on establishing the appropriate level of specificity for functional criteria, and on when it makes sense to require the use of specific technical standards. In many ways the consideration of EHR systems in isolation mirrors the information system-centric approach to security favored by the federal government, and to the extent that the criteria in the rule will be used as a product certification checklist, this may be appropriate. However, when considering functional and technical requirements related to the way organizations using EHR technology will exchange information, it is essential to include the environmental context in which the systems operate, in order to assign requirements to the appropriate components in the overall solution.

    As a case in point, at a meeting on February 24 the Privacy and Security Workgroup of the Health IT Standards Committee identified in its comments and recommendations on the IFR what the workgroup calls a "critical gap" in the criteria and standards: they do not address the need to authenticate the end points of the secure communication channels that an organization using an EHR system must use to exchange information with other entities. At first glance it might make sense to require the EHR system to offer this capability, but in a typical point-to-point integration architecture between two entities, it is not that likely that the EHR system itself will serve as one of the end points in the transmission. What's far more typical is that any information to be exchanged will be transmitted using an integration gateway, adapter, application server, or even web server, depending on the type of information exchange being implemented. For instance, the service specifications for the Nationwide Health Information Network (NHIN), which include the required use of a mutually authenticated secure communication channel, presume the use of intermediary communication components such as the government-produced Connect open-source gateway software, to which internal entity systems would be integrated and which handle functions like establishing TLS sessions with information exchange partners, generating identification and authentication assertions, and applying digital signatures using entity-specific X.509 certificates issued to NHIN participating entities. An internal medical record-keeping system in the sort of NHIN-connected scenario envisioned by ONC wouldn't directly connect with any external systems at all, so there would be no need for the EHR system to establish a secure communication channel on its own. Of course, there are many potential ways to implement health information exchange that don't involve the NHIN, but the point is that any required certification criteria used in part to determine eligibility for EHR incentives should match the functional capabilities EHR systems will need in typical implementation scenarios, not in the abstract.
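    To make the architectural point concrete, the following is a minimal Python sketch of the kind of mutually authenticated TLS channel an intermediary gateway (rather than the EHR system itself) would establish; the host name and certificate file names are placeholders for illustration, not actual NHIN or Connect artifacts.

        import socket
        import ssl

        def open_mutual_tls_channel(host, port):
            # Trust anchor used to validate the exchange partner's certificate.
            context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                                 cafile="partner_ca.pem")
            # The gateway presents its own entity certificate and private key,
            # so the partner can authenticate this end of the channel as well.
            context.load_cert_chain(certfile="entity_cert.pem",
                                    keyfile="entity_key.pem")
            raw_socket = socket.create_connection((host, port))
            return context.wrap_socket(raw_socket, server_hostname=host)

        # channel = open_mutual_tls_channel("gateway.partner.example", 443)

    The point of the sketch is simply that the certificate and session handling lives in the gateway component, so an EHR system sitting behind it never needs this capability itself.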

    Monday, March 1, 2010

    French court rules IP addresses are not personal data

    In something of a departure from a trend in some European countries toward treating IP addresses as personally identifiable information, a French appeals court last week determined that an IP address cannot be used to positively identify an individual computer user. The case reached the appellate level over the question under French law of whether prior authorization to collect IP addresses was needed from the National Commission for Information Technologies and Civil Liberties, as the French Data Protection Act requires before personal data may be processed. One interesting aspect of the ruling is that the French appellate court in this case and German, British, and other European data protection officials are considering exactly the same issue but arriving at opposite conclusions. The opinions hinge on the question of whether an IP address can be used to uniquely identify an individual computer user. The French court said that it cannot, while authorities in other EU countries have cited circumstances such as the static IP addresses assigned by some ISPs to conclude that at least some of the time an IP address can be unequivocally linked to a single person. Even in the case of a static IP address, there is a big leap between conclusively identifying a computer (the machine) and identifying the user operating it. If the intention is to hold the computer owner responsible for all actions performed using his or her property, there would seem to be little support for such an approach under current law. With a technically knowledgeable lawyer, you would also expect to see arguments questioning even conclusive identification of the computer, given the feasibility of spoofing media access control (MAC) addresses, not to mention IP addresses.

    Italian ruling against Google highlights US - EU divide on privacy

    The recent ruling in an Italian court finding three Google executives criminally liable for violating Italian privacy law by allowing a video to be posted on YouTube predecessor Google Video has been widely criticized in the U.S. and abroad, not only for the precedent the court appears to be trying to set (holding service hosting companies liable for the actions of the services' users), but also for the way the ruling appears to run contrary to existing European law. Regardless of the specific legal wrangling over the case or its pending appeal, the fact that the ruling came down the way it did at all is yet another illustration of the fundamental differences in the way privacy is viewed in European countries as compared to the U.S. As simply and accurately stated by Google's own legal personnel, the crux of the difference is that in Europe privacy is considered a human-dignity right, while in the U.S. it is treated as a consumer-protection right, particularly in the way privacy is legally protected. Privacy is explicitly enumerated in the European Convention on Human Rights, Article 8 of which states, "Everyone has the right to respect for his private and family life, his home and his correspondence." There is no such right in the U.S. Constitution, so in American jurisprudence the idea that privacy is a fundamental right rests on precedents established through a long series of rulings on other matters that collectively serve to establish a right to privacy.