Work with federal information systems? Responsible for risk management, continuous monitoring, or FISMA compliance? Check out my book: FISMA and the Risk Management Framework.

Friday, October 30, 2009

New CyberScope is another step in the right direction on federal security

This month the federal government launched a new online FISMA reporting application, CyberScope, based on the Justice Department's Cyber Security Assessment and Management (CSAM) system, which was already offering FISMA reporting services to other agencies through the Information Systems Security Line of Business initiative. As noted in a recent interview with federal CIO Vivek Kundra, the initial intent of CyberScope is to replace the heavy reliance on electronic documents submitted as email attachments with a centrally managed, access controlled repository. Kundra has also noted that he (along with Sen. Tom Carper and many others in Congress) would like to help move agency information security management away from emphasizing compliance and towards continuous monitoring and situational awareness. With any luck the use of online reporting will evolve to make the FISMA reporting process more automated and less onerous for agencies, while the content and emphasis of the FISMA reporting requirements continue to be revised and, hopefully, improved. As long as agencies are still reporting the same information under FISMA requirements, having a better mechanism to support that reporting won't do anything to address FISMA's shortcomings, particularly its failure to address ongoing assessment of the effectiveness of security controls implemented by federal agencies.

Over the past couple of years, NIST has made a renewed push to get federal agencies to apply consistent risk management practices to their information security management decisions. This is a worthwhile goal, but as the authority designated to establish federal agency security standards, NIST itself frustrates efforts to manage security in a risk-based manner by requiring extensive security controls for information systems based not on the specific risk profile of each system, but on a broad low-moderate-high security categorization. The practice of requiring the same set of controls for all systems categorized as "moderate," for instance, suggests that the risks associated with all "moderate" systems are the same. This is a false assumption that violates one of the fundamental principles of information security management: assets like systems and data should be protected to a level commensurate with their value, and only as long as they remain of value. This principle of "adequate protection," while less simple to implement in practice than it sounds in theory, is nonetheless a sensible approach for organizations trying to allocate their security resources in an effective manner. The goal of following system-level risk management practices demands an approach that differs from the "one size fits most" control requirements in current federal guidance.
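The "high water mark" logic behind this categorization can be sketched in a few lines. The impact levels follow the FIPS 199 scheme, but the example systems and their ratings are hypothetical illustrations, not real agency data:

```python
# Sketch of FIPS 199-style "high water mark" categorization, which drives
# baseline control selection under NIST guidance. The overall category is
# the highest impact level across the three security objectives.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall security category: the max across objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.get)

# Two very different hypothetical systems land in the same category, and
# therefore get the same control baseline despite dissimilar risk profiles:
public_site = categorize("low", "moderate", "low")            # defacement risk only
hr_database = categorize("moderate", "moderate", "moderate")  # PII throughout
print(public_site, hr_database)  # both "moderate"
```

The collapse of three per-objective ratings into a single label is exactly where the system-specific risk detail gets lost.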

Monday, October 19, 2009

Is de-identification of personal records possible?

Last month Harvard Magazine ran a fantastic article on privacy in the current era, focusing in particular on the work of researcher Latanya Sweeney, who has demonstrated a somewhat alarming ability to take personal data that has been de-identified in accordance with current technical standards and "re-identify" it through the use of publicly available data sources. Then last week the New York Times reported on two computer scientists at UT-Austin who had great success identifying individuals whose de-identified movie rental records had been provided by Netflix as part of a competition to improve the video rental-by-mail firm's automated recommendation software. Netflix went so far as to deny that it was possible to positively identify anyone in the data it provided, due to measures the company had taken to alter the data, and compared the de-identification measures it used to standards for anonymizing personal health information.

While it may be a bit of a leap to extrapolate the results of the Texas researchers to the health information domain, privacy advocates appear to have reason for concern. The frequency with which de-identified health record information is made available to industry, government, and research organizations, coupled with what seems to be a failure among many governing authorities to understand just how feasible it is to correlate these anonymous records with other available personal information sets, seems to be imparting a false sense of security around de-identification in general. As more attention is focused on this area of research, it may well turn out that current standards for de-identification simply cannot provide the sort of privacy protection they are intended to deliver.
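A toy sketch of the kind of linkage attack Sweeney demonstrated may help make the concern concrete. Every record, name, and field value below is fabricated for illustration; real attacks work the same way but against far larger datasets:

```python
# "De-identified" records often retain quasi-identifiers (ZIP code, birth
# date, sex). Joining on those fields against a public dataset, such as a
# voter roll, can re-attach names. All data here is made up.

deidentified_health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
]
public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1962-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(anon_records, public_records):
    """Link records that agree on every quasi-identifier."""
    matches = []
    for anon in anon_records:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        for pub in public_records:
            if tuple(pub[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((pub["name"], anon["diagnosis"]))
    return matches

print(reidentify(deidentified_health, public_voter_roll))
# A unique match on ZIP + birth date + sex re-identifies the patient.
```

The point is that removing names alone does little when the remaining attribute combination is unique or nearly so in the population.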

Stiffer U.K. penalties coming for personal data misuse

The British Ministry of Justice recently published proposed new penalties for knowingly misusing personal data in violation of section 55 of the Data Protection Act. The proposals raise the maximum penalty to include jail time, in addition to the financial penalty already applied under the law. The reasons cited by the U.K. government for proposing the stronger penalties include the need for a bigger deterrent to those who obtain personal data illegally, and the desire to increase public confidence in the legitimate collection, storage, and use of personal data. (Bear in mind that with a National Health Service and other major government programs, the U.K. government maintains centralized data on its citizens in a variety of contexts and for a variety of purposes, including health records.)

This overseas activity is paralleled to some extent by recent increases in domestic penalties for HIPAA violations (codified at 42 USC §1320d), as well as new requirements for the formal investigation of knowing and willful violations of the law. HIPAA and other U.S. privacy laws are often criticized for their lack of proactive enforcement (violations are currently reported voluntarily) and for the insufficient penalties imposed when violations occur. There is little movement in the United States to adopt the sort of strong citizen-centered privacy laws in force in the European Community, but it is nonetheless heartening to see risks to personal data taken seriously among major economic powers.

Early potential for national data breach regulation bears watching

Coming on the heels of numerous pieces of draft legislation from the U.S. Senate (including bills from Sens. Carper, Snowe, and Rockefeller) is last week's announcement by New York Congresswoman Yvette Clarke that she hopes to begin congressional hearings within the next few months on creating a national law for the protection of private data. Clarke, who chairs the House Homeland Security Subcommittee on Emerging Threats, Cybersecurity and Science and Technology, cites the ever-increasing incidence of identity theft and public demand for action as drivers to make both public and private sector organizations more diligent in protecting personal information and in disclosing breaches of that data when they occur.

This idea bears watching, not least for its potential to get past the industry segmentation in the private data protection and breach notification rules that currently exist, under which the clearest regulations apply to health records and financial data, though even those contexts have gaps. However, if the final version of the HHS rules on disclosure of health data breaches is any guide, any new legislation should not just extend to personal data used beyond health and finance; it might also best be crafted to remove some of the subjectivity and compliance discretion that organizations are allowed under existing federal rules, particularly the harm exception to disclosure for organizations suffering breaches of health data.

Security issues at NASA highlight challenges in control effectiveness

A report released this month by GAO on what it views as deficiencies in the information security program and security control effectiveness at the National Aeronautics and Space Administration (NASA) serves to highlight once again the challenge for organizations to move beyond compliance to ensure implemented security controls are actually doing what they are intended to do. Testing and demonstrating the effectiveness of security controls is a persistent challenge for all agencies, not just NASA, and the identified inconsistent and incomplete risk assessment procedures and security policies are also issues shared by many other agencies. What may be most notable about the findings in the report is the relatively basic level of some of the control weaknesses found at some of NASA's facilities, including poorly implemented password-based access controls, non-functional physical security mechanisms, and less than comprehensive vulnerability scanning and intrusion detection.

NASA has shown an unusual level of variability in its overall security program, at least as measured through the FISMA reporting process. While the agency has been trending better since fiscal 2006, when it received a D- on the FISMA scorecard, its progress since then has not equaled the level (B-) it achieved in 2005. The details in the most recent (FY2008) report to Congress portray the NASA infosec program as a work in progress, with strengths in its C&A process, training of security personnel, and privacy compliance, and with gaps in testing of security controls and contingency plans and in general employee security awareness training. NASA's written response to the GAO report (which, as is typical practice, was provided to the agency for comment prior to its public release) concurs with all eight of GAO's findings and recommendations, but notes that a number of these recommendations are already being addressed by program improvements underway as the result of internal assessments.

Friday, October 16, 2009

BCBSA data breach another lesson in policy enforcement

Recent news that the Blue Cross Blue Shield Association (BCBSA) suffered the theft of an employee's personal laptop that contained personal information on hundreds of thousands of physicians illustrates once again that it is not enough to have the right security policies in place; you also have to be able to monitor compliance and enforce them. In this latest incident, the employee copied corporate data onto a personal laptop, in violation of existing security policy. What's worse is that the data as stored by BCBSA was encrypted, but the employee decrypted the data before copying it. The employee obviously put the BCBSA at risk in a way its policies and database encryption controls were intended to prevent, and with the laptop stolen, the American Medical Association is taking action to notify member physicians who may now be at risk of identity theft.

Data stewardship and data handling policies are the first step, and encrypting the data at rest is a good follow-up, but what else can organizations like BCBSA do to avoid this sort of incident? It's not entirely clear how the data might have been transferred from the corporate computing environment to the personal laptop, but whether it was by thumb drive or even direct connection of the laptop to the BCBSA network, there are multiple technical options available to mitigate this type of risk. One answer might be data loss prevention controls that could be used to keep corporate data from being copied or written locally at all, whether the client computer was Association-owned or not. Encryption mechanisms can be added to provide protection in transit and during use, rather than just at rest. USB device access controls can be used to administer, monitor, and enable or disable USB ports when devices are plugged in to them, so for instance any attempt to use a non-approved thumb drive (perhaps one without device-level encryption) could be blocked. Network access control (NAC) can be used to gain awareness of (and prevent if desired) attempts to connect non-corporate computing devices to the network. Let's also not forget the importance of security awareness training, which is just as relevant now as it was for the well-publicized case of the VA employee who had a laptop with veterans' personal data stolen from home after taking the data off-site in violation of VA policy.
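As a rough sketch of how the content- and device-based controls described above might work together, consider the following. The patterns, device identifiers, and policy logic are hypothetical simplifications; commercial DLP and port-control products do far richer content inspection and device management:

```python
import re

# Minimal sketch of a DLP-style check: before a file is written to
# removable media, verify the device is approved and scan the content for
# patterns suggesting personal data. Everything here is illustrative.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy sensitive-data rule
APPROVED_DEVICES = {"CORP-ENCRYPTED-001"}           # hypothetical allow-list

def copy_allowed(content: str, device_id: str) -> bool:
    """Deny the copy if the device is unapproved or the content looks sensitive."""
    if device_id not in APPROVED_DEVICES:
        return False  # mirrors USB device access control (unapproved media)
    if SSN_PATTERN.search(content):
        return False  # mirrors content-aware data loss prevention
    return True

print(copy_allowed("member id 123-45-6789", "CORP-ENCRYPTED-001"))  # blocked: sensitive content
print(copy_allowed("quarterly summary", "PERSONAL-USB"))            # blocked: unapproved device
print(copy_allowed("quarterly summary", "CORP-ENCRYPTED-001"))      # allowed
```

In the BCBSA scenario, either check alone (unapproved destination, or sensitive content leaving the encrypted store) could have interrupted the copy before the data ever reached the personal laptop.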

Tuesday, October 6, 2009

Need a little more verify to go with that trust

One notable aspect of the widely-reported launch of a Security Metrics Taskforce, charged with coming up with new, outcome-based standards for measuring the effectiveness of federal agency information security efforts, is a statement written by federal CIO Vivek Kundra, Navy CIO Robert Carey, and Justice CIO Vance Hitch that the group would follow a "trust but verify" approach while also fulfilling statutory requirements and driving towards real-time security awareness. This is consistent with the assumption prevalent at many federal agencies, particularly on the civilian side, that users and organizations can generally be trusted to do the right thing in terms of following policies and taking on expected responsibilities and obligations. There are many current examples (HIPAA security and privacy rules, FISMA requirements, etc.) where a major set of requirements has been enacted but no formal monitoring or auditing is put in place to make sure everyone is behaving as they should. Voluntary reporting of violations and requirements with no penalties for failing to comply can only be successful if the assumption holds that everyone can be trusted to do the right thing. The new task force would go a long way towards achieving its stated goal of better protecting federal systems if the metrics it proposes include some set of requirements for auditing compliance with the appropriate security controls and management practices. If the recommended metrics do include those aspects, there may even be an opportunity for the government to define penetration testing standards and services that could be implemented across agencies to validate the effective implementation and use of the security mechanisms they select to secure their environments.
Focusing on outcome-based metrics that agencies are ultimately left to their own to measure and track, even with real-time situational awareness, will fall short of hardening the federal cybersecurity infrastructure to the point where it is well-positioned to defend against the constantly evolving threats it faces.

Monday, October 5, 2009

Government security looks to address outcomes

In a development that should come as a welcome surprise to security watchers critical of U.S. federal information security efforts as too focused on compliance (at the expense of effectiveness), the Federal CIO Council announced last week that a new task force has been established (it held its first meeting on September 17) and has begun work on new metrics for information security that will focus on outcomes. This effort is the latest development in a groundswell of activity within both Congress and parts of the executive branch to revise the requirements under the Federal Information Security Management Act (FISMA) to put less emphasis on compliance with federal security guidance and more emphasis on results from implementing security controls. Legislation in various forms of development from both the House and the Senate would require a similar re-alignment of security measurement approaches, so the action by the CIO Council would seem to be partly in anticipation of such requirements being enacted in law. The collaborative group includes participants from several key agencies as well as the Information Security and Privacy Advisory Board (ISPAB). The schedule for the group appears quite ambitious: the task force is expected to have a draft set of metrics available for public comment by the end of November.

Friday, October 2, 2009

Latest loss of veteran data teaches more than one lesson

News this week that the personal records of as many as 70 million U.S. veterans were contained on a faulty hard drive sent out for repair by the National Archives and Records Administration (NARA) will once again serve to highlight disconnects between security and data privacy policy and practice. The public comments made by NARA officials concerning this incident also highlight some of the issues with current exceptions to data breach disclosure rules put in place by the federal government.

The security problem in this incident is that the media in question was not sanitized as it should have been according to federal and Defense Department policy. NARA had no intention of sending any data out of its custody; it merely wanted the hard drive repaired. NARA officials have defended their actions by saying that the return of hardware media such as disk drives is a routine process, and that the presence of unencrypted personal data on the drive doesn't violate any rules. The situation was brought to light through the actions of an IT manager who reported it to NARA's inspector general. NARA had not disclosed the loss of records to federal authorities (which it is required to do under federal regulations even if it believes no actual breach of personal information has occurred), and also chose not to notify veterans whose records might be affected. The manager who reported the breach and agency officials appear to differ markedly on whether the situation constitutes a breach: on the one hand the manager characterized the loss as "the single largest release of personally identifiable information by the government ever," while the official position stated by the agency is "NARA does not believe that a breach of PII occurred, and therefore does not believe that notification is necessary or appropriate at this time."

The position articulated by NARA calls to mind the "harm" provision in the personal health data breach notification regulations issued by HHS and the FTC that went into effect last week. In a change from the language in the HITECH Act that mandated the regulations, the final version of the HHS rules includes an exception to the breach notification requirement if the organization suffering the loss of data believes that no harm will be caused by the loss. (The FTC rules have no such exception.) The self-determination of harm, and the incentive organizations have to minimize their estimates of harm in order to avoid disclosing breaches, have angered privacy advocates and seem likely to result in under-reporting of breaches. The difference between the common sense perspective and the official NARA position on this latest data loss is strong support for the argument that leaving the determination of a breach's significance up to the organization suffering the loss will result in individuals not being notified that their personal information may have been compromised.