
Sunday, November 29, 2009

More options, no resolution on bridging public and private sector security standards

As regularly noted in this space, one of the big points of disagreement in attempts to achieve greater levels of information integration, particularly health information exchanges, is how to reconcile the disparate security and privacy standards that apply to government agencies and private sector entities (FISMA still being touted as best security for health information exchange; No point in asking private entities to comply with FISMA). The debate has most often been cast as one about where to draw the boundaries of the detailed security control requirements and other obligations to which federal agencies are bound under FISMA. When information exchanges involve data transmission from the government to private entities, the law is clear only in cases where the private entity is storing or managing information on behalf of the government. When the intended use of the data is for the private entity's own purposes (with the permission of the government agency providing the data), the text of the FISMA legislation is pretty clear that the private sector entity is not bound by its requirements, but the agency providing the information still has obligations with respect to the data it sends out, both at the time of transmission and after the fact.

At the most recent meeting of the ONC's Health IT Standards Committee on November 19, federal executives including VA Deputy CIO Stephen Warren and CMS Deputy CISO Michael Mellor spoke of the need to beef up federal information system security protections when those systems will be connected to non-government systems, and again endorsed the position that government security standards under FISMA are stricter than the equivalent standards that apply to private sector entities, including those prescribed by HIPAA. In the past year, despite the creation of a government task group formed specifically to address federal security strategies for health information exchange, there has been little progress toward a common set of standards that might apply to both public and private sector entities involved in data exchange.

An interesting entrant into this arena is the Health Information Trust Alliance (HITRUST), a consortium of healthcare industry and information technology companies that aims to define a common security framework (CSF) that might serve as the point of agreement for all health information exchange participants. Ambitious, to be sure, but the detail provided in the CSF itself and the assurance process that HITRUST has defined for assessing the security of health information exchange participants and reporting compliance with the framework should serve at least as a structural model for the security standards and governance still under development for the Nationwide Health Information Network (NHIN). The HITRUST common security framework has yet to achieve significant market penetration, especially in the federal sector, perhaps in part due to the initial fee-based business model the Alliance adopted for the CSF. In August of this year HITRUST announced that it would make the CSF available at no charge, and launched an online community called HITRUST Central to encourage collaboration on information security and privacy issues in the health IT arena. (In the interest of full disclosure, while SecurityArchitecture.com has no affiliation with HITRUST, some of our people are registered with HITRUST Central.) The point here is not to recommend or endorse the CSF, but simply to highlight that there is a relevant industry initiative focused on some of the very same security issues being considered by the Health IT Policy Committee and Health IT Standards Committee.

Tuesday, November 24, 2009

Revised SP800-37 not ideal, but an improvement

NIST has released for public comment a revision to its Special Publication 800-37, "Guide for Applying the Risk Management Framework to Federal Information Systems." This document was formerly the "Guide for the Security Certification and Accreditation of Federal Information Systems," so the first obvious change is in the title and corresponding focus of the publication. The most significant change is an explicit move away from the triennial certification and accreditation process under which federal information systems are authorized to operate, in favor of a continuous monitoring approach that seems to recognize the importance of achieving and maintaining awareness of current security status at any given point in time. While some of the more interesting revised elements may make their way into a future post, of equal interest at the moment is the question of how much the altered approach in 800-37 may do to improve the security of federal information systems, and more generally of federal agency environments.

As noted by more than one expert (although few as forcefully, bluntly, or eloquently as Richard Bejtlich), continuous monitoring of security controls is a far cry from continuous threat monitoring, the latter of which demands more attention from the government in light of the dramatic rise in reported security incidents over the past three years. Among other things, FISMA has specific requirements that should result in agencies engaging in threat monitoring, such as "periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, of which such testing shall include testing of management, operational, and technical controls of every information system" identified in the agency system inventory required under OMB A-130 (§3544(b)(5)) and "procedures for detecting, reporting, and responding to security incidents" (§3544(b)(7)). Generally speaking, every agency has an incident response team or comparable capability, and threat monitoring using intrusion detection tools is one of several approaches many of those IR teams already implement. So more explicit guidance to agencies (from NIST or anyone else) on doing these things effectively on an enterprise-wide basis could shore up a lot of the deficiencies that come from a system-level emphasis on controls alone.

Regardless of how all the pending proposals for revising or strengthening FISMA turn out or which ones pass, it's not realistic to suggest that the government should completely abandon its current security practices in favor of a new approach emphasizing field testing of its controls (field testing being one of the ways agencies could test and evaluate the effectiveness of their security controls). The revised 800-37 has to be considered at least a step in the right direction, because the current triennial documentation exercise does nothing to harden an agency's security posture. A move to continuous monitoring narrows the gaping loophole that current system authorization policy leaves open, and is an explicit step toward achieving situational awareness. There's a long history of ambitious and revolutionary initiatives failing in the federal government, and a corresponding (cynical yet accurate) view that "all successful change is incremental." Let's not take NIST's decision not to recommend a wholesale replacement of current security program operations to mean that there couldn't or shouldn't be improvements sought within the sub-optimal control-driven model.

Continuous monitoring of security controls is not the same thing as intrusion detection or prevention, and any effort to mandate those activities had better be well thought out. Putting intrusion detection tools in place will yield no tangible security benefit if agencies do not also have sufficiently expert security analysts to make sense of the alerts the tools produce. Simply requiring threat monitoring activities can quickly turn them into another compliance control, or into the source of a false sense of greater security. Where intrusion detection and prevention are concerned, it's disingenuous to fault individual agencies for not moving to implement continuous threat monitoring when they have no current capability to make sense of the information. IDS or IPS is of no use (and may be counter-productive) without the corresponding experts to analyze the data produced by the tools, tailor detection rules, and tune operations to minimize false positives and separate noise from actual threats.

On the intrusion detection front, the government is moving headlong in this direction, but has no intention of leaving the management of such capabilities up to individual agencies. Under the Einstein program, sponsored by DHS and to be run by the National Security Agency, all federal network traffic will be monitored centrally, not only for intrusion detection but also for prevention, in the form of blocking traffic or disabling network segments when malicious activity is detected. Monitoring all federal networks becomes technically feasible in part because of the Trusted Internet Connections program, under which the entire federal government ostensibly will consolidate its Internet points of presence to fewer than 100, and because of plans under Einstein to place sensors within the physical environments of the major providers of telecommunications infrastructure to the federal government.

Sunday, November 22, 2009

Trust in cloud service providers no different than for other outsourced IT

With the private sector embracing outsourced IT services and the federal government apparently eager to follow suit, it should come as no surprise that both proponents and skeptics of IT service outsourcing (now under the new and more exciting moniker of "cloud computing" instead of the more pedestrian "software as a service" or "application service provider") are highlighting positive and negative examples of the potential of this business model. Security remains a top consideration, particularly when discussing public cloud computing providers, but some of the security incidents brought to light recently actually do more to emphasize the similarity between cloud computing requirements and those associated with conventional IT outsourcing. For any organization to move applications, services, or infrastructure out of its own environment and direct control and give that responsibility to some other entity, the organization and the service provider have to establish a sufficient level of trust, which itself encompasses many factors falling under security and privacy. The basis of that trust will vary for different organizations seeking outsourcing services, but the key for service providers will be to ensure that once that level of trust is agreed upon, the provider can deliver on its promises.

To illustrate this simple concept, consider the case last month in which Microsoft notified T-Mobile Sidekick users that the Danger service providing data storage and backup for the Sidekick had failed, with all data lost without hope of recovery. Despite that dire initial message, it turned out that the data was recoverable after all, but the incident itself suggests a breakdown in internal data backup procedures, just the sort of thing that would be addressed in the service level agreements negotiated between outsourcing customers and cloud computing providers. While any such SLA would likely have financial or other penalties should the provider fail to deliver the contracted level of service, without confidence that providers can actually do what they say they will, even customers who are compensated for their losses are unlikely to stick with the providers over time. There was actually some debate as to whether this specific incident was really a failure of cloud computing or not, but the semantic distinction is not important. Organizations considering outsourcing their applications and services need to assess the likelihood that the outsourced service provider can implement and execute the processes and functions on which the applications depend, at least as reliably as the organizations themselves could if they kept operations in-house.

Even where the risks in question are more specific to the cloud model (such as the cross-over platform attacks to which logically separate or virtual applications may be vulnerable), the key issues are the same as those seen in more conventional environments. The risk of application co-location without sufficient separation exists just as surely in internally managed IT environments as it does in the cloud. A fairly well-publicized example occurred several years ago in the U.S. Senate, when Democratic party documents stored on servers that were supposed to be tightly access controlled were instead available to GOP staffers; the problem was traced to poor configuration of servers used by the Senate Judiciary Committee that were shared by members of both parties. The Senate has since implemented physical separation of computing resources in addition to logical access controls based on committee membership and party affiliation.

These examples highlight the importance of maintaining focus on fundamental security considerations — like server and application configuration, access controls, and administrative services like patching and backup — whether you're running your own applications on your own infrastructure or relying on the cloud or any other outsourcing model.

New research identifies additional risks for applications in the cloud

With great attention continuing to be focused on the potential for cloud computing services to re-shape the way public and private sector organizations manage their IT infrastructure and computing environments, a paper published this month by researchers from MIT and UCSD may provide more good reasons for caution in moving to outsourced services provided by prominent third-party cloud computing vendors like Amazon, Microsoft, and Google. Based on an analysis conducted on Amazon's Elastic Compute Cloud (EC2), but which the authors suggest is generally applicable to other providers, the paper identifies a number of vulnerabilities that can be exploited against cloud-hosted applications running in virtual machines multiplexed on the same physical server. The authors evaluated the provisioning of new virtual machines and identified ways to map the cloud infrastructure so that a theoretical attacker could effectively place an attacking virtual machine instance on the same server as the virtual machine hosting the application the attacker sought to compromise. Such co-residence might understandably offer a malicious user the opportunity to launch "side channel" attacks against whatever other applications happen to be running on the same server, but the research presented in the paper indicates that an attacker looking to compromise a specific service can do so as well, albeit with more time and money needed to succeed.

It's important to note that the authors work under the assumption that the cloud computing service provider is trusted. There are known risks such as the compromise of provider staff or attacks directed at hypervisors or other virtual machine administrative tools, but the attack vector on which the paper focuses is feasible even when the integrity of the provider's security environment is maintained. The threat model used in the research the paper summarizes also does not address direct attacks against applications; these threats exist both for cloud-hosted and conventionally hosted applications, and there is no theoretical increase in risk to a network-accessible application that happens to be running on outsourced infrastructure. Instead, as the authors themselves note, the research focuses "on where third-party cloud computing gives attackers novel abilities; implicitly expanding the attack surface of the victim" (Ristenpart, T., Tromer, E., Shacham, H., & Savage, S., 2009; emphasis in the original).

Cloud computing service providers might do well to take note both of the issues presented in the paper and of the recommendations the authors make to mitigate the risks they found. These recommendations include revisions to business and administrative practices as well as technical defensive measures.

Reference:

Ristenpart, T., Tromer, E., Shacham, H., & Savage, S. (2009, November). Hey, you, get off of my cloud: Exploring information leakage in third-party compute clouds. Paper presented at the 16th Association for Computing Machinery Conference on Computer and Communications Security, Chicago, IL.

Friday, November 20, 2009

Health Net breach highlights weaknesses in state-level breach laws

While affected Connecticut residents and authorities are understandably upset about the recently reported loss by regional health plan provider Health Net of personal information on all 446,000 Connecticut customers served by the plan, the six-month delay by the company in making the breach public is seen as especially egregious. Connecticut has had a breach disclosure law on the books since 2006, but the statute does not set an explicit timeframe in which disclosure must occur, instead saying only that "disclosure shall be made without unreasonable delay" (Conn. Gen. Stat. §36a-701b). The law also includes a provision by which disclosure is not required if it can be determined that the breach is not likely to result in harm to the individuals whose information has been lost, but this exception still requires notification of and consultation with appropriate government authorities to arrive at the determination that no harm will be done. It appears that Health Net did not follow the spirit of the law in either respect, particularly given the company's conclusion that the data, contained on a portable disk drive and stored in a format proprietary to an application Health Net used to access it, was not encrypted and therefore could probably be read by anyone who acquired it.

This incident occurred before the federal data breach disclosure provisions of the HITECH Act went into effect (Connecticut's law is not limited to health information, but includes all personal information), but under those rules Health Net would be subject to federal penalties as well as any punitive action taken at the state level. The health data breach disclosure rules use the same "without unreasonable delay" language found in the Connecticut statute, but add a maximum time of 60 days from the date the breach is discovered (74 Fed. Reg. 42749 (2009)). Of course, the federal rules also include a harm exception like the one Connecticut has, so there are limits to the extent to which federal-level regulations remove subjectivity surrounding data breach disclosures, but the Health Net example serves to highlight the need for specificity in statutes to eliminate some of the room to equivocate that data-losing organizations now have.

Proposed federal P2P ban might extend to personal computers

The latest development in the wake of the unauthorized release of information about a House ethics investigation is newly proposed legislation, the Secure Federal File Sharing Act (H.R. 4098), that would ban the use of peer-to-peer file sharing software in the federal government. As noted in many articles about the draft legislation, the bill would not only prohibit government employees and contractors from installing or using P2P technology on federally owned computers or those operated on the government's behalf, it would also set policies constraining the use of file sharing software on non-government computers used for home-based remote access or teleworking involving federal systems. This is of course not the first example of government extending policy into employees' homes, but it demonstrates quite clearly the importance government agencies are placing on preventing data loss or disclosure.

Despite the reactive nature of H.R. 4098, there are already federal guidelines in place on the issue of securing computers and other electronic devices used for teleworking or remote access. NIST has two Special Publications on this topic: SP800-114, published in 2007, specifically addresses security of devices used for remote access, and SP800-46, recently revised and updated in June 2009, somewhat more broadly addresses telework security issues. Both of these documents mention peer-to-peer technology as a potential security risk. SP800-114 is probably the most relevant to the new House bill, as it includes specific sections on securing home network environments. What the special publications don't do that the legislation would is establish formal policies (as opposed to recommended practices) related to the use of file sharing software. The challenge with establishing and enforcing security policies for non-government locations like employees' homes is both in making employees aware of what they need to do (and not do) to avoid becoming a vulnerability and in giving them the tools and skills to be able to implement appropriate procedures and controls in their own computing environments.

Wednesday, November 18, 2009

CDT offers a good explanation of user-centric identity issues

The Center for Democracy and Technology (CDT) has a good summary up on their site detailing a variety of policy issues related to user-centric identity management. There is a lot of attention in the market focused on federated identity management in general, and user-centric identity in particular, but as CDT and others point out there are still plenty of important security and privacy considerations to be addressed. This discussion is in the same general vein as the rise of claims-based identity management, which got a boost in 2007 when Microsoft added support for that identity model in its Geneva platform and made it part of the .Net framework. The topic is timely and relevant once again in the health IT context, as the Center for Information Technology at the National Institutes of Health last fall engaged in a pilot project to assess the use of open identity in the federal government. This pilot is one among several launched in coordination with the government-wide Identity, Credential, and Access Management (ICAM) initiative.

Among the interesting reading available through the CDT site is a recently released white paper that offers a detailed analysis of salient issues with user-centric identity management, with a focus on governance and policy. Also linked on the CDT page is an ICAM-produced document called the Trust Framework Provider Adoption Process, which details a process and set of assessment procedures that federal entities can follow to evaluate trust frameworks used by third parties seeking to serve as identity providers and credential issuers in support of federated identity management capabilities. The TFPAP is intended to help determine whether credentials issued by such third parties will satisfy the e-authentication requirements established by the government (and described in NIST Special Publication 800-63), at least at E-Authentication Levels 1 and 2 and non-PKI Level 3. The ICAM document provides a lot of useful technical detail on relevant e-authentication requirements, and as a side benefit offers an interesting example of using a technically focused approach to establish and consistently evaluate the trust models represented by different trust frameworks.

New OWASP Top 10 RC places injection at the top of the list

The Open Web Application Security Project (OWASP) has published the first release candidate for its "Top Ten Most Critical Application Security Risks," which will supersede the previous version published in 2007. The OWASP project team made an explicit shift from the vulnerabilities that were the focus of previous Top Ten lists to risks, in order to call attention to the issues likely to have the greatest impact on organizations. As described in a summary presentation separate from the RC file itself, for 2010 "Injection" takes the top position on the list, while "Cross-Site Scripting" drops to second place from its first position in 2007 (as an interesting side note, the "Unvalidated Input" vulnerability that topped the first OWASP Top Ten list in 2004 is no longer among the issues addressed). Most of the 2007 vulnerabilities remain in some form on the 2010 risk list, with new additions for "Unvalidated Redirects and Forwards" and the re-appearance of "Security Misconfiguration," which was absent from the 2007 list but was part of the 2004 list as "Insecure Configuration Management."

The focus on injection (not just SQL injection, but any interpreter that can be made to execute commands inserted in the data submitted to the application) reflects a combination of the large number of applications that are still vulnerable to this attack and the severe impact that can result when injection vulnerabilities are exploited. The primary mitigation for this problem boils down to input handling, whether by confining database access to parameterized queries or stored procedures or by encoding input before it is sent to the command interpreter; these are not technically complicated measures, so the prevalence of injection vulnerabilities defies easy explanation.
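
To make the point concrete, here is a minimal, hypothetical sketch (in Python, not taken from the OWASP document) showing why a bound parameter defeats the classic SQL injection pattern while naive string concatenation does not:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO patients (name) VALUES ('Alice')")

    user_input = "Alice' OR '1'='1"  # attacker-supplied value

    # Vulnerable: the input is concatenated into the SQL text, so the quote
    # characters change the meaning of the query and every row comes back.
    vulnerable_sql = "SELECT * FROM patients WHERE name = '" + user_input + "'"
    print(conn.execute(vulnerable_sql).fetchall())

    # Safer: the input is passed as a bound parameter and is treated strictly
    # as data, never as part of the command the interpreter executes.
    safe_sql = "SELECT * FROM patients WHERE name = ?"
    print(conn.execute(safe_sql, (user_input,)).fetchall())  # returns no rows

The same principle applies to any interpreter, not just SQL: keep attacker-influenced data out of the command channel, whatever form that channel takes.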

At first glance, the most surprising item dropped from the 2007 list may be "Information Leakage and Improper Error Handling," given the current market emphasis on data loss prevention, but this vulnerability refers to situations where systems or applications divulge too much information about their configuration, operational characteristics, or other aspects that might yield details attackers would find useful in compromising the system. What has been carried forward from previous iterations of the Top Ten list are detailed descriptions of the ways the risks are manifested and how the underlying vulnerabilities may be exploited, as well as prescriptive guidance on ways to mitigate each risk, including design-level proactive actions where applicable.

New GAO report and tips from NSA on ways to improve cybersecurity

A new report released yesterday by the Government Accountability Office (GAO) reiterates existing security issues and weaknesses across the federal government, and includes a dozen recommended actions to improve federal cybersecurity reflecting the results of panel discussions with public and private sector experts. It's an ambitious list, but given the persistence of some of the problems, if the GAO can provide a roadmap that senior policy officials like the still-to-be-named cybersecurity czar can use to focus attention and direct resources on a set of priorities, there may be an opportunity to make some progress in these areas.
  1. Develop a national strategy that clearly articulates strategic objectives, goals, and priorities.

  2. Establish White House responsibility and accountability for leading and overseeing national cybersecurity policy.

  3. Establish a governance structure for strategy implementation.

  4. Publicize and raise awareness about the seriousness of the cybersecurity problem.

  5. Create an accountable, operational cybersecurity organization.

  6. Focus more actions on prioritizing assets, assessing vulnerabilities, and reducing vulnerabilities than on developing additional plans.

  7. Bolster public-private partnerships through an improved value proposition and use of incentives.

  8. Focus greater attention on addressing the global aspects of cyberspace.

  9. Improve law enforcement efforts to address malicious activities in cyberspace.

  10. Place greater emphasis on cybersecurity research and development, including consideration of how to better coordinate government and private sector efforts.

  11. Increase the cadre of cybersecurity professionals.

  12. Make the federal government a model for cybersecurity, including using its acquisition function to enhance cybersecurity aspects of products and services.

On a timely parallel note this week, NSA Information Assurance Director Richard Schaeffer Jr. testified before the Senate Judiciary Committee's Subcommittee on Terrorism and Homeland Security that if agencies focused their security efforts on instituting best practices, standard secure configuration settings, and good network monitoring, those actions alone could guard against the majority of threats and cyberattacks agencies face. This sort of 80/20 rule is not intended to obviate the need for risk assessments or comprehensive implementation of effective security controls in accordance with FISMA and other federal requirements, but the message from NSA seems to be a clear call to agencies to get the basics right.
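
To illustrate the sort of "basics" at issue, the sketch below is a purely hypothetical example (not anything prescribed by NSA or in the testimony) of the kind of automated check an agency might run against a standard configuration baseline; it flags a few risky SSH server settings on a Linux host:

    #!/usr/bin/env python3
    """Toy baseline check: flag sshd settings that deviate from a hardening baseline."""

    # Expected values under a hypothetical hardening baseline.
    EXPECTED = {
        "permitrootlogin": "no",
        "passwordauthentication": "no",
        "protocol": "2",
    }

    def check_sshd_config(path="/etc/ssh/sshd_config"):
        findings = []
        try:
            with open(path) as f:
                lines = f.readlines()
        except OSError as err:
            return ["could not read %s: %s" % (path, err)]

        # Parse "Keyword value" pairs, ignoring blank lines and comments.
        actual = {}
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                actual[parts[0].lower()] = parts[1].strip()

        # Report anything missing or different from the baseline.
        for key, expected in EXPECTED.items():
            value = actual.get(key)
            if value is None or value.lower() != expected:
                findings.append("%s is %r, baseline expects %r" % (key, value, expected))
        return findings

    if __name__ == "__main__":
        for finding in check_sshd_config():
            print("FINDING:", finding)

Real baseline checking covers far more settings and platforms (the Federal Desktop Core Configuration for Windows desktops, for example), but the point is that verifying agreed-upon settings is cheap to automate and addresses a large share of routine exposure.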

Friday, November 13, 2009

Lack of readiness to adopt HITECH requirements shouldn't be a show-stopper

There are lots of new and improved privacy and security requirements scheduled to come into effect over the next few months, including enhancements to existing HIPAA security and privacy provisions added by the HITECH Act, which passed in February as part of the American Recovery and Reinvestment Act. As the time draws near when both HIPAA covered entities and some non-covered entities will need to comply with the new regulations, many indications point to a general lack of readiness among these organizations to meet HITECH's requirements. A survey conducted by the Ponemon Institute and published this week by its sponsor, Crowe Horwath, found that the vast majority of healthcare organizations surveyed do not think they are ready to comply with the new security and privacy requirements in the HITECH Act. While it should be noted that Crowe Horwath has a business interest in this research as a provider of risk management and compliance consulting services, the near consensus among survey respondents and the troubling lack of resources available to achieve compliance raise significant questions about realistic expectations for compliance with and enforcement of the new requirements. On a similar theme, the effective date for Massachusetts' sweeping personal information security regulations in 201 CMR 17 has been pushed back twice, first from January 1, 2009 to May 1, and then to January 1, 2010, in order to give affected organizations more time to understand what is needed to comply and to put appropriate measures in place.

What is less often cited when focusing on efforts to comply with new rules is the extent to which organizations are (or are not) already complying with existing regulations and requirements such as those in the HIPAA privacy rule and security rule. The ability of organizations to reach and maintain compliance has varied greatly with organization size: small organizations tend to have less ability to dedicate staff or financial resources to compliance efforts, or to have personnel with explicit responsibility for information security and privacy. The recent survey indicated that a large majority of organizations do not currently comply with all mandated practices, such as the 79 percent of respondents that do not conduct regular audits or independent assessments of their compliance or of the adequacy of their security and privacy programs.

One way to approach this situation is of course to delay implementation dates. It may make more sense, however, to stick to the schedule prescribed in the HITECH Act for when requirements take effect, and to adopt an approach to organizational monitoring and compliance enforcement that takes into account the time, resources, and level of effort required to meet the regulations. Current health IT initiatives almost always include phased or incremental rollout strategies, so a similar approach could be followed for security and privacy compliance. One potential benefit of keeping to the original implementation schedule is that as the subset of covered organizations that are ready for HITECH formalize their programs, their example can be leveraged to help less prepared organizations get where they need to be to comply with the law.

European Union going fully opt-in on cookies

In another example of stronger individual-level privacy protections in the European Community compared to those in the United States, the EU Council this week approved a law that requires online users to be asked for explicit permission before a cookie is stored on a user's system. This is a shift from existing EU telecommunications law, which requires only that users be given a way to opt out. While the intent of the law is clearly to protect users from a variety of potentially invasive practices such as online behavioral tracking, critics of the provision have rightly pointed to the larger ramifications of the law for economic drivers of Internet operations such as online advertising, as well as its impact on the end-user experience if web site visitors are constantly interrupted by prompts seeking their consent to place cookies. Even when ostensibly done to protect users, the response to similar security-driven approaches such as the user account control prompts in Windows Vista suggests that this sort of interruption of usability in the name of security crosses the line of what users find acceptable.

Are skeptics on federal data breach law missing the point?

As noted in this space last week, based on recent activity in the Senate and similar if less immediate legislative proposals in the House of Representatives, it seems possible that Congress will move ahead with enactment of a federal-level data breach disclosure law. Given the patchwork of state-level and domain-specific laws that already exist, there is clearly potential to standardize and perhaps simplify the data breach picture, at least by establishing a minimum threshold, which might in turn translate into the use of more proactive data protection practices and technologies by organizations subject to such regulations. In a countering view, CSO magazine conducted its own informal survey as to whether a federal cybersecurity law was the right approach, and saw responses leaning heavily toward an answer of "no." Before extending this sentiment to the current efforts on national data breach disclosure standards, however, it would be a good idea to consider just how much "cybersecurity" is really in the proposed legislation.

The responses highlighted by CSO show a lot of skepticism about the government's ability to legislate anything that results in better security for those subject to the laws. Without debating the merits of these arguments (there surely are some merits), it might be useful for the survey respondents to remember that the proposed laws aren't primarily intended to increase the level of security measures organizations apply to data and systems to reduce breaches, but instead to require that when breaches occur, those affected by the breaches must be told. Hopefully such a law will provide an incentive to organizations to take steps to avoid breaches, but aside from granting exceptions in cases where lost data has been rendered unusable through encryption or comparable mechanisms, the Congressional bills don't even attempt to mandate any particular security practice or use of technology. The provisions in Leahy's S. 1490 that increase the penalties for identity theft logically can only be seen as an additional disincentive to behavior already prohibited by current law. The absence of technical specificity is a standard feature of security laws, as Congress (quite rightly) doesn't believe it has the expertise to specify technical mechanisms and certainly doesn't want to be in the business of promoting one technology over another.

We read the proposed legislation in the context of the greater transparency sought by the current administration on many fronts. Requiring data breach disclosures is a way to make data-holding organizations accountable for their security lapses, and according to the sponsors of the bill is driven largely by consumer protection concerns rather than by a desire to augment data stewardship requirements or strengthen data protection practices. Those who argue that the security realm doesn't need more enforcement mechanisms are presumably working under the assumption that commercial and public sector entities can be trusted to do the right thing, relying on the very sort of trust model that has defined approaches to complying with FISMA, HIPAA, FERPA, and other major security and privacy laws. That assumption has more serious implications for organizational security postures than does the prospect of federal-level laws addressing data security and privacy.

Monday, November 9, 2009

Federal health information exchange attention still focused on reconciling security requirements

This week brought another opportunity for federal health IT executives working on information exchange to focus attention on the challenge of reconciling the different security and privacy laws applicable to federal and non-federal entities. As seen and heard previously, and again this week at an Input event, there remains an implicit bias on the part of the feds to assert that being subject to FISMA somehow translates to more rigorous security. The continuity of this theme, plus what seem to be over-simplifications in the content of the article by the usually outstanding and insightful Mary Mosquera, prompted the following reply, submitted online to Government Health IT:
The statement in the article, "SSA does not provide healthcare, so HIPAA regulations do not apply," only addresses one end of the information exchange being described. MedVirginia is absolutely a HIPAA-covered entity, even if SSA is not. This puts different obligations in play for each participant in the exchange, which is the crux of the problem. Those quoted in this article (once again) imply that because FISMA is required for the federal government, government agency security is a stronger constraint (more specific and more detailed, if not more robust or actually "better") than the security requirements that apply to non-government entities. This is a false argument. Sankaran's statement that "we can't have the government having to check that all these systems are compliant" is particularly nonsensical. The only FISMA "auditing" that occurs now is internal, as agency inspectors general conduct FISMA compliance reviews of their own agencies. There is no independent audit of agencies for FISMA compliance, and there is also no penalty imposed (other than a bad grade on a scorecard) for an agency's failure to comply with FISMA requirements.
The scenario described by FHA lead Vish Sankaran, in which a small medical practitioner would be challenged to comply with all the security control requirements that would apply under the law, is a red herring too. Small practitioners are already bound by HIPAA as covered entities, just as large hospitals are, and to the extent that these offices use computerized records (the standard industry term is ePHI, or electronic protected health information), they must already adhere to the requirements of the HIPAA security rule. Sankaran implies that by exchanging data with government agencies these practitioners would become subject to FISMA, but this is not the way the law works. Non-government entities like contractors are only bound by FISMA if they hold or process data "on behalf of" the federal government; merely storing or using copies of government data does not bring a private health provider under the coverage of FISMA, even if that data is owned by the government. The situation described in this article, in which federal agencies would want to hold private providers to FISMA's requirements, may in fact be what federal health stakeholders want, but it is simply not a requirement under the law.

Saturday, November 7, 2009

More Congressional progress on data breach laws

Thanks to the action of the Senate Judiciary Committee this week, it looks like we have not one but two bills addressing data breach notification requirements that would apply broadly to commercial entities. The measure introduced as the Personal Data Privacy and Security Act (S. 1490) and sponsored by committee chairman Sen. Patrick Leahy is somewhat broader in scope than the Data Breach Notification Act (S. 139) sponsored by Sen. Dianne Feinstein, in that Leahy's bill addresses penalties and enforcement mechanisms for identity theft as well as setting data breach notification requirements. There is a great deal in common between the two bills, so it seems likely (if there is momentum to bring the bills before the full Senate for deliberation) that they will be combined into a single piece of legislation. Sen. Leahy has been particularly vocal in suggesting that there is growing public demand for a national data breach law, and seems to think the appetite exists in Congress to take up the measure this year or next, despite the fact that similar bills were first introduced four years ago and have never made it through the legislative process to a full vote. Let's not forget that before we can have a law we need action from the House too; in April Rep. Bobby Rush introduced the Data Accountability and Trust Act (H.R. 2221), in essentially the same form as an identically named piece of legislation introduced in the House during the previous Congress. The House bill was considered over the summer by the House Committee on Energy and Commerce's Subcommittee on Commerce, Trade, and Consumer Protection and ordered reported out to the full House at the end of September. So the key question now is, when will one or both sides of Congress take up these bills for consideration and action by the full chambers?

Friday, November 6, 2009

HIMSS survey shows health IT organizations not ready for security compliance

The results of a survey conducted recently by HIMSS and Symantec and reported this week suggest that a majority of healthcare organizations are not yet able to comply with security and privacy requirements and standards, including those in the HITECH Act. Interesting findings include the fact that fewer than half of the 196 health IT professionals surveyed work for organizations that have a formally designated chief information security officer (federal agencies are required to have such a position under FISMA, but there is no comparable requirement on private sector organizations), and a similar number do not have plans or capabilities to respond to security incidents if they occur. No less surprising but still of concern is the apparent choice by only about a third of the organizations represented by survey respondents to implement available security technology such as encryption of data in transit. The use of encryption for stored data is still not widespread, which is probably to be expected given the small percentage of health technology vendors who offer this capability (it is of course available in most modern database management systems, but applications must be able to work with the encryption features of the DBMS). This particular issue has gained greater visibility since the passage of the HITECH Act and implementation of the personal health data breach notification rules, both of which provide an exception to disclosure requirements if the data subject to a breach is unreadable, unusable, or otherwise indecipherable; in other words, encrypted.
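
As a small illustration of what it means for applications to "work with" encryption of stored data, the hypothetical Python sketch below encrypts a sensitive field before it ever reaches the database and decrypts it on the way out; the same division of labor applies whether the cryptography lives in the application, as here, or in the DBMS's own encryption features (the cryptography package and the sample values are assumptions for the example, not drawn from the survey or from any particular vendor's product):

    import sqlite3
    from cryptography.fernet import Fernet  # third-party package: cryptography

    # In a real deployment the key would come from a key management service,
    # never from source code or the database itself.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, ssn BLOB)")

    # Encrypt before storing: the database only ever holds ciphertext.
    conn.execute("INSERT INTO records (ssn) VALUES (?)",
                 (cipher.encrypt(b"123-45-6789"),))

    # Decrypt after reading: without the key, a stolen copy of the database
    # (or of the disk it lives on) is unreadable, unusable, and indecipherable.
    stored = conn.execute("SELECT ssn FROM records WHERE id = 1").fetchone()[0]
    print(cipher.decrypt(stored))

The operational cost is exactly the one the survey hints at: every application that touches the data has to know about the keys and the encrypt/decrypt steps, which is why vendor support for the capability matters.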

Thursday, November 5, 2009

National data breach law on the way?

Perhaps taking advantage of the increased attention on security and privacy issues, including the implementation of new data breach disclosure rules by both HHS and the FTC applicable to personal health information, Senator Patrick Leahy in July introduced S. 1490, the Personal Data Privacy and Security Act, which the Judiciary Committee began considering this week. The bill would establish standards for data privacy and security programs to protect personally identifiable information, applicable to any business entity not already subject to Gramm-Leach-Bliley or HIPAA that collects, uses, stores, transmits, or disposes of records on 10,000 or more people. Entities covered under the proposed legislation would be obligated to implement data privacy and security safeguards and practices, or risk financial penalties of as much as $5,000 per day while in violation. In terms of data breaches, organizations subject to the proposed legislation would have to "notify any resident of the United States whose sensitive personally identifiable information has been, or is reasonably believed to have been, accessed, or acquired." The language as drafted does provide exemptions from disclosure requirements in certain circumstances, including cases where there is no significant risk of harm to the individuals whose personal information was part of the breach. In Leahy's bill, however, the determination that no significant risk exists is based on the use of encryption or other mechanisms to render the information indecipherable; these technical stipulations do not provide the same subjective "out" contained in the final version of the HHS rules for personal health information breach disclosures. Other provisions in the bill include strengthened penalties for identity theft, the application of racketeering laws to identity theft, and a requirement that credit reporting agencies receive data breach notifications, in addition to the requirement that individuals be notified when their personally identifiable information has been disclosed. The most challenging part of the bill as drafted may be the determination of appropriate safeguards; a similar provision in the HIPAA security rule resulted in the need to develop a formal set of security controls to deliver the safeguards called for in the law.

Widespread security problems self-reported at Interior

In a sharp departure from the more typical agency-level FISMA self-assessments, the internal FISMA audit by the Inspector General of the Department of the Interior reveals serious systemic problems in DOI's security management, with blame focused on ineffective governance, under-skilled staff, and the failure of bureaus to adhere to departmental and government-wide guidance. What is interesting about this latest example of poor security program management is not the findings themselves so much as the fact that we don't see more reports of this type, since the structural deficiencies cited by the Interior IG are common in other agencies. Among the key problems highlighted was the way Interior's security officers tend to push security responsibility out to regional managers, instead of maintaining central oversight at the CISO level. One IG recommendation was therefore to escalate the reporting relationship of the Department CIO so that the CIO reports directly to the Secretary, rather than the current organizational structure that puts the CIO under the Assistant Secretary for Policy, Budget, and Management. Having the CIO (and by extension, the CISO, who under FISMA is supposed to report to the CIO) a few layers down in the organization, rather than reporting to the Secretary, is hardly unusual: at agencies such as DHS, State, Treasury and HHS, the CIO reports to an executive responsible for management (the Undersecretary for Management at DHS and State; the Assistant Secretary for Management at Treasury; and the Assistant Secretary for Administration and Management at HHS). By contrast, at both the VA and DOD, the CIO is an Assistant Secretary. Judging by other agencies, it would seem less important to whom the CIO reports, and more important just how much delegation of security responsibility is allowed below the bureau level. Any decentralized or federated department will face security management challenges due to differing risk tolerances (and possibly levels of maturity in applying risk management practices), so without strong top-down guidance and enterprise standards for security, findings such as those seen at DOI aren't very surprising.

Security and privacy going global

Members of Congress show no signs of letting up in efforts to revise or reform or extend various information security regulations. Ideas about updating FISMA — particularly from Senators like Olympia Snowe, John Rockefeller, and Tom Carper — have received a lot of attention this year, as have debates about the appropriate location, role, and reporting structure for whatever individual or position will take top responsibility for federal cybersecurity management and oversight. Now in the House comes the Cybersecurity Coordination and Awareness Act, which among other provisions would assign NIST, already responsible for producing security standards and guidance under FISMA, the task of collaborating with international organizations on security standards. The bill, reported out yesterday by the Technology and Innovation Subcommittee of the House Committee on Science and Technology, might represent a further driver for NIST's ongoing work to compare and align (if not actually harmonize) the NIST Special Publication 800-53 security control framework with the ISO/IEC 27000 series of controls.

International cooperation on security issues seems to be a theme this week. A global conference on data privacy rules convened in Spain this week, attended by hundreds of delegates from different nations, including Homeland Security Secretary Janet Napolitano, who addressed the International Conference of Data Protection and Privacy Commissioners on Wednesday, stressing the importance of information sharing among nations to improve security for all nations and defend against modern global threats.

Tuesday, November 3, 2009

Congress and HHS continue to disagree on health data breach disclosure rules

The new federal health information data breach disclosure rules went into effect in September, but as HHS works on finalizing another set of HIPAA rule changes (this time about penalties for HIPAA violations), Mitch Wagner of Information Week notes that Congress and the administration are still arguing about the subjective "harm" threshold that HHS inserted into the breach disclosure rules, as seen in a letter from six Congressmen to HHS Secretary Kathleen Sebelius. This provision gives entities that suffer a data loss or theft the option of not reporting the disclosure, if the entity believes no harm will come to individuals because of the breach. We're with Congress on this one. Requirements like accounting of disclosures, which apply both to health information under HIPAA and to government information like IRS tax records, don't have these sorts of exceptions (HIPAA accounting used to be waived for routine disclosures in the course of treatment or normal business operations, but the HITECH Act changed that and now all disclosures must be recorded). The biggest problem is the subjectivity (and the fact that the subjective decision is in the hands of the breach sufferer). Is "harm" intended to mean actual financial harm? Identity theft? Embarrassment? Nothing in the rules provides any guidance on this. Perhaps had these rules been in place, the public would not have heard about the UCLA Medical Center staff members who viewed Britney Spears' medical records; it would seem they were driven only by celebrity curiosity, rather than a desire to use the information they saw for any particular purpose, so did that cause "harm" to Spears or not, particularly if she didn't know about it? HHS has acknowledged that it chose to deviate from the wording of the law in the HITECH Act and added the no-harm exception in response to multiple comments it received on the draft version of the breach notification rules. It's not hard to imagine the organizations that were the source of those comments, given that the final rule now delegates to HIPAA-covered entities and business associates the responsibility for determining whether a loss of health information is significant or not.

Security quote of the week

Another article focusing on policies and controls to prevent the use of peer-to-peer file sharing technologies in the wake of last week's Congressional ethics committee disclosure contains the best concise statement we've seen in a long time on the problem facing information security programs today. Tom Kellermann of Core Security Technologies is quoted in the NextGov article as saying: "Policy compliance in the absence of a dynamic audit is impossible, [and any] assumption that only insiders can violate policies" is false.

A recurring theme in posts seen in this space is that too often organizations write and communicate well-meaning and appropriate security policies, but then assume that the policies will be followed without implementing any means of enforcement. This problem applies equally to government agencies and private sector organizations, and in some cases is even the result of the sort of risk-based security management approach that organizations should be following. If, however, organizations choose to leave the risk of policy violations unmitigated, they don't have much credibility when they express shock that an incident occurred contrary to policy.

Monday, November 2, 2009

Congressional breach: balancing security with convenience

Whether or not you believe, as some pundits appear to, that the call for an inquiry into cybersecurity practices in the House of Representatives after the details of an ethics committee inquiry were disclosed is a smoke screen designed to divert attention away from the behavior under investigation, the situation provides a useful illustration of what can happen when user desires for convenience trump security controls. According to numerous reports, the inquiry information was inadvertently disclosed by a staffer who both put sensitive information on a personal computer and also exposed the contents of that computer by running peer-to-peer file sharing software. As you might expect, copying official files to personal computers goes against existing security policy, and while there are presumably no policies governing whether employees choose to install and use P2P on their personal computers, the federal government has long recognized the particular risk posed by P2P technology, to the point that the FISMA report that agencies fill in and submit to OMB includes questions specifically about P2P (both about banning its use within agencies and making sure that security awareness training addresses P2P file sharing).

The general scenario is reminiscent of the aftermath of the well-publicized laptop theft from the home of an employee of the Veterans Administration, who was not using a personal computer, but who had placed VA records with personally identifiable information on his laptop to work on at home, in direct violation of VA security policy. In both of these cases it seems unlikely that the government employees meant any harm through their actions, and were seeking only to extend their government workdays by taking work home with them. This tension between the restrictions or constraints on business practices imposed by security and the demands of information economy workers to have access to their work whenever and from wherever they want it is something security managers have to deal with every day.

Every organization must find the right balance between employee convenience and appropriate security measures, security policies, and the mechanisms put in place to enforce those policies when voluntary compliance falls short. In a few key ways the legislative branch is especially susceptible to erring on the side of employee convenience at the expense of security. While the houses of Congress are sometimes considered cohesive organizational entities, the reality is that just about every member of Congress and every committee has its own information technology operations, and the members in particular need to conduct business not just in Washington, DC but also from office locations in their home states and districts. The result is essentially a wide area network with at least 535 remote locations, from all of which elected officials and their staffs need to be able to conduct business just as if they were on Capitol Hill. The geographical distribution of computer system users, along with office personnel who vary widely in their security awareness and technical savvy, combines to produce a bias in favor of facilitating work at remote locations (including local storage of sensitive information), rather than imposing security-driven constraints on business operations. The technical means are readily available to help prevent a recurrence of events such as this latest disclosure, but what must first change is the organizational bias in favor of letting workers, no matter how well intentioned, take actions in the name of convenience or efficiency that put sensitive information assets at risk.