
Saturday, March 28, 2009

FISMA provides insufficient foundation for trust

There seems to be an inordinate amount of attention on FISMA in the ongoing debate about how to establish a sufficient trust framework among public and private sector participants in health information exchange. Federal government security executives seem especially focused on the idea, still under development, that the security and privacy requirements agencies are held to under FISMA should somehow be extended to non-government entities when those entities participate in an information exchange with the federal government. Leaving aside for the moment the suggestion that there may be a more suitable foundation (such as health information privacy regulations) on which to base minimally acceptable security and privacy requirements, there are at least three major problems with using FISMA as the basis of trust among information exchange participants.

The biggest issue is that while many of the security and privacy standards used and guidance followed by federal agencies under FISMA are common references, the provision of "adequate" security and privacy protections is entirely subjective, and as such differs from agency to agency. While all agencies use the security control framework contained in NIST Special Publication 800-53 to identify the sorts of measures they put in place, there are very few requirements about how these controls are actually implemented. Recent annual FISMA reports (including the most recently released report to Congress for fiscal year 2008) highlight the increase in the number and proportion of systems that receive authorization to operate based on formal certification and accreditation. The decision to accredit a system means that the accrediting authority (usually a senior security officer for the agency operating the system) agrees to accept the risk related to putting the system into production. Almost all federal agencies are self-accrediting, and each has its own risk tolerance in terms of what risks it finds acceptable and what it does not. Two agencies might not render the same accreditation decision on the same system implemented in their own environments, even using the same security controls. This lack of consistency regarding what is "secure" or "secure enough" presents an enormous barrier to agreeing on an appropriate minimum set of security provisions that could be used as the basis of trust among health information exchange participants, both within and outside the government.
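
To make the point concrete, here is a purely hypothetical sketch (not part of NIST guidance or any actual agency process) of how two accrediting authorities applying different risk tolerances could reach opposite authorization decisions for the same system, with the same controls and the same assessed residual risk:

```python
# Hypothetical illustration only: agency risk tolerances are not published
# values, and real accreditation decisions weigh far more than one score.

RISK_LEVELS = ["low", "moderate", "high"]

def accreditation_decision(residual_risk: str, agency_tolerance: str) -> str:
    """Accredit the system only if its residual risk does not exceed
    the accrediting authority's tolerance."""
    if RISK_LEVELS.index(residual_risk) <= RISK_LEVELS.index(agency_tolerance):
        return "authorization to operate granted"
    return "authorization denied (or granted only with conditions)"

# Same system, same controls, same assessed residual risk...
assessed_risk = "moderate"

# ...but two agencies with different appetites for risk.
print("Agency A:", accreditation_decision(assessed_risk, agency_tolerance="moderate"))
print("Agency B:", accreditation_decision(assessed_risk, agency_tolerance="low"))
```

The mechanics are beside the point; what matters is that nothing in FISMA or its supporting guidance requires two agencies to draw that line in the same place.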

Perhaps just as troubling, by focusing on FISMA requirements, the government is implicitly de-emphasizing the protection of privacy. To be sure, FISMA addresses privacy, most obviously in the requirement that all accredited systems be analyzed to identify the extent to which they store and make available personally identifiable information. These privacy impact assessments typically result in public notice being given detailing the data stored in and used by any system that handles personally identifiable information. But FISMA does not specify any actions for protecting privacy, nor does its accompanying NIST guidance include any controls to address privacy requirements stemming from the wide variety of legislation and regulatory guidance related to privacy.

It's not entirely clear what it would mean for a non-government organization to try to comply with FISMA requirements. As noted above, most federal agencies are self-accrediting, so presumably the determination of whether a non-government system is adequately secured against risk would rest with the organization itself. The basis for this determination (including the private-sector organization's risk tolerance) might be more or less robust than corresponding decisions made by federal agencies, so simply requiring non-government organizations to follow a formal certification and accreditation process cannot establish a minimum security baseline any more than it does within the government. Few organizations outside of government follow NIST Special Publication 800-53, but many follow the similarly rigorous ISO/IEC 27000-series security standards, so these organizations arguably would not need to adopt 800-53 if they already comply with an acceptable security management standard. (NIST has been working on an alignment matrix between 800-53 and ISO 27002, partly as a reflection of the similarity between the two standards and also in an effort to better harmonize public and private sector approaches.)
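
As a rough sketch of what relying on such an alignment matrix might look like in practice, the snippet below checks whether an organization's implemented ISO/IEC 27002 clauses appear to cover a handful of 800-53 controls. The specific control-to-clause pairs are illustrative placeholders rather than NIST's official mapping, and a real crosswalk covers hundreds of controls:

```python
# Illustrative placeholder mapping from a few NIST SP 800-53 controls to
# ISO/IEC 27002 clause numbers; consult NIST's published crosswalk for the
# authoritative pairings.
CROSSWALK = {
    "AC-2 (Account Management)": ["11.2.1"],
    "CP-9 (Information System Backup)": ["10.5.1"],
    "AT-2 (Security Awareness)": ["8.2.2"],
}

# Clauses a hypothetical organization has implemented under its
# ISO-based security management program.
implemented_iso_clauses = {"11.2.1", "10.5.1"}

for control, clauses in CROSSWALK.items():
    covered = all(clause in implemented_iso_clauses for clause in clauses)
    status = "appears covered" if covered else "gap to assess"
    print(f"{control}: {status}")
```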

Even if some agreement can be reached wherein non-governmental entities agree to comply with FISMA security requirements, the law as enacted contains no civil or criminal penalties for failure to comply. Federal agencies judged to be doing a poor job with their information security programs receive poor grades on their annual FISMA report cards (fully half the reporting agencies received a grade of C or below for fiscal year 2007), but there is no correlation between budget allocations and good or bad grades, and no negative impact on poorly performing agencies other than bad publicity.

A better alternative (and one more consistent with master trust agreements like the NHIN Data Use and Reciprocal Support Agreement) would use privacy controls as a basis for establishing trust. One challenge in this regard is the number of different privacy regulations that come into play, making the HIPAA Privacy Rule alone (or any other single piece of privacy legislation) insufficient. Building a comprehensive set of privacy requirements and corresponding controls to be used as the foundation for trust in health information exchange is a topic we'll continue to address here.

Monday, March 16, 2009

A need for more meaningful security testing

The recently released fiscal year 2008 report to Congress on FISMA implementation once again highlights government-wide progress in meeting certain key objectives for agency information systems. Among these are the periodic testing of security controls, which is required for every system in an agency's FISMA system inventory under one of the "Agency Program" requirements in the law (44 U.S.C. §3544(b)(5)), and an annual independent evaluation of "the effectiveness of information security policies, procedures, and practices" (44 U.S.C. §3545(a)(2)(A)) for a representative subset of each agency's information systems. The FY2008 report indicates that testing of security controls was performed for 93 percent of the 10,679 FISMA systems across the federal government, a slight decrease from the 95 percent rate in fiscal 2007, but still reflecting a net increase of 142 systems tested. This sounds pretty good except for one small detail: there is no consistent definition of what it means to "test" security controls, and no prescribed standard under which independent assessments are carried out. With a pervasive emphasis on control-compliance auditing in the government, such as the use of the Federal Information System Controls Audit Manual (FISCAM), too much attention still goes to verifying that security controls are in place rather than to checking that they are performing their intended functions.
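
A quick back-of-the-envelope check, assuming the reported figures are exact, shows what those percentages imply in absolute terms (the FY2007 inventory size below is derived from the report's numbers, not stated in it):

```python
# Figures cited in the FY2008 FISMA report to Congress.
fy2008_inventory = 10_679
fy2008_test_rate = 0.93
fy2007_test_rate = 0.95
net_increase_tested = 142

# Systems tested in FY2008, per the reported rate.
fy2008_tested = round(fy2008_inventory * fy2008_test_rate)          # ~9,931

# Working backward: systems tested in FY2007, and the inventory size
# implied by the 95 percent rate (an approximation, not a reported figure).
fy2007_tested = fy2008_tested - net_increase_tested                 # ~9,789
fy2007_inventory_implied = round(fy2007_tested / fy2007_test_rate)  # ~10,304

print(fy2008_tested, fy2007_tested, fy2007_inventory_implied)
```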

As the annual debate resurfaces over the effectiveness (or lack thereof) of FISMA in actually improving the security posture of the federal government, there will presumably be more calls to revise the law to decrease its emphasis on documentation and shift attention toward making the government more secure. The generally positive tone of the annual FISMA report is hard to reconcile with the 39 percent year-over-year growth in security incidents reported to US-CERT by federal agencies (18,050 in 2008 vs. 12,986 in 2007). There is an opportunity for security-minded executives to shift some resources from security paperwork exercises to penetration testing or other meaningful IT audit activities. This would align well with efforts already underway at some agencies to move toward continuous monitoring and assessment of systems and away from the current practice of comprehensive documentation and evaluation only once every three years under federal certification and accreditation guidelines. Insufficient funding is often cited as a reason for not doing more formal internal and external security assessments such as penetration tests. The current FISMA report suggests that security resources may not be applied according to business risk, system sensitivity or criticality, or similar factors: the rate of security control testing is the same for high and moderate impact systems, and only slightly lower for low impact systems (91 percent vs. 95 percent). With just under 11 percent of all federal information systems categorized as high impact, agencies might sensibly start with those systems as the focus for more rigorous security control testing, and move forward from there.
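
Using the figures cited above, a rough estimate of the scale of that starting point (the counts are derived from the report's percentages, not reported directly):

```python
# Rough sizing of a risk-based testing priority list, derived from the
# FY2008 figures cited above; actual per-agency counts will differ.
total_systems = 10_679
high_impact_share = 0.11   # "just under 11 percent" categorized as high impact

high_impact_systems = round(total_systems * high_impact_share)   # ~1,175
remaining_systems = total_systems - high_impact_systems

print(f"Start with roughly {high_impact_systems} high-impact systems,")
print(f"then extend more rigorous testing to the other {remaining_systems}.")
```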