
Monday, December 28, 2009

GAO weighs in on need for consistent data classification

In the wake of the recent release of the Report and Recommendations of the Presidential Task Force on Controlled Unclassified Information, the Government Accountability Office on December 15 released a report on Managing Sensitive Information that addresses many of the same issues raised by the task force. The GAO report focuses specifically on the fact that a multi-agency report containing sensitive-but-unclassified ("SBU") information about U.S. nuclear facilities was published on a publicly available Government Printing Office website. While a number of factors contributed to this inadvertent disclosure, the GAO report highlighted the lack of consistent data classification terminology among the federal agencies involved as a significant problem, and recommended that the agencies working with this information create an interagency agreement on the designation, marking, and handling of sensitive information.

The presidential memorandum that created the task force on controlled unclassified information (ironically issued just three weeks after the nuclear site information was published) noted some 107 different classification schemes in use among federal agencies for sensitive-but-unclassified information or its equivalent. In the case of the nuclear facility report, the designation problems included the use of an international sensitivity marking that has no legal standing in the United States, followed by a recommendation that the document be labeled sensitive but unclassified despite an apparent lack of understanding, in both executive agencies and legislative offices, of what an SBU designation means and implies; the result was what GAO called an incorrect determination that the material could be published. Unfortunately, this incident is just one among many cases of inappropriate disclosure where the problem lies not in malicious intent, but in a lack of awareness and understanding of relevant security policies and the actions needed to follow them.

Sunday, December 27, 2009

3 major 2009 privacy trends to watch next year

As the result of a highly unscientific review of big developments on the privacy front in 2009, here are 3 major trends from the past year that we predict will continue to draw attention in 2010.
  1. Increasing likelihood of a federal law on disclosure of data breaches involving personal information. During 2009 there was significant movement on national data breach notification laws in the 111th Congress, including the Data Accountability and Trust Act in the House of Representatives and two bills voted out of the Senate Judiciary Committee, among them the Personal Data Privacy and Security Act. Versions of both of these bills were introduced in previous Congressional sessions, but none progressed as far as these have, making passage of a national data breach law in 2010 a feasible proposition. The enhanced privacy provisions in the HITECH Act, whose breach disclosure rules for personal health information have already gone into effect, may have provided a preview of how this sort of legislation will look.
  2. Continuing divergence of privacy protections in the U.S. versus the European Community. While domestic trends included strengthening of privacy protections in some important contexts such as health information, a series of developments abroad served to widen the existing divide between E.U. and U.S. privacy approaches. E.U. additions this year included the designation of IP addresses as personally identifiable information, mandatory opt-in for the use of cookies, and stronger penalties in the U.K. for misuse of personal data in violation of Data Protection Act §55. European Community privacy protections have long been viewed as stronger than those in the U.S., due in large part to a fundamentally different philosophy that focuses first on the privacy interests of individuals and defaults to rules favoring information protection rather than disclosure.
  3. Escalation of privacy concerns as the primary obstacle to achieving widespread information exchange. This issue is most notable in health care, but also surfaced in e-commerce, consumer credit markets, and even national security contexts such as terrorism information, where information sharing imperatives may be sufficiently critical to warrant moving ahead without fully addressing security and privacy issues. A tangential trend is increased awareness of personal privacy control, driven by highly publicized events late in the year such as Facebook's changes in privacy policy and practices and the Supreme Court's decision to hear an appeal of a case involving expectations of privacy in the workplace.

Wednesday, December 23, 2009

New proposed FISMA metrics suggest key technology recommendations

Earlier this month, the National Institute of Standards and Technology issued a request for comments on a draft set of proposed security metrics that OMB is considering using for agencies' annual reporting as required under the Federal Information Security Management Act (FISMA). The comment period runs through January 4, 2010, giving all interested parties, including members of the public, the chance to point out aspects of information security management that OMB and NIST may be overlooking. Taking a quick read through the draft recommended metrics (easy to do since they are presented in bullet-point form in a 22-page slide presentation) can provide a sense of where the government's thinking on information security is evolving, and also gives an indication of some of the technologies and practices that OMB thinks agencies should be adopting, even where formal recommendations (in the form of a memorandum) have not been issued.

In the past, information security reporting by government agencies has focused on historical perspectives produced at relatively infrequent (annual or quarterly) intervals, so one interesting theme in the proposed metrics is the emphasis on real-time reporting capabilities in general, and automated capabilities for achieving situational awareness in particular. OMB proposes asking agencies whether they can provide real-time feeds about system, hardware, and software inventories; external connections including Internet and remote access channels; the number of employees and contractors with log-in credentials, security awareness training, and significant information security responsibilities; and integrated security status and monitoring. In every category, the questions as currently worded allow for the possibility that a given agency does not have real-time or even automated capabilities for reporting the requested information, but in most cases, if no capability exists agencies are asked to provide a date by which they will have such capabilities in place. This language implies a recommendation or expectation that certain practices and technologies be implemented, at least to facilitate reporting (an online reporting tool called CyberScope went live in October). Moving in the direction of continuous monitoring and reporting is a consistent trend from NIST, seen most recently in the revisions to its Special Publication 800-37, which among other things announce an intention to move away from triennial certification and accreditation and towards more continuous monitoring of security controls for information systems.
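To make the idea of a real-time feed concrete, here is a minimal sketch (in Python) of the kind of automated, timestamped inventory snapshot an agency could generate on a schedule instead of compiling reports by hand once a year or quarter. The JSON structure and field names are illustrative assumptions, not CyberScope's actual submission format.

    #!/usr/bin/env python3
    # Minimal sketch: emit a timestamped, machine-readable inventory
    # snapshot for one host. Field names are hypothetical, not the
    # CyberScope submission format.
    import json
    import platform
    import socket
    from datetime import datetime, timezone

    def inventory_snapshot():
        return {
            "reported_at": datetime.now(timezone.utc).isoformat(),
            "hostname": socket.gethostname(),
            "os": platform.platform(),
            "python_version": platform.python_version(),
        }

    if __name__ == "__main__":
        # In practice this would be pushed to a central collector on a
        # schedule; here it just prints to stdout.
        print(json.dumps(inventory_snapshot(), indent=2))

Run on a schedule and aggregated centrally, even a feed this simple moves reporting away from annual snapshots and toward the continuous monitoring NIST is advocating.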

One possible way to interpret some of the questions in OMB's proposal is that agencies may be expected in the near future to acquire and implement more technical measures to help enforce information security policies, regulations, and obligations that already exist. For example, questions under hardware inventory ask about agency abilities to detect and block the introduction of unauthorized hardware to any device on agency networks, and under software inventory similar questions ask about the ability to prevent unauthorized software from being installed on network-connected devices. These capabilities are most often associated with technical security measures such as network access control, end-point security, and monitoring of USB ports and other workstation I/O channels. Many agencies have policies in place forbidding, for instance, the use of USB thumb drives or other removable storage media, but not all have implemented the corresponding technical controls to monitor and enforce compliance with such policies (a minimal example of such a check is sketched below). Similar disconnects between policy and enforcement exist at many agencies where third-party or even personal computers can be connected to government networks. In some cases agencies rely on employee and contractor execution of acceptable use or rules of behavior agreements, rather than on technology to monitor network connections, scan clients attempting to connect, and alert when violations occur. The proposed FISMA reporting questions also ask about use and validation of standard configurations for computing platforms, presumably to determine to what extent agencies are following the Federal Desktop Core Configuration (FDCC) mandated beginning in early 2008 or similar secure configuration guidelines.
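As an illustration of the gap between policy and technical enforcement, the following sketch (assuming a Linux host) flags removable USB storage devices by reading sysfs attributes. A real end-point security agent does far more, but even a check this simple is the difference between a policy that forbids thumb drives and one that can detect them.

    #!/usr/bin/env python3
    # Minimal sketch: detect removable USB storage on a Linux host by
    # inspecting sysfs. Illustrative only; commercial end-point agents
    # also block devices and report centrally.
    import os

    SYS_BLOCK = "/sys/block"

    def removable_usb_devices():
        found = []
        for dev in os.listdir(SYS_BLOCK):
            dev_path = os.path.join(SYS_BLOCK, dev)
            try:
                # The 'removable' attribute is "1" for removable media.
                with open(os.path.join(dev_path, "removable")) as f:
                    removable = f.read().strip() == "1"
            except OSError:
                continue
            # Devices attached over USB have "usb" in their resolved sysfs path.
            if removable and "/usb" in os.path.realpath(dev_path):
                found.append(dev)
        return found

    if __name__ == "__main__":
        devices = removable_usb_devices()
        if devices:
            print("Policy violation: removable USB storage detected:", devices)
        else:
            print("No removable USB storage devices found.")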

In proposed questions about incident detection, the wording may indicate a shift, however subtle, in expectations for agency practices and the need to include those in FISMA reports. For example, in the OMB draft metrics, the language presumes that agencies are conducting controlled network penetration testing. This has always been a requirement under FISMA, but FISMA reporting to date has limited questioning to incident detection tools in use, and has never asked specifically about agency penetration testing. In a format similar to previous FISMA report questions on incident detection and response, the proposed metrics include a category for data leakage protection, asking agencies what technologies (if any) are used to prevent sensitive information from being sent outside agency network environments. Aside from a directive issued in 2006 (OMB Memorandum 06-16) that instructed agencies to encrypt agency data stored on laptops and other mobile devices, no comprehensive guidance or requirement has been issued for federal agencies regarding data leakage protection (or, as it is more commonly known in the security market, "data loss prevention"), although it has been a popular topic in government security policy discussions by the Information Security and Privacy Advisory Board (ISPAB) and other bodies debating government information security priorities.

On balance, the new metrics proposed by OMB appear to be a small step forward in reporting information more representative of agency security posture than previous FISMA report requirements, although they are likely to be insufficient to support some of the more significant revisions to FISMA that have been proposed by Sen. Tom Carper and others in Congress over the past 18 months. To their credit, NIST and OMB do appear to be positioning to leverage relevant government-wide initiatives, such as HSPD-12 credentials and the consolidation of agency external connections under the Trusted Internet Connection program. Practically speaking, the intended benefit from the information addressed in the proposed metrics will not be realized until a greater proportion of agencies take action to implement the capabilities and security best practices NIST recommends.

Monday, December 21, 2009

Information sharing imperatives may trump security

While the technical infrastructure required to support information sharing doesn't really change from context to context, security and privacy requirements applying to senders and receivers of information vary quite a bit depending on the domain. In the health information exchange arena, these differing requirements and the inability to reconcile them have served to slow participation in health information exchange initiatives such as the Nationwide Health Information Network (NHIN). One of the information sharing solutions often held up as a model for other domains is the federal Information Sharing Environment (ISE), developed as a trusted infrastructure for sharing information about terrorist threats among federal, state, and local intelligence, law enforcement, defense, homeland security, and foreign affairs organizations. Concerns over establishing and maintaining appropriate protections for the data shared in the type of information exchange envisioned for the ISE have resulted in less actual sharing of information than was intended, a problem the administration is now trying to address.

Noting the proliferation of distinct and often incompatible data classification schemes among the organizations possessing relevant information, the administration in May directed an interagency task force to review procedures for classifying sensitive-but-unclassified data and make recommendations on ways to standardize classification guidance to facilitate the exchange of this information. The recommendations were released last week, and they assign such priority to greater information sharing that the lack of consistent or comprehensive security controls should not stand in its way. This finding might seem counter-intuitive at first glance given the sensitivity normally associated with terrorism data, but the recommendation is actually an excellent example of risk-based decision making on security. Simply put, the value of having more of this data available to those who need it to protect the nation from terrorist threats outweighs the risk from the potential disclosure of this information beyond its intended audience. There is certainly an implied expectation that security will continue to be addressed and that more robust security controls will be applied to information exchanges as agencies come to agreement on the technologies and procedures to be used, but in the meantime, the report determines that the anti-terrorism mission should not be constrained by insufficient sharing of sensitive but unclassified information.

Saturday, December 19, 2009

Supreme Court case to consider limits on workplace privacy

The Supreme Court last week agreed to hear arguments in a case on employee privacy and the extent to which government agencies can monitor the content of personal communications made by their employees while using government-owned equipment. The case involves a police sergeant on the City of Ontario, California, SWAT team who routinely used his city-issued pager to send and receive personal messages, many of which were found to be sexual in nature. The case (Quon v. Arch Wireless) is only partly about the appropriateness of the content, or the fact that most of the pager usage by the individual in question was personal rather than business-oriented. The central issue is whether the city violated a Constitutional right to privacy (under an interpretation of the Fourth Amendment's protection against unreasonable search) by inspecting the content of the text messages sent by the sergeant. The Ninth Circuit Court of Appeals concluded that the city did in fact violate the sergeant's privacy, so it is the city that appealed the decision to the Supreme Court. While the issues at the heart of the case are the subject of considerable disagreement among legal theorists and privacy advocates, the particulars of this specific case may present the Court with an opportunity to settle the dispute without establishing a broad or significant precedent about privacy in the workplace.

In the United States, the general rule is that employees have almost no right to privacy in the workplace when using employer-owned equipment such as phones, computers, and other communications devices, as long as employees have been notified by their employer that monitoring of their communications is taking place. (The situation is drastically different in the European Community and other foreign jurisdictions, but of course the Supreme Court's jurisdiction does not extend beyond the U.S.) There are distinctions in U.S. law regarding whether the monitoring constitutes "interception" — such as listening in on calls or inspecting email in transit — in which case the U.S. Wiretap Act (for telephone calls) and the Electronic Communications Privacy Act (for electronic communication such as email) generally prohibit monitoring. Both of these laws contain exceptions for situations where monitoring is for ordinary business use and where prior consent to monitoring has been given by employees. In the Ontario case, the police department had a formal policy in place asserting a right to monitor electronic communications by employees, and employees were told explicitly that they had no expectation of privacy. That might have been the end of the story had not a somewhat contradictory informal policy been adopted by the SWAT commander to whom the sergeant reported, under which officers were told that if they paid for pager usage in excess of a 25,000-character monthly limit, their messages would not be inspected. Legal counsel for the sergeant argued, and the Ninth Circuit panel agreed, that the informal policy overrode the official one, and that the sergeant's rights had therefore been violated under the Fourth Amendment and the provisions of the Stored Communications Act (18 U.S.C. §§2701-2711). While the larger issue at stake is the extent to which government employees can expect their workplace communications to remain private, the Court may choose not to weigh in on this as part of this case. Its consideration is also likely to be limited to workplace privacy for government employees, although most of the relevant privacy laws also apply to private sector organizations.

Leaving aside for the moment the lascivious nature of the sergeant's personal communications (which would violate the acceptable use policies of many private and public sector organizations), the Supreme Court may choose not to make a legal interpretation on employee privacy in the workplace because the law in that area is already clear. The situation may have been more likely to come up in a government setting, given that not all private sector organizations have the same rules or practices involving stored communications and saving electronic messages that made the review of the sergeant's text messages possible. Employers have been fairly consistent in asserting their rights to monitor employee usage of employer property, but it is also common practice to allow occasional or incidental personal use of employer property, which makes it hard to draw a legal line between what constitutes appropriate use and what is too much. The content of the messages in this case makes the conduct seem more egregiously inappropriate, but the Ninth Circuit panel at least thought it was unfair for the Ontario Police Department to tell its employees their communications wouldn't be inspected and then change its mind. In this context the take-away from this case may be less a reinterpretation of employee privacy rights in the workplace than a reinforcement of the need for employers to create, make employees aware of, and follow explicit acceptable use and privacy policies.

Is HIPAA enforcement getting any stronger?

Following the disclosure in November that employees at University Medical Center of Southern Nevada (UMC) had been sending patient information outside the hospital to personal injury lawyers and other outsiders, the FBI opened a criminal investigation into the systematic leaks of patient data. According to reports in the Las Vegas Sun, one or more UMC insiders have been selling the hospital's daily patient registration forms — including names, birth dates, social security numbers, and medical condition information — so that personal injury lawyers could solicit clients. With the high level of scrutiny on UMC after the leaks became public, it seems the hospital has a less than stellar record of complying with privacy laws, particularly HIPAA.

In an interesting take on the issue, a more recent article in the Sun suggests UMC shouldn't be too concerned about the breach, noting the extreme rarity with which HIPAA violations have been punished in the years since the HIPAA Privacy Rule went into effect. While HIPAA enforcement history is a matter of public record and there is no question that the imposition of harsh penalties has been the exception rather than the rule, among the provisions of the HITECH Act passed in February was the strengthening of penalties for HIPAA violations. These stronger provisions are noted in the Sun article, but the prospect of criminal prosecution isn't considered to be very likely. What this analysis overlooks is the specific language on HIPAA enforcement in the HITECH Act, which both requires a formal investigation and mandates the imposition of penalties in cases of "willful neglect" (HITECH Act Subtitle D, §13410). It's not trivial for investigators to show willful neglect, particularly proving that non-compliance was both known and ignored or insufficiently remedied in the past, but the early public information on this investigation suggests a long-term pattern of HIPAA non-compliance despite widespread awareness of HIPAA requirements among UMC staff. Cases just like this appear to be exactly what the law's improved enforcement provisions were intended to address.

Friday, December 18, 2009

The need to encrypt wireless data is a lesson still being learned

The Wall Street Journal published an article on December 17 reporting that the U.S. military has discovered that wireless video feeds from unmanned Predator drones operating in Iraq are often intercepted by enemy insurgents. The insurgents' ability to capture the wireless data is apparently facilitated by the fact that the video transmissions are not encrypted, allowing anyone in the geographical vicinity of the drones to intercept the video feeds using inexpensive, commercially available wireless sniffing software. You might think that encryption would be an operational requirement for the wireless transmission of such sensitive intelligence data gathered in the field, but statements from defense and intelligence officials suggest that other functional priorities — such as transmission over large distances with potentially limited bandwidth — may have trumped security considerations. Most surprising is the acknowledgment by the military that the vulnerability exposed by using unencrypted transmissions has been known for nearly 20 years, yet still hasn't been mitigated, in part because U.S. military officials "assumed local adversaries wouldn't know how to exploit it."
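For a sense of how little is involved in protecting a transmission like this at the application layer, here is a minimal sketch using AES-GCM authenticated encryption via the third-party Python cryptography package. It is a general illustration of the technique, not the military's actual link protocol; the genuinely hard parts, key distribution to receivers in the field and bandwidth constraints, are exactly what the officials' statements allude to.

    #!/usr/bin/env python3
    # Minimal sketch: authenticated encryption of a payload with AES-GCM.
    # Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # shared secret, distributed out of band
    aesgcm = AESGCM(key)

    frame = b"one video frame's worth of bytes"  # stand-in payload
    nonce = os.urandom(12)  # must be unique per message under the same key
    ciphertext = aesgcm.encrypt(nonce, frame, None)

    # A passive sniffer sees only nonce + ciphertext; a receiver holding
    # the key recovers (and integrity-checks) the original frame.
    assert aesgcm.decrypt(nonce, ciphertext, None) == frame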

This scenario exposes what must be a glaring risk assessment weakness in the security posture for unmanned drones, as any characterization of the threat environment in Iraq and other operational theaters appears to have underestimated the knowledge and technical capabilities of the adversaries representing threat sources to U.S. military operations. The military is now moving to upgrade the network infrastructure involved to add encryption to its wireless transmissions, although according to a report from the Air Force that has drawn the ire of Congressman Jim Langevin and others, the work to add encryption to video transmissions from drones is not expected to be completed until 2014.

While the U.S. military places a great emphasis on information assurance and is often held up as an example of robust security practices, the long-term vulnerability with its video surveillance operations is reminiscent of widely publicized wireless data breaches in the commercial retail sector. Way back in 2002, large retailers began to implement security measures for wireless network communication within their stores. Short-range wireless transmissions without encryption were common practice at the time, for purposes such as communicating transactions between computerized cash registers and back-office financial management and inventory control systems. When retailers such as Best Buy discovered that hackers were intercepting customer credit card data by sniffing wireless traffic sent from point-of-sale terminals, they quickly moved either to encrypt their wireless transmissions, or (like Best Buy) opted to stop using wireless cash registers altogether.

More recently, TJX suffered an enormous data breach at its TJ Maxx stores, reported in 2007 but starting as early as 2005. The severity of the breach was attributed in part to the company's persistent storage of unencrypted customer data (in violation of the Payment Card Industry (PCI) Data Security Standard), but the attack was also enabled by the company's use of ineffective wireless security, including the use of Wired Equivalent Privacy and, in some cases, no encryption at all. The industry's response to TJX's breach has been to revise and strengthen PCI requirements and to adopt stronger wireless encryption where sensitive or personal information and transactions continue to be transmitted using wireless networks.

What all these cases have in common is a failure — made blatantly obvious only after attacks succeeded — to identify and implement appropriate security controls commensurate with the risk resulting from existing known threat sources and existing known vulnerabilities. It also seems likely that in all cases the failure in the risk analysis was mischaracterization or underestimation of threats, rather than an undervaluation of the impact associated with a breach. This type of mistake was acknowledged explicitly in the case of the U.S. military and its Predator video feeds, and is implied by Best Buy, TJ Maxx, and other retailers choosing not to use encryption to protect their wireless transmissions. The lesson here is simple: don't overlook any threat sources when assessing risk, and don't underestimate the capabilities of the threats that are identified.

Friday, December 11, 2009

If you use Facebook, don't wait to change your privacy settings

In a privacy policy change announced recently and effective on December 9, social networking supersite Facebook made significant changes to the default privacy settings for all Facebook users. In some cases the newly announced default settings disclose more information to more of the Facebook user population and expose that information to search engines like Google, while in other cases (at least according to Facebook's statements about the changes) they are merely continuations of existing disclosure standards, albeit now with more fine-grained access control settings available to users to constrain the visibility of their own information. The changes that have garnered the most attention in the press relate to a core set of personal information that Facebook now makes available to everyone, regardless of preferences users might have had for controlling disclosure of that information in the past.
"Certain categories of information such as your name, profile photo, list of friends and pages you are a fan of, gender, geographic region, and networks you belong to are considered publicly available to everyone, including Facebook-enhanced applications, and therefore do not have privacy settings. You can, however, limit the ability of others to find this information through search using your search privacy settings."
While the level and granularity of privacy settings where users may set preferences has increased, some global privacy settings that were previously available have been removed, in addition to the basic set of information items now considered "publicly available" regardless of a user's preferences. One example is the single setting that used to allow users to prevent any of their information from being made available to Facebook applications. Most troubling to privacy advocates seems to be the explicit move by Facebook towards openly sharing users' information. Facebook has angered users in the past by saying that users "own all of the content and information you post on Facebook" while at the same time claiming unrestricted rights to do just about anything Facebook wants with that data. The company's stance has softened somewhat in the past few months, and language in the current privacy policy is not as strong as it was back in February, but it is also understandable that some users are considering canceling their accounts entirely in response to the latest changes and the re-categorization of key profile information as "public."

Even among those aware that changes have occurred, many Facebook users may not realize that unless and until a user takes explicit action to modify privacy settings, the new changes have overwritten any previous disclosure preferences those users expressed. The global default seems to make profile information and content users store on Facebook available to all friends and friends of friends (a setting Facebook calls "Friends and Network"), which for many users is a substantial increase in the user population that now has access to their information. Also, because the changes went into effect for all users, the new settings remain in effect until a user changes his or her own privacy settings, something users are prompted to do the first time they log in to Facebook after the change occurred.

There is a precedent for Facebook reconsidering moves broadly deemed too invasive of privacy, and there are explicit terms within Facebook's Statement of Rights and Responsibilities (see part 13, "Amendments") that allow for unpopular changes to be put to a vote of the membership, although for the vote to be binding, 30% of active users (approximately 105 million, based on current total user estimates) must participate. Facebook ultimately chose to cancel its controversial Beacon program after widespread outcry over the advertising application's reach into online behavioral tracking. It remains to be seen whether enough users are sufficiently upset by the latest Facebook changes to mount a coordinated effort to roll back to the previous privacy settings and approach.

House passes Data Accountability and Trust Act

Legislation passed by the House of Representatives this week (H.R. 2221, the Data Accountability and Trust Act) includes provisions both for national standards on data breach notification and for new responsibilities and consumer protections that would require data brokers and other holders of personal information to verify the accuracy of the information they hold on individuals.

With parallel action on data breach disclosure bills in the Senate, a lot of the current coverage on the House passage focuses on the breach notification provision in H.R. 2221, which simply and clearly says that anyone
"that owns or possesses data in electronic form containing personal information shall, following the discovery of a breach of security of the system maintained by such person that contains such data notify each individual who is a citizen or resident of the United States whose personal information was acquired or accessed as a result of such a breach of security." (H.R. 2221 §3)
The proposed law extends breach notification requirements beyond the owners of the data to third party agents who maintain or process the data or service providers who transmit, route, or store the data. In cases involving more than 5000 individuals the notification must be made not only to the individuals affected and the Federal Trade Commission, but also to the major credit reporting agencies. Unless a delay in notification is warranted by law enforcement or national security concerns, notifications are to be made within 60 days of the discovery of the breach.

Separate language in Section 2 of the bill addresses requirements for ensuring the accuracy of personal information collected, assembled, or maintained by an information broker; for providing consumers access to review (at least annually) the personal information about them that the broker holds; and for posting instructions explaining how consumers can request such access. There is also a provision, consistent with most major privacy principle frameworks, that requires information brokers to correct any inaccuracies in personal information, and specifically obligates them to make changes communicated by the individuals whose data they hold, as long as the individual's identity is verified and the request isn't believed to be frivolous or irrelevant. Even in cases where the broker believes the information to be correct, if the disputed information isn't part of the public record, at minimum the information broker must note the disputation and make an effort to independently verify the information. Despite the potential for difficulty with subjective terms like "irrelevant," this provision gives individual consumers, rather than the information broker, the presumption of saying what is accurate. The only exception is when the disputed information the broker has is part of the public record (and has been correctly reported to match the public record), in which case the broker is required to tell the individual where to direct a request to correct the information in the public record. (The decision logic is sketched below.)
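The branching described above can be summarized as a plain decision function; the sketch below is a paraphrase of this post's summary of the bill, not a legal encoding of the statutory text.

    # Hedged sketch: dispute handling per the summary above, not the bill text.
    def handle_correction_request(identity_verified: bool,
                                  frivolous_or_irrelevant: bool,
                                  matches_public_record: bool,
                                  broker_believes_correct: bool) -> str:
        if not identity_verified or frivolous_or_irrelevant:
            return "decline the request"
        if matches_public_record:
            # Broker must point the individual to the public-record source.
            return "refer individual to the public record custodian"
        if broker_believes_correct:
            # Presumption still favors the consumer: record the dispute and
            # attempt independent verification.
            return "note the disputation and independently verify"
        return "correct the record"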

Holding data brokers accountable for making sure their data is accurate before it gets sold or passed on to other entities who might assume its validity is a step in the right direction towards creating mechanisms for asserting data integrity. Such assertions would raise the confidence receivers or secondary users of information might have when making decisions or otherwise using the information they receive. The lack of any sort of statement (much less a guarantee) of accuracy can invalidate analyses based on data of unknown integrity and can lead to erroneous decisions. In the health information exchange context, for instance, these errors can and do cause real harm, such as when the wrong medication doses appear in health records. This problem certainly exists in paper-based record-keeping, but as more and more industries move towards electronic data exchange and data integration solutions, any assumptions about the integrity of the data received through electronic channels are just those — assumptions. Making data owners and aggregators responsible for determining the accuracy of the information they hold should in theory improve the integrity, and therefore the reliability, of the information they sell. In this sense the legal requirement, if enacted, could actually improve the salability of the data offered by information brokers.

Data loss lessons from TSA disclosure

As reported on Wednesday in the Washington Post and elsewhere, the Transportation Security Administration (TSA) inadvertently disclosed sensitive information about its airline passenger screening practices by posting a document containing this information online. The mistakes involved occurred at several levels, including human errors and poor choices in technology, so even though the TSA appears to have been trying to do things the right way (recognizing the sensitivity of the information and therefore redacting the secrets before publishing it), the net result was still disclosure. The TSA's unfortunate experience illustrates several considerations of which any organization managing and using sensitive data ought to be aware.
  • Understand that data is an asset, and must be treated and protected as such. This is especially true of sensitive information like the TSA's ostensibly secret procedures and guidelines, and of any organization's intellectual property that comprises information about confidential business strategies, operational details, or competitive advantages.
  • Know what data you have, and attach data sensitivity categorizations to it. Pretty much everyone is familiar with the military classification system, but in any context it is important to be fully aware of what data you have, the nature of that data, where it is stored, and what its sensitivity level is, whether that's based on internal value or on the potential impact to the organization should the information be disclosed.
  • Where sensitive data must be shared, take appropriate measures to ensure only those with appropriate authorizations can access it, including the use of encryption and other approaches to protect data in transit, in use, and at rest.
  • Choose appropriate tools and technologies to protect sensitive data. Even without knowing the specific technology used to redact the sensitive material in the TSA document that was published, what is clear is that the underlying data wasn't changed; some sort of digital mask or overlay was put in place instead. Using a graphical blackout function may be fine to prevent "shoulder surfing," in much the same way a password field in an online form shows "******" instead of the characters actually entered, but it is not the same thing as rendering the data unreadable (a short sketch after this list shows how easily such masking can be checked). An "old school" approach such as blacking out the sensitive information in a paper copy of the document and then scanning the redacted version to create a digital copy seems unsophisticated, but would not have allowed the disclosure that occurred using whatever digital redaction tool TSA employed.
  • Monitor the flow of information out of your organization. If the simple exercise of copying redacted text and pasting into a different application was sufficient to expose the sensitive data, it's hard to imagine that a content inspection tool through which the TSA document might have passed wouldn't have been able to recognize that the full contents of the document were in fact readable. This is not intended to be a wholesale endorsement of content inspection or any data loss prevention (DLP) technology, but in cases like this where personnel are trying to follow policy and just happen to do that with an ineffective tool, a secondary line of defense provides an added measure of assurance.
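To show how little effort the copy-and-paste check requires, here is a minimal sketch that tests whether a "redacted" PDF still contains its supposedly hidden text, using the third-party pypdf package. The filename and phrases are hypothetical; the point is that if text extraction finds the material, the redaction only masked it visually.

    #!/usr/bin/env python3
    # Minimal sketch: check whether "redacted" text is still extractable.
    # Requires: pip install pypdf
    from pypdf import PdfReader

    BANNED_PHRASES = ["screening procedure", "checkpoint calibration"]  # hypothetical

    reader = PdfReader("redacted_manual.pdf")  # hypothetical file
    extracted = " ".join((page.extract_text() or "") for page in reader.pages)

    for phrase in BANNED_PHRASES:
        if phrase.lower() in extracted.lower():
            # The overlay hid the text visually, but the content stream still has it.
            print(f"Redaction failure: '{phrase}' is still extractable")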
Perhaps more disappointing than the disclosure itself is the response to the incident by TSA and DHS officials, who suggested that since the document was already widely circulated among airline industry organizations, this new disclosure did not represent significant new risk to airline safety. This is essentially saying that since lots of (authorized) people have access to the information already, it probably isn't that hard for an unauthorized person to get access to it. If that is really the case, then the TSA isn't doing enough on its own or with its industry partners to secure its sensitive information.

Thursday, December 10, 2009

Progress in securing health records, but still a long way to go

An excellent article this week in InformationWeek by Mitch Wagner provides a nice overview of the privacy and security issues related to widespread deployment of electronic medical records, noting the recent progress made in these areas and highlighting key challenges that remain. Some of the new privacy rules put into place with the HITECH Act portion of the American Recovery and Reinvestment Act — such as the application of HIPAA enforcement and penalties against individuals, rather than just organizations — are accurately characterized as incremental but still important steps toward the point where all personal health information is protected by appropriate policies and safeguards, including technical controls to make sure those policies are actually followed. Similar steps to strengthen rules such as accounting of disclosures (basically keeping track of all the times and circumstances in which an individual's health record is accessed) and to ramp up the enforcement mechanisms available to the government agencies responsible for investigating violations of the laws should, in the aggregate, help consumers feel at least a little more comfortable about having their personal medical data stored electronically. With the additional attention now being placed on collecting and honoring patient preferences for information disclosure — in the form of explicit consent — it appears that the people responsible for working to overcome these challenges do understand the nature and extent of the problem, and continue to solicit input and collaboration from all sides of the issues. It remains to be seen whether the privacy and security concerns can be mitigated sufficiently to allow the rollout of electronic health records to proceed on the timetable set by the current administration.

A follow-up article by Wagner addresses many of the same issues, but provides more perspective on privacy concerns, especially the opinions of some privacy advocates that the privacy measures to date (even the enhanced ones in the HITECH Act) just don't go far enough. The health IT privacy debate provides an interesting contrast to the similar but differently focused conversations about societal expectations of privacy sparked by Facebook's recent change in privacy policy.

Wednesday, December 2, 2009

Sometimes a breach is data theft, sometimes it's business as usual

Among the latest unauthorized disclosures of personal information making headlines is the admission last week by T-Mobile that thousands of its British customers had essentially become pawns in a "black market" for mobile service subscriber information sold to T-Mobile competitors. It seems that one or more T-Mobile employees sold lists of subscribers nearing the end of their contracts to other mobile service providers; the customers were then contacted by salesmen for the competing carriers, who tried to get them to switch providers. This case raises a couple of interesting ideas in the debate over the protection of personal information.

While it appears clear from statements by T-Mobile and U.K. authorities that the incident described represents data theft from T-Mobile and is therefore illegal, without the key element of rogue employees misusing corporate data assets for their own gain, the nature of the data sale by itself would not necessarily violate current privacy laws, particularly those in the U.S. that are generally less stringent than data protection regulations in the European Community. The data disclosed — name, mobile number, and contract expiration dates — certainly comprises personally identifiable information (PII) under just about any current definition of the term. The specific data fields in question, however, are not ones usually characterized as "sensitive" in domain or regulatory contexts such as financial services, health care, education, or public records, although most people do treat mobile telephone numbers as more private or sensitive than landline numbers, in part because mobile numbers are not generally available through public directories.

If the sale of the customer data had taken place above-board, conducted by authorized T-Mobile personnel (for instance, to an affiliated third party such as a mobile handset vendor), it's not at all clear that such a disclosure would violate any American privacy laws (British privacy laws, like those generally applicable in the E.U., tend to require customer consent or "opt-in" before any secondary use or additional processing of personal information, even by the company that collected it). Take a look at the privacy policy of just about any large consumer bank or retailer and you will see language asserting a right to share personal customer information with third parties. For example, the Citibank privacy policy for Citi.com states, "Information collected by a Citigroup affiliate through Citi.com may be shared with several types of entities, including other affiliates among the family of companies controlled by Citigroup Inc, as well as non-affiliated third parties, such as financial services providers and non-financial organizations, such as companies engaged in direct marketing." So according to such a privacy policy, and in full compliance with FTC rules, an American company could do what T-Mobile's thieving employees did without violating any laws or regulations. If you're thinking, "that doesn't seem right," then you are seeing the implications of the sectoral approach to data privacy in the United States, in strong contrast to the approaches favored in other parts of the world, particularly under the European Union's Directive 95/46/EC.

Policies without enforcement simply aren't enough to guard against internal threats

Two recent studies of financial sector employees, sponsored by security vendors Cyber-Ark and Actimize and reported last week by Tim Wilson of InformationWeek, indicate that employees are ready and willing to steal information from their employers, even though they know such actions violate laws as well as company policies. Taken together with some findings from the Computer Security Institute's 2009 Computer Crime & Security Survey (results were presented yesterday in a CSI webcast, and will be released publicly on December 8 from www.gocsi.com), it's clear that even when security awareness is made a priority, organizations need more than rules and policies or even laws to protect themselves from insiders.

Interesting results from the survey include a rise in malware infections and disruptive intrusions such as denial of service attacks, at least in terms of the proportion of respondents experiencing such incidents. Based on information about organizational responses to security incidents, the primary approach to security among surveyed organizations continues to be reactive, with security awareness a weak spot. As often highlighted in the context of laptop thefts and other high-profile data breaches, unauthorized disclosures are often the result of employees knowingly violating existing security policies, whether for convenience, through negligence, or for malicious purposes. Even the best-intentioned employees may need the reinforcement of technical measures to enforce what's stated in policies or regulations. When companies are given credible information indicating employees will disregard the rules if and when it suits them, the need for data loss prevention and similar safeguards could not be clearer.

Structural reorganization announced for government health IT oversight

In a Federal Register notice effective December 1, the Office of the National Coordinator for Health IT announced a reorganization of its office. Among the most notable changes is the decision to create the position of Chief Privacy Officer and a supporting office within ONC to address "privacy, security, and data stewardship of electronic health information" and to serve as a point of contact and coordination with privacy officials in domestic and international government agencies at all levels. Privacy has long been an emphasis along with information security for the National Coordinator, with ONC having released its Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information a year ago. This latest development is an explicit acknowledgment of the central position occupied by privacy and the protection of individually identifiable health information in the pursuit of health IT adoption and interoperability.

Another structural shift is the creation of an Office of Economic Modeling and Analysis to apply formal economic perspectives to aspects of the health care system to help justify investment in health information technology and assess different health information technology strategies and policies intended to promote health IT adoption and use. The idea appears to be to provide more quantitative information about ways to improve health care quality and efficiency, a side benefit of which might be to help identify operational business models and value (revenue) propositions to encourage adoption of health IT, not just incentives to start using the technology.

From a more practical standpoint, the reorganization should align ONC resources to help it better manage and oversee the significant funding flowing through ONC due to the provisions of the American Recovery and Reinvestment Act. Specific offices within ONC will also take responsibility for scientific research, grant programs, and new health IT developments, and for program oversight, internal office management, and budgeting. The office's operations and ONC's continuing work on health IT standards have been elevated to the responsibility of a Deputy National Coordinator.