
Friday, January 29, 2010

A sampling of privacy news from Data Privacy Day

Whether by coincidence or by design, with January 28 being Data Privacy Day in the U.S. and Data Protection Day in Europe, there was a lot going on in information privacy. The single most consistent focus of concern appears to be Facebook, a company mentioned by name by both European Commissioners and FTC Commissioners, and against which Canada's privacy commissioner launched a new investigation this week.
  • During her keynote speech at Data Protection Day, European Commissioner for Information Society and Media Viviane Reding called for a new approach of "privacy by design," in which organizations work "to improve the protection of privacy and personal data from the very beginning of the development cycle." Reding also indicated that the European Commission plans to move forward with a proposal to reform the Data Protection Directive, to strengthen data protection laws and make them more consistent across Europe.
  • The European Commission also formally initiated legal action against Italy for violating EU privacy rules, specifically the Directive on Privacy and Electronic Communications. The action stems from the practice of using information taken from public directories to create telemarketing databases, without the consent of the individuals whose information is being aggregated and used for this purpose.
  • The Federal Trade Commission held the second of its three planned "Exploring Privacy" roundtable discussions, this one in California, where FTC Commissioner Pamela Jones Harbour also advocated privacy by design, particularly in the context of online privacy protections. Panelists at the event raised concerns about the apparent ease with which online data can be matched, so that supposedly anonymous data can be accurately associated with specific individuals.
  • As reported by eSecurity Planet, U.S. Representative Rick Boucher, chairman of the House Subcommittee on Communications, Technology and the Internet, said publicly at a Congressional Internet Caucus event on Wednesday that he is nearly finished drafting new online privacy legislation, to address data collection and consumer protection practices, particularly for online marketing such as targeted advertising.
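The re-identification concern the FTC panelists raised is easy to demonstrate: a handful of quasi-identifiers (ZIP code, birth date, sex) shared between a "de-identified" dataset and a public one can link records back to named individuals. A minimal sketch in Python, with entirely invented data:

```python
# Sketch: re-identifying "anonymous" records by joining on quasi-identifiers.
# All names, records, and values below are invented for illustration.

anonymous_health_records = [
    {"zip": "20037", "dob": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "20742", "dob": "1981-02-09", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "20037", "dob": "1954-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "20742", "dob": "1981-02-09", "sex": "M"},
]

def reidentify(anon_rows, identified_rows, keys=("zip", "dob", "sex")):
    """Link each 'anonymous' row to any identified rows sharing the quasi-identifiers."""
    index = {}
    for person in identified_rows:
        index.setdefault(tuple(person[k] for k in keys), []).append(person["name"])
    return [
        (row["diagnosis"], index.get(tuple(row[k] for k in keys), []))
        for row in anon_rows
    ]

for diagnosis, names in reidentify(anonymous_health_records, public_voter_roll):
    print(diagnosis, "->", names)
```

No sophisticated tooling is required; a simple join on three attributes suffices, which is precisely why the panelists questioned how "anonymous" such data really is.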

Excitement plus a little security trepidation about the iPad

Just a day after Steve Jobs presented the highly anticipated iPad at an event in San Francisco, security analysts are raising concerns about malware, browser and other application attacks, phishing, and other threats already seen against the iPhone. The full technical details of the new tablet are not yet well known, so some of the likely issues raised are based on speculation and assumptions, but the popularity of the iPhone has already made it something of an exception to the usual rule of thumb that there are far fewer security threats for Apple computers than there are for PCs. The iPad shares many technical characteristics with the iPhone (albeit with a new processor designed just for the iPad), and will presumably run many of the same sorts of applications currently sold through Apple's App Store. The potential for exploitation using the sorts of attacks often mentioned is certainly not limited to Apple's products, but devices like the iPhone and iPad appeal to Mac enthusiasts, non-Mac techies, and far less technical segments of the general population alike. The prevalence of security threats has historically correlated with the popularity of the devices or platforms targeted for exploitation, so the types of attacks launched against the iPhone are driven in part by the product's popularity, as well as by exploitable vulnerabilities. To the extent that the iPad adopts the security features seen in the iPhone (or shares its omissions, such as the lack of remote disablement), the same sorts of attacks are likely to hit the iPad. This is of course all speculation pending the release of more complete technical details of the product, particularly in the security arena, but whatever recommendations security analysts or vendors might make, so far it seems unlikely that the iPad will be targeted at enterprise use any more than the iPhone has been.

Thursday, January 28, 2010

Federal courts again dismiss claims related to NSA warrantless wiretapping

In the latest setback for the Electronic Frontier Foundation and its efforts to hold the National Security Agency accountable for its mass surveillance of phone calls and emails, a federal district court dismissed with prejudice two actions filed by the EFF on behalf of American citizens. The U.S. District Court for the Northern District of California ruled that the plaintiffs' claims were not "sufficiently particular to those plaintiffs or to a distinct group to which those plaintiffs belong," but instead constitute a "generalized grievance shared in substantially equal measure by all or a large class of citizens." Writing for the court, Chief Judge Vaughn Walker cited as precedent a finding in Seegers v. Gonzales: "injuries that are shared and generalized — such as the right to have the government act in accordance with the law — are not sufficient to support standing." This essentially means that the courts have found that because the government is monitoring everyone (the case uses the participation of AT&T to extrapolate to all major telecommunications providers; the EFF has focused its legal action on AT&T among telcos because of documentation leaked by a former AT&T employee that ostensibly shows AT&T participated in the illegal wiretapping program), the surveillance can't be prevented by the courts, even if it is illegal.

This is the second legal defeat for the EFF in the past year. Last June, the same judge in the same federal district court ruled in favor of AT&T as a defendant in Hepting v. AT&T, in which the EFF sued the telco giant for cooperating in the NSA's warrantless wiretapping program. Initially filed in 2006, the case had made it to an appeal before the 9th Circuit in 2007 when the government, in enacting the FISA Amendments Act of 2008, granted retroactive immunity to telecommunications companies that had violated the Foreign Intelligence Surveillance Act (FISA). It's hard enough to make sense of a legal construct that at once forbids warrantless wiretapping and forgives it as long as it is conducted broadly enough; it's particularly hard to fathom at a time when the recent China-based hacking incidents have renewed attention to government-sponsored monitoring of personal communications.

Wednesday, January 27, 2010

Privacy law in the 21st century: due for an update?

In honor of Data Protection Day (tomorrow, January 28) and its "Think Privacy" theme, let's turn our attention to a few current efforts to bring legislated privacy requirements into the 21st century. In Europe, privacy watchers are looking to Viviane Reding, the European Commission's commissioner for information society and media, who has stated publicly that protection rights for personal data are among her top priorities. Now entering her third term in office, Reding has been appointed the commissioner for Justice, Fundamental Rights, and Citizenship for the EC's 2010 session, and unnamed officials (the European press likes to use those unnamed sources too) purportedly close to Reding have suggested that one area of focus will be a review of the EU's Data Protection Directive, which among other provisions constrains the collection and use of personal data by EU member countries (the broad general term in the EU law is "processing," which encompasses more than two dozen operations in the official definition). The Data Protection Directive was enacted 15 years ago, so it would seem that at least some European commissioners think it might be due for revision, or at least a close look to see whether it covers modern information usage.

In the United States, one of the central privacy laws is the Privacy Act of 1974, which constrains U.S. federal government activities related to data collection, use, and disclosure. The Privacy Act has been amended since its enactment over 35 years ago, typically in cases where the advance of technology creates gaps in the law that Congress needs to fill, as with the Computer Matching and Privacy Protection Act, which in 1988 amended (and became part of) the Privacy Act to constrain the use of personal data in automated matching programs. In recent years both government and private sector bodies have called for revisions to the Privacy Act, due both to significant changes in the information technology used to collect and process personal information and to evolving threats to privacy enabled by technology (identity theft, for example, has existed for many years but did not offer thieves the opportunity for substantial financial gain before the advent of automated banking technology). Last May, the Information Security and Privacy Advisory Board released a report including a recommended framework for federal privacy policy in the 21st century. Also in process in both houses of Congress are bills that, among other provisions, would strengthen data protection standards in areas such as breach disclosure requirements and consumer empowerment. There are of course many important issues competing for government attention, but as the continued pace of technical change outstrips technical, policy, and regulatory governance mechanisms, it becomes more critical that the legal framework is adapted accordingly.

If you build it, will anyone come?

In all the discussion about health information exchange, electronic health records, and establishing trust among public and private sector organizations, what's often lost is the voice of the consumer. The goal of widespread EHR adoption is usually expressed not in terms of the number or percentage of health providers, insurance plans, or government agencies that will be using the systems, but instead in terms of what proportion of health records are stored in electronic form, with a vision articulated in January 2009 that all U.S. residents would have electronic health records by 2014 (the same deadline President Bush set with his 2004 executive order seeking the same goal of widespread EHR adoption). Significant federal funding has been allocated through the American Recovery and Reinvestment Act to provide financial incentives for health care providers to implement and use EHR technology, but adoption rates in the United States, while improving, are still well short of a majority, and full penetration within the next four years seems a very ambitious objective. One factor contributing to the lack of progress on EHRs may be patients themselves: the results of a study by the Ponemon Institute released this week suggest that few Americans have sufficient trust in either the federal government or industry to store and access their personal health data. The Office of the National Coordinator within HHS has for a couple of years been focusing on ways to capture, manage, and honor consumer preferences about disclosing personal health information, but to the extent this survey reflects public sentiment, the unwillingness of individual consumers to allow their health information to be shared may present just as significant a barrier to realizing the health information exchange vision as any of the organizational-level issues.
Overcoming this resistance will require significant consumer education and outreach to be sure, but the effort could be facilitated by doing more to demonstrate that all appropriate measures are being taken to ensure the privacy and security of personal health information.

Monday, January 25, 2010

Side-effect of the "instant information" world: frequency trumps accuracy

In a coincidental reinforcement of a point we raised recently in a different context about the difficulty of establishing the credibility of information found on the Internet, a reliance on unsubstantiated claims and poorly verified (or unverified) information seems to be at the heart of some of the recent criticisms of the intelligence community's failure to "connect the dots" and prevent the would-be Christmas Day airline bomber from boarding the flight from Amsterdam to the U.S. In response to a detailed listing of "articulable facts" about the bombing attempt proposed by Bruce McQuain to refute testimony before Congress by FBI Terrorist Screening Center Director Timothy Healy that there was insufficient factual information to provide "reasonable suspicion" about the underwear bomber, Kevin Drum of Mother Jones offers a point-by-point response backed up by news reports and other evidence. (The point-counterpoint came to our attention via security expert Bruce Schneier.) Perhaps the most interesting of the points incorrectly asserted by McQuain (and many others) is the claim that Abdulmutallab was traveling on a one-way ticket (which therefore should have served as a red flag). This claim, first asserted on the day of the attack and widely repeated by just about every reputable news source covering the story, turns out not to be true; despite corrections made by the New York Times, MSNBC, and others, the false claim continues to appear in published reports.

So the message here is simple: when you read a claim, look for the evidence, and if there isn't any, it's a mistake to rely on the information as factual, no matter how logical it sounds or how reputable the source is considered to be. In theory this should be easy to avoid for those posting information online, because adding hyperlinks to reference sources is a simple matter. The more information gets passed around, however, the more likely it is to lose the traceability to sources that helps determine its validity. For a recent example we need look no further than Bruce Schneier once again. In a recent essay on the Google-China hacking incident, Schneier refers to reports that China exploited long-existing "back doors installed to facilitate government eavesdropping" (the "government" in this statement is the American one, not the Chinese), and the article embeds links to more than one published story, as well as some of his own previous writing, to provide evidence for the assertion. When CNN.com picked up the piece and ran it, none of the supporting evidence (or more specifically, the links to it) was included with the story. So a reader on CNN.com would see a strong but unsubstantiated assertion that the attacks on Google were actually facilitated by a legally required back channel maintained by Google to allow access by law enforcement authorities. The existence and exploitation of the "internal intercept" back channel is attributed by Macworld only to an anonymous "source familiar with the situation" — a familiar phrase in the press, but one not particularly useful in assessing the credibility of a claim.

Sunday, January 24, 2010

Microsoft pushing hard on privacy in the cloud

Whether due to clever marketing objectives or to its stated commitment to making privacy a core consideration for its products and services, there's no denying Microsoft is emphasizing privacy across multiple dimensions. Taking center stage this week was a recommendation to Congress (articulated in a speech given January 20 at a Brookings Institution policy forum on cloud computing) that new legislation is needed on cloud computing security and privacy. Microsoft went so far as to propose a name — the Cloud Computing Advancement Act — for the new legal framework the company says is needed, as well as to advocate revisions to existing privacy legislation including the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act. The speech offered justification for explicit cloud computing regulation in the form of a survey (commissioned by Microsoft) indicating that a large majority of business leaders and consumers — even those enthusiastic about cloud computing's potential — are concerned about data security and privacy in the cloud. Microsoft is also recommending a "truth in cloud computing" provision that would mandate more explicit disclosures by cloud service providers about the security and privacy measures they have in place. Cloud computing is currently the primary area of emphasis in Microsoft's privacy advocacy directed at government officials and policymakers. Microsoft's efforts illustrate one way in which private sector vendors with a stake in cloud computing are moving ahead on privacy, while in contrast federal government efforts to date have largely focused on clarifying definitions of cloud computing services and examining ways to use those services securely. Whether or not Congress takes Microsoft's recommendations to heart, some additional direction to NIST to address privacy in the cloud might be reasonable.

Thursday, January 21, 2010

Reminder: not everything you read on the web is accurate

In a post a few days ago meant to highlight the recent attacks on Google and many other companies as a textbook example of the advanced persistent threat, we cited zero-day exploits in Microsoft and Adobe software programs (in addition to really well-crafted phishing attacks) as evidence of the complexity and sophistication of the attacks. Not 12 hours later we received a very polite (really) email from Adobe pointing out that security vendor iDefense had withdrawn its initial assertion that the attacks used PDF file payloads to exploit vulnerabilities in Adobe Reader, and asking that we edit the post. We did, primarily because the last thing we want to do in this forum is convey inaccurate information, and in this case the source of the information itself provided the retraction. However, in an article on the Google attacks in the January 11 issue of Information Week, writer Kelly Jackson Higgins quotes Mikko Hypponen of F-Secure, who claims that PDF files were sent to phishing attack victims, and that when these attachments were opened they used a zero-day exploit in Adobe Reader to install a Trojan horse on the victims' computers. F-Secure has also posted copies of subsequent phishing emails that use the attack incident itself as a subject to get recipients to open the malicious PDF attachment (the vulnerability in question has been around for about a month and was patched last week by Adobe).

The point of the post that originally referred to sources mentioning Adobe exploits was not to criticize the company or its products (many of which we at SecurityArchitecture.com use every day), and the point of this post is not to suggest that Adobe was right or wrong to object to their products being associated with the Chinese attacks (although our post was written several days after Adobe published its security bulletin on the vulnerability). What this situation highlights is how hard it is to make sense of potentially conflicting information on the Web, even when you leave bloggers and Twitterers out of the mix and look to reputable security vendors and media sources.

Trustworthiness (or more specifically, perceived trustworthiness) of information is a constant theme online, whether in the context of social media or electronic equivalents of conventional news media and personal communications. The level of personalization reported with the Chinese attacks is remarkable in this regard. Even with heightened security awareness and sensitivity to phishing, spyware, and malware attack attempts, it's not hard to imagine how these victims were compromised. These were not the shotgun-approach mass emails purporting to be from eBay or Bank of America; the attackers harvested names, contact information, and email addresses from individuals and organizations with which the victims were already familiar, and crafted fake emails using subjects and content personally relevant to the recipients. How many of us would think twice about opening a PDF attachment (not a .zip or an .exe or a .vbs, mind you) seemingly on a directly relevant topic and apparently coming from a known business or personal associate? Formal models exist to manage information flows between different levels of trust, most notably the Biba integrity model, adaptations of which are used in Microsoft Windows and Google Chrome as well as many other systems. Of course, formal integrity models like Biba basically say you shouldn't rely on any information where the trustworthiness of those who wrote it can't be confirmed (think Wikipedia). More practically, the fundamentals of security awareness tell us not to open files received from unknown or untrusted sources, but as the spear-phishing attacks demonstrate, that's not always as easy as it sounds.
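For readers unfamiliar with Biba, its two core rules for strict integrity — a subject may not read objects of lower integrity ("no read down") and may not write to objects of higher integrity ("no write up") — can be sketched in a few lines. The integrity levels and examples below are invented for illustration, not drawn from any real implementation:

```python
# Minimal sketch of the Biba strict-integrity rules. The level names and
# ordering here are illustrative assumptions, not any real system's labels.
LEVELS = {"untrusted": 0, "user": 1, "system": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple-integrity property ("no read down"):
    # read only from equal or higher integrity.
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-integrity property ("no write up"):
    # write only to equal or lower integrity.
    return LEVELS[object_level] <= LEVELS[subject_level]

# A high-integrity "system" process must not consume "untrusted" input
# (think of relying on an unverifiable Wikipedia edit):
assert not can_read("system", "untrusted")
# ...but an "untrusted" process may read published "system" data:
assert can_read("untrusted", "system")
# And an "untrusted" process must not modify "system" objects:
assert not can_write("untrusted", "system")
```

The spear-phishing victims were, in Biba terms, "reading down": accepting low-integrity input (a forged email and attachment) as if it carried the integrity of the trusted associate it impersonated.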

Not everything Google does is related to China dispute

Without questioning the severity or significance of the Chinese attacks on Google and other companies, the huge attention focused on this incident seems to be influencing all coverage of Google, whether or not the topic in question has anything to do with the attacks.

It still seems questionable whether Google's stated intention to stop censoring search results on Google.cn was really sparked by the recent attacks. Perhaps the hacks were just the straw that broke the camel's back, but as long as three years ago Google's executives had publicly questioned the wisdom of the company's decision to support the content censorship requirements demanded by the Chinese government when Google first entered the China market. The conventional wisdom on Google's decision to end its censorship program is generally positive, apparently even if it means Google will have to cease business operations in China. There is a vocal minority, however, suggesting that Google's decision is driven more by conventional business factors (market share, growth potential, etc.) than by moral or ethical principles.

The attacks have so dominated coverage of Google and the other affected companies over the last several days that, in reporting other significant actions or announcements by Google, many in the IT press can't seem to help drawing associations to the attacks even where none exist. A news summary distributed by IDG News Service yesterday is a good example: in the summary of an article about Google's plan to propose that the European Union's Article 29 committee create a security and privacy panel, the China attacks were mentioned as a driver for the proposal:
Google says that the recent hack of its Chinese operation shows why it needs to retain user search data and will this week call on the Article 29 Working Party to establish a privacy and security panel to encourage productive dialogue on the proper use and protection of such data, PCWorld reports. "You can't discuss privacy in a vacuum," said Google global privacy counsel Peter Fleischer. Google retains search users' full IP addresses for nine months. "We find it incomprehensible that a company would throw away useful data when holding it poses no privacy threat," Fleischer said.
The above version of the summary was included in the January 20 Daily Dashboard of the International Association of Privacy Professionals. The following day's edition included a note that IDG News Service had modified the story because, as IDG explained it, "Due to a misunderstanding with a source, the story posted linked Google's stance on retaining search data with unrelated attacks on its corporate infrastructure." It's hard to fault anyone in the trade press for having the China attacks on their minds whenever they hear "Google," but perhaps members of the media would do well to remember the maxim from statistics: correlation ≠ causation.

Wednesday, January 20, 2010

IronClad's "PC on a stick" could be a benefit or a threat

Defense contracting giant Lockheed Martin announced the general availability of its IronClad™ secure USB drive, a fully self-contained PC with operating system, applications, and data all within a flash drive form factor that presents the ultimate in portability. This is the latest innovative use of the IronKey secure USB device, which to date has been positioned in the market largely as a highly secure portable storage device. The IronClad "PC on a stick" is designed to let a mobile user plug in to any client computer and leverage the I/O and connectivity of the host while bypassing the host's hard drive. Lockheed suggests that this optimizes mobile connectivity by turning any borrowed PC, workstation, or computer kiosk into a secure personal computing platform. Because no access to the host hard drive is needed, the company also claims that no evidence of IronClad's use will be left behind.

To be clear, Lockheed does specify the minimum requirements necessary for IronClad to use a host computer, notably including a BIOS that supports booting from USB, and presumably organizations that have implemented USB device blocking or port restrictions will not be at risk from IronClad users gaining unauthorized access. However, to the extent that USB drives already present a security risk as a mechanism for data theft, being able to carry a fully functioning PC on a flash drive (instead of just storage capacity) raises the bar substantially in terms of potentially needing to guard against the use of these devices. IronClad appears targeted at enterprise users as an alternative to some routine laptop uses, and includes remote device management and security administration functions such as remote destruction of flash drive contents. There is no reason to assume that an IronClad user would be any more able to gain unauthorized access to a network than someone with a laptop; access to a connected host computer is still required, so the only practical difference with IronClad is that you appropriate a USB port instead of borrowing a network cable. It is less readily apparent, however, whether an individual user of the device might be able to configure it to help gain access to "guest" network environments. The product marketing information most directly emphasizes using IronClad to turn a public or shared computer into a secure virtual desktop, but the company's emphasis on "leaving no trace" should sound attractive to attackers who value stealth. Presumably the device's built-in remote management features and its use of the host's physical network connectivity would also produce the sort of data stream that an IDS, event log monitor, or SIEM tool would be able to identify.
In this context the potential attempted unauthorized use of an IronClad device is no different as a security event than any conventional use of third-party client computers, and should be monitored and guarded against in the same way.

Tuesday, January 19, 2010

Healthcare providers missing the mark on risk assessments

As the comment period continues for the recently published proposed rules and draft certification criteria and standards associated with "meaningful use" of electronic health records, it appears that a large proportion of healthcare providers are not prepared to comply with the one meaningful use measure related to security and privacy that has been proposed as a requirement for 2011. In comments reported last week, members of the Health IT Policy Committee working with the Office of the National Coordinator at HHS cited a survey that found 48 percent of responding health providers do not perform risk assessments. The stage 1 (2011) measure associated with the health outcomes policy priority for privacy and security ("Ensure adequate privacy and security protections for personal health information") found in the Notice of Proposed Rulemaking says simply that EHR users must conduct or review a security risk analysis and implement security updates as necessary. This and other measures demonstrating meaningful use must be met by providers to receive incentive payments for adopting electronic health records.

At first glance the security and privacy bar appears to have been set quite low (there is a separate list of security functionality that a certified EHR system must be able to perform), especially since risk analysis is something covered entities like providers are already required to do under HIPAA rules. Among the requirements of the HIPAA Security Rule — which went into effect in 2003 and with which covered entities of all sizes have been required to comply for almost four years — is an Administrative Safeguard for risk analysis: "Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity" (45 CFR 164.308(a)(1)(ii)(A)). Without getting to the heart of why so many providers have yet to implement practices that meet this HIPAA requirement, practically speaking it means they now have another 18 months or so to get their security houses in order. There has been some discussion among ONC's advisory committees as to what specifically a compliant risk analysis must entail, and there is not as yet a corresponding standard to provide that specificity. From a government perspective, agencies likely have to look no further than NIST and its Special Publication 800-66, "An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule," which addresses conducting risk assessments using a process adapted from NIST's own risk assessment process documented in Special Publication 800-30, "Risk Management Guide for Information Technology Systems."
This guidance might be less comprehensive than the risk assessment practices found in other security management or IT governance frameworks, particularly as 800-66 constrains the risk assessment to consider only the risks of non-compliance with the general rules of the security standards for electronic protected health information (ePHI). Because the financial penalties for HIPAA Security Rule non-compliance are relatively minor, and criminal penalties are rarely sought, the most relevant risks for a private-sector covered entity might be business consequences like negative publicity or the loss of customers.

The point is, healthcare providers and other covered entities have access to many available approaches, methodologies, and process standards for risk analysis, yet to date many do not appear to be using any of them. To avoid falling short on meaningful use, these organizations need to start changing their security program operations to make routine risk analysis an integral component. Using the HIPAA Security Rule requirements as a precedent, the specifics of what a risk analysis must include may not be enumerated in exhaustive detail, so following just about any accepted risk analysis standard (whether NIST, ISO/IEC 27005, COBIT, ITIL, or a comparable approach) has a good chance of being compliant.
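Whatever standard an organization chooses, the core of a qualitative risk analysis in the SP 800-30 style is a small, repeatable computation: rate each threat/vulnerability pair's likelihood and impact, then combine the two into a risk level that drives remediation priority. A simplified Python sketch follows; the findings listed and the multiplicative scoring are illustrative assumptions (SP 800-30 itself defines a lookup matrix rather than a simple product):

```python
# Sketch of a qualitative risk rating in the general style of NIST SP 800-30.
# The threat/vulnerability pairs and the scoring thresholds are invented
# examples for illustration, not values from the publication or the rule.
RATING = {"low": 1, "moderate": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine qualitative likelihood and impact ratings into a risk level."""
    score = RATING[likelihood] * RATING[impact]  # ranges from 1 to 9
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Hypothetical findings a provider's risk analysis might record:
findings = [
    ("lost unencrypted laptop holding ePHI", "moderate", "high"),
    ("weak passwords on EHR user accounts", "high", "moderate"),
    ("fire in the server room", "low", "high"),
]

for threat, likelihood, impact in findings:
    print(f"{threat}: {risk_level(likelihood, impact)} risk")
```

The exercise itself is not burdensome; the harder organizational work is cataloging the threats and vulnerabilities honestly and repeating the analysis routinely, which is exactly what the meaningful use measure asks providers to do.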

Sunday, January 17, 2010

Sophistication, severity of attacks on Google raise visibility of advanced persistent threat

The recently disclosed hacking attacks suffered by Google and many other companies are noteworthy not just for the high profile of the victims, but also for the sophistication of the attacks, which have been described as a combination of highly targeted phishing attempts coupled with exploits of software vulnerabilities in Microsoft Internet Explorer (to be clear, while Microsoft has acknowledged IE vulnerabilities were likely used, no confirmation exists that Adobe products were exploit vectors in the attack; iDefense originally asserted that Adobe Reader was used to effect the attack, but upon further investigation subsequently withdrew that claim). Even apart from the use of zero-day exploits, the specificity of the phishing messages and the recipients to whom they were addressed apparently greatly enhanced the attacks' success. The reported use of different malware payloads for different intended victims and the advance work of gathering specific recipient email address lists both differentiate these "spear-phishing" attacks from run-of-the-mill phishing attempts using mass distribution.

Some security experts have pointed to the Google-China incident as the most visible recent example of the "advanced persistent threat," in this case represented by whatever hacking capacity (whether explicitly government-sponsored or otherwise) was able to carry out the attacks. Taosecurity blogger Richard Bejtlich is among the leading online voices drawing attention to the problem of the advanced persistent threat, having noted in the past that even where this sort of threat is acknowledged, it is not always specifically identified or described with the same terminology. As described by security services and incident response product vendor Mandiant, the advanced persistent threat is characterized more by its "perseverance and resources" than by its use of special or unique attacks, requiring a commensurate level of sustained defensive and responsive activity from the organizations it targets. The attacks on Google show evidence of significant resources dedicated to preparing for and executing intrusions, and, perhaps more troubling, a level of creativity in crafting new and unique attacks that may make them even harder to defend against. Lastly, the key weaknesses exploited in the attacks on Google and others were not in the target organizations' network or systems infrastructure, but were instead human (user) and technical vulnerabilities exploited through ancillary attack vectors. The continued analysis of and response to this incident, including the State Department's announced intention to issue an official protest, suggests that these attacks have raised the bar on cybersecurity, likely for the foreseeable future. Only time will tell whether this results in permanent, tangible changes in cybersecurity tools, tactics, or approaches.

Friday, January 15, 2010

Chinese cyber attacks on Google and others may accelerate Congressional action on cybersecurity

One effect of Google's public disclosure of hacking attempts ascribed to the Chinese government appears to be a greater sense of urgency in Congress to enact new cybersecurity legislation. While a strong response from the administration may or may not be forthcoming, lawmakers who had already been working on security bills see the Google-China incident as only the latest in a long line of compelling reasons to act. Sen. Jay Rockefeller of West Virginia, who, along with Maine's Sen. Olympia Snowe, has co-sponsored the draft Cybersecurity Act of 2009, has indicated in public statements that he intends to prioritize getting the bill out of committee and under consideration by the full Senate. The bill, introduced as S. 773, includes a broad-ranging set of provisions for standardizing approaches and oversight for security controls, monitoring, vulnerability disclosure, threat assessment, and risk management. The bill as drafted would greatly expand the authority and role of the Department of Commerce, including but not limited to responsibilities for NIST not only to establish and promulgate cybersecurity standards, but also to enforce compliance.

China's not the only country that reads email

The following letter was written in response to an editorial published in the January 14 edition of the Washington Post:
Regarding the editorial "Google vs. China" in the January 14 edition of the Washington Post, the efforts of the Chinese government to "snoop on the private emails of its citizens," while certainly behavior worthy of being denounced, are fundamentally no different than the rights asserted by our own U.S. government to inspect the content of Internet traffic in the name of national security. The operational scope of the Einstein 3 program managed by the Department of Homeland Security is typically characterized to include both the technical ability to allow email and other Internet communications traffic to be read, and the authority to do so under the USA PATRIOT Act when the content of the communications relates to terrorism or to computer fraud and abuse. There is of course a world of difference between what we would consider "related to terrorism" and the electronic speech of human rights activists who were reported among the victims of the attacks on Google's email service. However, it is not always so easy to draw this distinction, especially when dealing with individuals whose identities exist online. I've little doubt that the Chinese might characterize pro-democracy advocates as potential threats to Chinese national security; the fact that they represent such a threat is one reason the United States objects to their censorship. The point is, the Post is not in the best position to be decrying Chinese state-sponsored snooping into email communications of private citizens, unless it wants to paint the U.S. government with the same brush.

Thursday, January 14, 2010

China, Google, privacy and security

With the widely reported attacks on Google and other companies doing business in China and Google's planned and threatened actions in response, opinions are coming fast and furious on all sides, although official statements from U.S. government officials have been a bit more tentative, at least until more explicit evidence is brought to light revealing the Chinese government's role in the attacks. With all the attention focused on the significance of the attacks and the potential economic ramifications of a possible Google pull-out from China, some other aspects of the whole situation seem to be getting overlooked. Herewith then are a few observations on some of the tangential elements of the story.

Google originally agreed to censor some of its search results in China as a condition of being allowed to operate in the country at all. With Google's Gmail service the reported target of the attacks — specifically the accounts of known Chinese human rights activists — the company characterized the attacks as more than just a security incident. The nature of Google's responses to the attacks has both political and practical drivers. Apparently prompted by the seriousness of the attack, Google now says if it continues to operate in China, it will only do so with uncensored results. It's not entirely clear if the Chinese might be able to put an intermediary filter between Google's servers and Chinese Internet users that would leave users with the same net result, but in any case Google says that if it can't run uncensored, it won't continue in China at all. No argument with the principle here, but it seems a little disingenuous for Google to say that some censorship (and widely suspected state-sponsored hacking) was fine, but now the Chinese have crossed the line, and Google just won't operate on their terms anymore.

Google went public with the attacks for several reasons, but to date hasn't shared a lot of technical details about the nature of the attack or the exploits that might have been attempted or succeeded, other than to declare there were no security breaches of Google itself, so the accounts were most likely compromised through the use of phishing or malware surreptitiously loaded onto client computers. Almost at the same time, Google made a change to Gmail's default security settings and now connections to Gmail are HTTPS by default — a security improvement over the previous approach of letting users enable this option, but having it off by default. Use of webmail without some sort of transport layer security, especially during login, makes compromising an individual email account incredibly easy for an attacker, so while there may be no indication that anyone sniffed one of the victimized account holder's credentials, it would have been a more credible declaration had this setting already been in place.
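The risk the HTTPS change mitigates is easy to illustrate. Everything in a plain-HTTP request, including the session cookie that identifies a logged-in webmail user, crosses the network in cleartext, so a passive eavesdropper can lift it directly from captured traffic. A minimal Python sketch of the idea (the request, host, and cookie value are hypothetical, not real Gmail traffic):

```python
def extract_session_cookie(raw_request):
    """Pull the Cookie header out of a cleartext HTTP request."""
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            return line.split(":", 1)[1].strip()
    return None

# What an eavesdropper on the same network might capture when a user
# reads webmail over plain HTTP (hypothetical example traffic).
captured = (
    "GET /mail/inbox HTTP/1.1\r\n"
    "Host: mail.example.com\r\n"
    "Cookie: SESSIONID=a1b2c3d4e5\r\n"
    "\r\n"
)

print(extract_session_cookie(captured))  # the attacker now holds the session
```

With HTTPS enabled by default, the same request is encrypted end to end, so there is nothing useful for a passive listener to extract.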

On the Chinese end, a seemingly landmark development in Chinese individual privacy rights has gone virtually unnoticed outside the legal community, in the form of the People's Republic of China Tort Liability Law, passed in late December to go into effect in July. As thoroughly yet succinctly summarized by privacy law experts Hunton & Williams LLP, the law includes a statement of a right to privacy, and establishes private rights of action for Chinese citizens to bring tort litigation against Internet service providers, medical institutions, employers, and other parties who mishandle personal information or otherwise infringe on privacy rights. Admittedly, it's easy to dismiss out-of-hand the notion that a single-party socialist regime long marked by suppression of fundamental human rights would recognize and respect personal privacy protections. Given the nature of the attacks on Google, however, it is ironic that this new law would ostensibly offer a legal remedy to the individuals whose accounts were hacked, if in fact the attackers could be accurately identified.

On a more general note, there's a cautionary lesson to be learned here about the perceived and actual security protections afforded to users of online communications services, whether webmail, social networking, or cloud computing services. There is a phrase repeated so often it has become a maxim in itself: there is no privacy without security. It is also argued that the reverse is true, especially in cases where "security" is understood to mean "confidentiality." Current discussions about moving into the cloud, for instance, focus first on what security measures can be used to help ensure confidentiality and integrity (and availability too, while we're at it) are maintained, but in an environment like China where privacy is not universally championed, focusing on better or more security measures can't solve the problem. The most favorable way to interpret Google's statements and actions about the China situation gives the company credit for understanding that.

Wednesday, January 13, 2010

Government emphasis on compliance drives another security acquisition

Enterprise security giant Symantec announced yesterday that it will acquire the privately held vulnerability assessment and security compliance vendor Gideon Technologies. While Gideon focuses on commercial markets such as financial services and health care as well as the public sector, Symantec's press release makes it clear that what it finds most attractive about Gideon's SecureFusion product is its capabilities to scan networks and assess compliance with key federal regulations, including FISMA and Federal Desktop Core Configuration (FDCC) standards, using the Security Content Automation Protocol (SCAP). Gideon has made support for federal standards compliance a priority, building in a variety of control standards from NIST and even aligning to the Consensus Audit Guidelines (CAG), which are not mandated but which have been embraced by many current and former government IT executives. SecureFusion appears to be a good fit with the rest of Symantec's security management and monitoring toolset, and the combined product offering should appeal to government agencies seeking to establish or enhance situational awareness.

This move by Symantec demonstrates once again the market influence the federal government has, in particular the way the federal emphasis on compliance-based security management continues to drive market opportunities for commercial security vendors. In much the same way as EMC's recent decision to acquire Archer Technologies, the clear and present need for federal agencies to procure and implement tools to assess and monitor compliance in an automated fashion seems to outweigh any potential move away from compliance-based security in favor of effectiveness-based alternatives. In Gideon's case, it's not a coincidence that its core commercial markets are the industries with the broadest and most complex set of regulations. Even with a steady stream of suggestions coming from Capitol Hill that major compliance-mandating regulations like FISMA, HIPAA, Sarbanes-Oxley, and the Privacy Act are in need of substantial revision, it seems safe to infer that Symantec's due diligence and market research on the Gideon acquisition must have left the company confident that regulatory assessment and compliance solutions will remain a lucrative market for the foreseeable future.

Tuesday, January 12, 2010

This year already looks like a big one for evolution on thinking about privacy

Only about a week into 2010 and already there are some very public indications that the current attention on information privacy, especially on the Internet, is likely to result in more visible changes in the way online companies, users, and the U.S. government think about privacy. Coincident with a question we posed a few days ago about whether heightened sensitivity about personal information disclosure on Facebook (as well as other sites and forms of social media) would result in changes in user behavior, Facebook CEO Mark Zuckerberg observed in an interview with Michael Arrington of TechCrunch that social norms about privacy and users' comfort level with disclosing and sharing more and more personal information online have shifted dramatically in the relatively short time since Facebook began. (The relevant question and answer are the second in the interview, starting about 2:30 into the video recording.) Zuckerberg used this evolution of social norms in part to justify the significant step recently undertaken by Facebook of changing the privacy policy and default information disclosure practices for all of its 350 million users. This line of explanation might seem disingenuous given the regular disputes over privacy that Facebook has had, both in the U.S. and internationally, but given the enormous popularity and continued growth of Facebook, you have to grant the company some credibility for producing products and services that attract a broad user base. It's possible of course that the continued heavy use of Facebook in spite of its history on privacy is an indication less of a societal shift in the desire for privacy protection brought about through the rise of social networking than a simple failure by many users to pay any attention to privacy policies, of Facebook or other online sites.

The disconnect between disclosure of privacy policies and practices and user awareness of those policies despite their conspicuous publication is a fundamental flaw in current rules under which online organizations must give notice to users about privacy policies and related practices such as information collection. The Federal Trade Commission, through its chairman Jon Leibowitz, has gone on the record suggesting that the current model of "advise and consent" — in which companies post their privacy policies and users who visit or conduct transactions online with those companies are considered to have given implied consent — isn't working. Leibowitz and the FTC Bureau of Consumer Protection's David Vladeck say they are looking at alternatives to the privacy policy disclosure practices, with an eye to coming up with options by this summer. One idea sure to get more detailed examination is a shift to an explicit opt-in model, as opposed to the opt-out approach that dominates privacy consent today in the United States. The FTC might want to look to practices in the European Union, which late last year moved to adopt a fully opt-in model on cookies. There is certainly a usability trade-off with strict opt-in requirements, as infrequent users of sites may not be interested in the additional time and effort required to read opt-in notices, and may instead choose not to proceed or answer affirmatively without knowing the terms to which they have agreed. Many e-commerce sites face this same sort of trade-off when determining whether user registration is required to complete an order transaction. In cases where users are willing to register with a site, it's hard to imagine that the extra step of opting in to data collection and usage practices will present too much of a burden, although there's some justified skepticism about how closely anyone will look at privacy policies and terms of use, even with opt-in.

Sunday, January 10, 2010

NHIN begins to look at user-level authentication

During the 2008 trial implementations process and subsequent limited production operation of the Nationwide Health Information Network (NHIN), health information exchange between two participating entities relies on authentication at the entity (that is, organization) level, rather than at the individual user level. For the trial implementations, participating organizations were issued X.509 certificates from a single, centralized certificate authority in a public key infrastructure supporting authentication, basic authorization (there is a presumption that any authenticated request is authorized to receive the information being requested), and non-repudiation of origin. One of the security gaps identified during the trial implementation process was the future need to extend authentication and authorization to individual users, rather than the organizations with which they are affiliated, potentially including hundreds of millions of citizens, should the current administration's vision for widespread adoption of electronic medical records and personal health records come to fruition. There are many technical and functional alternatives available that might be used to provide individual user authentication for health information exchange, but the only consensus seems to be that a solution relying on a single certificate issuer cannot scale to meet the need.

Last week, the NHIN workgroup of the Health IT Policy Committee met to hear testimony from public and private sector representatives on current activities on authentication and identity management, and to begin considering options for user-level authentication with the NHIN. As a federally led initiative, any NHIN authentication model must be consistent with appropriate government standards on electronic authentication, most importantly NIST Special Publication 800-63, which specifies a four-level e-authentication framework against which online systems must be assessed. Given the sensitivity of health record data, security evaluations to date have suggested the NHIN falls under E-Authentication Level 3, which calls for strong authentication and lays out specific requirements for identity proofing and subsequent authentication and authorization decisions. Any time the general public is considered part of the potential user base, e-authentication standards become complicated, as it is not uncommon for individuals to conduct online transactions infrequently, posing challenges related to credential issuance, maintenance, and retrieval, as well as cost and logistical considerations about software or hardware token distribution. Among the vendors most likely to have answers to these challenges is Anakam, whose two-factor authentication solution leverages existing personal devices such as mobile phones as an alternative to purpose-specific smart cards or other hard tokens, and who was an active participant in the NHIN trial implementation process. Regardless of the technical solutions ultimately chosen, the fact that attention has turned to user authentication for the NHIN is a noteworthy development in itself. There remain a lot of moving pieces relevant to any solution in this area, including in-process revisions to the e-authentication guidance (a topic for another day), so this will be an interesting process to watch as it evolves.
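Phone-based approaches like Anakam's typically deliver a one-time passcode to a device the user already carries. The vendor's actual implementation isn't public, but many such systems are built on the HMAC-based one-time password (HOTP) algorithm standardized in RFC 4226, sketched here in Python:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """Compute an RFC 4226 HMAC-based one-time password."""
    # HMAC-SHA-1 over the 8-byte big-endian counter value
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # prints "755224"
```

In a deployment, the server generates the code, sends it to the user's registered phone, and compares it against what the user types back, providing a second factor without distributing purpose-specific hardware tokens.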

Saturday, January 9, 2010

A few practical ideas for protecting privacy while computing

With all the recent talk about personal information disclosure and the threat of identity theft showing no signs of abating, it's useful to remember that there are a variety of free tools and routine practices that can help limit the amount of personal or potentially personally identifying information you disclose, especially information you may be revealing unintentionally. Protecting privacy in this vein covers two primary areas: computer clean-up and online privacy.

Computing best practices have long recommended regular maintenance of personal computers that includes removing old, unused, or fragmented files and removing traces of programs that may have been left behind when the programs were deleted, even when using the un-installation features included with those programs. These recommendations have largely been justified in terms of optimizing performance, particularly on Windows operating systems, because too much computer clutter can slow operations. More recently, recommendations of this sort have been cast as security and privacy measures, working to reduce the potential for identity theft and to protect users from computer forensic investigation tools. Some of the freely available tools often recommended for these clean-up activities, such as CCleaner and Eraser, fill a niche need for people looking to dispose of or donate old computers. The increasing frequency with which forensic scanning tools are used has provided another use case for these tools.
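The core idea behind secure-deletion tools like Eraser is that simply deleting a file leaves its bytes on disk; overwriting the contents before unlinking makes simple recovery much harder. A simplified Python sketch of that idea (real tools also deal with journaling filesystems, slack space, and SSD wear-leveling, which this does not):

```python
import os

def shred(path, passes=3):
    """Overwrite a file with random data several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random fill defeats simple undelete
            f.flush()
            os.fsync(f.fileno())       # force the overwrite out to disk
    os.remove(path)
```

Running `shred("old_tax_return.pdf")` would leave nothing recoverable by ordinary undelete utilities, which is the same goal the GUI tools accomplish at the scale of whole drives or browser caches.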

The most recent versions of Mozilla Firefox (since v3.5) and Internet Explorer (since v8.0) make it pretty easy to keep evidence of online behavior off client computers, essentially preventing the local storage of much of the information that a utility like CCleaner looks to remove. Removing traces from a computer is a much simpler matter than preventing the disclosure of potentially personally identifiable information, such as IP addresses, when users go online. In this arena most attention is focused on the use of web browsing proxies, which effectively enable anonymous browsing; plug-ins for Firefox, Internet Explorer, and Safari are available to add anonymous browsing functionality (generally via proxy) within the browser itself. There are many reasons users seek anonymity while browsing, but the justification for masking identity when surfing online has been strengthened by the increasingly frequent use of online behavior tracking, notably including storage and retention of browsing and search query history by major search vendors such as Google. Partly in response to this trend, anonymous search engines such as StartPage offer private Internet searching, promising specifically that no user IP addresses are logged.
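At the network level, anonymous-browsing plug-ins mostly do one thing: route requests through an intermediary so the destination site sees the proxy's IP address rather than the user's. A minimal Python sketch of the same mechanism (the proxy address below is a placeholder, not a real service):

```python
import urllib.request

# Placeholder address; substitute a proxy you actually run or trust.
PROXY = "http://127.0.0.1:8080"

def build_proxied_opener(proxy_url):
    """Return an opener that sends HTTP(S) requests via proxy_url."""
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)

opener = build_proxied_opener(PROXY)
# opener.open("http://example.com/")  # would reach the site from the proxy's IP
```

Worth remembering: the proxy operator sees all of the traffic, so the anonymity gained is only as good as the proxy's own logging and retention practices, the same concern raised above about search providers.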

While it is certainly helpful that so many tools and services are available to help maintain digital and online privacy, the overall message remains that the onus is on the user to take steps to limit disclosure of personal information.

Friday, January 8, 2010

Information sharing actions in the name of national security test international privacy laws

The Secure Flight program recently implemented under the authority of the Transportation Security Administration (TSA) is raising a number of privacy issues not just in the United States, but also in foreign countries whose privacy laws may run counter to the information sharing required by the program. Secure Flight requires air carriers to collect a variety of personal information about passengers in advance of travel, in order to facilitate the comparison of ticketed passengers to terror watch lists such as the no-fly list. It is intended both to reduce the number of false positives (that is, individuals mis-identified as being on a watch list, due to factors such as name similarities) and to improve the efficiency of the matching process, which ostensibly will help avoid false negatives such as the recent high-profile incident on Christmas Day in which a known person of interest was permitted to board a U.S.-bound Northwest Airlines flight and attempt to carry out an act of terrorism. The program pre-dates this latest incident by several months, and while no one has yet suggested that the Secure Flight program would have prevented the incident, the program is receiving a lot of attention due to the timeliness of its rollout.
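The false-positive problem stems from the loose matching that watch-list screening has to use: names are transliterated inconsistently, so exact comparison would miss real matches, while similarity-based comparison sweeps in unrelated travelers with similar names. A toy Python illustration (the watch-list entry and threshold are invented for the example):

```python
from difflib import SequenceMatcher

WATCH_LIST = ["Abdul Rahman al-Harbi"]  # invented entry for illustration

def similarity(a, b):
    """Crude string-similarity score between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def naive_screen(passenger, threshold=0.8):
    """Flag watch-list entries 'close enough' to the passenger's name."""
    return [entry for entry in WATCH_LIST
            if similarity(passenger, entry) >= threshold]

# A different traveler whose name merely resembles the entry gets flagged.
print(naive_screen("Abdul Rahman al Harbi"))
```

The additional data elements Secure Flight collects, such as date of birth, are what allow such near-miss collisions to be resolved before a traveler is wrongly denied boarding.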

One consequence of the Secure Flight program is the requirement for foreign air carriers to share passenger list data with the United States (currently this applies to flights landing in or taking off from the U.S., but is intended to include flights entering U.S. airspace, whether or not they have a termination point here). Carriers based in other countries have complained that sharing personal passenger information with the U.S. may be prohibited by non-U.S. national data privacy laws. For instance, while overflights from Canadian and Mexican points of termination are not currently subject to Secure Flight, a Canadian air carrier association is arguing that providing the data required under Secure Flight violates the Personal Information Protection and Electronic Documents Act (PIPEDA). This conflict between U.S. national security intentions and international privacy laws is not new; a similar program initiated in 2004 for sharing passenger name records between European Union countries and the U.S. required extensive negotiations in order to settle on a set of data elements acceptable to the European Union and its data protection provisions and to extend certain provisions of the U.S. Privacy Act (which explicitly applies only to U.S. citizens and permanent resident aliens) to non-U.S. passenger name record data. The specifics of personal data protection laws vary greatly among different countries, but in the case of those in the European Union, under OECD privacy guidelines for transborder flows of personal data and the 1995 Data Protection Directive (95/46/EC), countries are only allowed to send personal data to other countries with comparable data protection laws. With passenger name records, legal arguments continued for several years until a compromise was reached in 2007, but this agreement only covers personal data in passenger name records; sharing of personal data more broadly with the U.S. remains legally problematic for organizations in many foreign countries.

Continued focus on compliance rather than effectiveness is driving the market

In a story widely reported last Monday, enterprise software giant EMC Corp. announced its pending acquisition of the private company Archer Technologies, a vendor of IT governance and compliance solutions. EMC plans to make Archer part of its Security division, which itself was primarily created through EMC's acquisition of RSA in 2006. Among the most compelling aspects of this story is a statement by Art Coviello, president of EMC's security division (RSA), who explained RSA's market perspective as follows:
"Traditional security management focuses primarily on addressing technology issues, but our customers are telling us that their real challenges are in the areas of policy management, audit and compliance. You can't manage what you can't see. The Archer solution not only offers the visibility into risk and compliance that customers need, it brings stronger policy management capabilities to the RSA portfolio. The end result is customers are able to better manage their security programs and prove compliance across both physical and virtual infrastructures, and effectively communicate to the business."(emphasis added)
So to take the word of a leading security vendor, what customers say they need is help with compliance. To call this unfortunate greatly understates the issue: it seems that the consistent emphasis of legal and regulatory schemes on security compliance — rather than effectiveness — is driving the market in a direction exactly opposite of where it should be going. While both government and commercial sector security approaches have been slow to address the deficiencies of compliance-based security, more and more emphasis is starting to be (correctly) placed on continuous monitoring and event correlation, often in the name of achieving greater levels of situational awareness. In light of these trends, it is disheartening if not surprising to hear that those obligated to follow compliance-based security approaches apparently now prioritize demonstrating compliance and passing audits over enhancing security. Let's be crystal clear: being in compliance with a security scheme that doesn't measure overall security posture or security control effectiveness tells you nothing about how secure you are. Unless and until the regulatory requirements are revised toward controlling risk, mitigating threats, and testing security effectiveness, security programs are hung out to dry, with compliance having the greatest business visibility (at least until a major breach, outage, or other security incident occurs). Security managers have a hard enough time justifying security investment in economic terms; as long as compliance is the most tangible goal, compliance approaches will continue to take precedence over less emphasized but more significant efforts to actually improve operational security.

Will continuing concerns over Facebook privacy change user behavior?

The changes Facebook made to its privacy practices, in particular the new default settings and the additional personal information about users that is now made public, continue to draw a lot of attention, and not in a positive way. Among the latest news items is an article from Wired that describes how some technology-savvy web marketers are taking advantage of the new Facebook default privacy settings to harvest information about Facebook users and the people in their online friend networks. It's not entirely clear how many of Facebook's hundreds of millions of registered users have taken action to change their privacy settings since the new practices went into effect a month ago, but anyone who has not is exposing the majority of their profile information not just to users on their own friends lists, but to all the friends of their friends. Even for a user who restricts the visibility of their information from methods such as web searches, if an outsider knows the user's email address, that is enough to get to the core profile information that Facebook now treats as public.

Concern among Facebook users has apparently also resulted in a spike in activity to delete Facebook accounts, including through the use of third-party services like Seppukoo and Suicide Machine. Use of these services to remove Facebook accounts has reached sufficient levels to prompt Facebook to start trying to block access from these services (primarily by blocking IP addresses), although Facebook also reportedly sent a cease-and-desist letter to the creators of Seppukoo.com, claiming that third-party access to Facebook from Seppukoo violates Facebook's terms of service and may be prohibited by various computer use and intellectual property laws.

Looking through a sociological lens for a moment, what may be more interesting than the debate among Facebook, its users, and privacy advocates is the extent to which the heightened attention on user privacy will actually result in a shift in behavior among users. An academic researcher in the U.K. featured by the BBC this week argues that the decision by social networking users to publish more and more of their personal information online effectively reduces privacy for everyone, in part by diminishing expectations of privacy. The idea here is that from a societal perspective privacy norms are just that, norms, rather than the most or least restrictive interpretations, so when a greater proportion of people opt for looser interpretations of privacy, the societal norm shifts in that direction. This fairly straightforward idea touches on one of the hardest aspects of managing trust (online or otherwise), since there are few hard and fast rules about what does and doesn't constitute trustworthiness. Instead, trust from personal or organizational perspectives is highly subjective, making the establishment and maintenance of acceptable levels of trust an elusive goal.

Tuesday, January 5, 2010

HHS plans to test re-identification of "de-identified" health data

In a special notice posted yesterday on FedBizOpps, the HHS Office of the National Coordinator for Health IT is getting ready to put a contract out to fund research on re-identifying datasets that have been de-identified according to HIPAA Privacy Rule standards. As noted previously in this space, academic researchers working at Carnegie Mellon and at UT-Austin have already reported on efforts to successfully identify records that were ostensibly anonymized, although to be fair neither of those specific research examples was based on HIPAA de-identified data. What's most intriguing about this solicitation notice is that ONC has one of the leading experts on the subject, Latanya Sweeney, sitting on its Health IT Policy Committee. Sweeney's doctoral research included work with anonymized medical records which, she discovered, could be positively identified a majority of the time simply by correlating the medical records with other publicly available data sources that included the demographic information stripped out of the health data. The research to be funded by ONC will focus on data that has been de-identified according to current HIPAA standards, which basically require the removal of 18 specific identifiers and any other information in a health record that might otherwise uniquely identify the individual in question. Specifically:
  1. Names.
  2. All geographic subdivisions smaller than a state, including street address, city, county, precinct, ZIP Code, and their equivalent geographical codes, except for the initial three digits of a ZIP Code if, according to the current publicly available data from the Bureau of the Census: (1) The geographic unit formed by combining all ZIP Codes with the same three initial digits contains more than 20,000 people; and (2) The initial three digits of a ZIP Code for all such geographic units containing 20,000 or fewer people are changed to 000.
  3. All elements of dates (except year) for dates directly related to an individual, including birth date, admission date, discharge date, date of death; and all ages over 89 and all elements of dates (including year) indicative of such age, except that such ages and elements may be aggregated into a single category of age 90 or older.
  4. Telephone numbers.
  5. Facsimile numbers.
  6. Electronic mail addresses.
  7. Social security numbers.
  8. Medical record numbers.
  9. Health plan beneficiary numbers.
  10. Account numbers.
  11. Certificate/license numbers.
  12. Vehicle identifiers and serial numbers, including license plate numbers.
  13. Device identifiers and serial numbers.
  14. Web universal resource locators (URLs).
  15. Internet protocol (IP) address numbers.
  16. Biometric identifiers, including fingerprints and voiceprints.
  17. Full-face photographic images and any comparable images.
  18. Any other unique identifying number, characteristic, or code, unless otherwise permitted by the Privacy Rule for re-identification.
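In practice, the Safe Harbor method above amounts to a mechanical transformation of each record: drop the direct identifiers, truncate geography and dates, and cap reported ages. A minimal sketch in Python, using hypothetical field names and an illustrative (not authoritative) set of restricted three-digit ZIP prefixes:

```python
# Sketch of HIPAA Safe Harbor de-identification; field names are hypothetical.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "fax", "email", "ssn", "mrn",
    "health_plan_id", "account_no", "license_no", "vin", "device_id",
    "url", "ip", "biometric_id", "photo",
}
# Three-digit ZIP areas covering 20,000 or fewer people must become "000";
# this set is illustrative only -- the real list comes from Census data.
RESTRICTED_ZIP3 = {"036", "102", "203"}

def deidentify(record):
    # Remove the direct identifiers outright.
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Geography: keep only the first three digits of the ZIP code.
    if "zip" in out:
        zip3 = out["zip"][:3]
        out["zip"] = "000" if zip3 in RESTRICTED_ZIP3 else zip3
    # Dates: retain only the year (assumes ISO yyyy-mm-dd input).
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]
    # Ages over 89 collapse into a single "90 or older" category.
    if isinstance(out.get("age"), int) and out["age"] > 89:
        out["age"] = "90+"
    return out
```

Note that this handles only the enumerated identifiers; the catch-all 18th item (any other uniquely identifying information) still requires human judgment.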
It's not clear at this point what the outcome of such research might be, assuming some level of "success" in re-identifying health data. One mitigation might be an expansion of the list of fields that must be removed to effectively de-identify a record. A more significant response might be an acknowledgment that true anonymization of health data to the degree sought (and one could argue assumed) under current law and policy simply isn't possible without more extensive alteration of the original data.
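The kind of linkage attack Sweeney demonstrated works by joining the quasi-identifiers that survive de-identification against a public dataset, such as a voter roll, that carries names. A toy illustration (all records and field names here are hypothetical) of matching on ZIP prefix, birth year, and sex:

```python
# Toy linkage attack: join "de-identified" health records to a public
# roster on shared quasi-identifiers. All data here is made up.
KEYS = ("zip3", "birth_year", "sex")

def link(health_records, public_records):
    # Index the public dataset by its quasi-identifier combination.
    index = {}
    for p in public_records:
        index.setdefault(tuple(p[k] for k in KEYS), []).append(p["name"])
    matches = []
    for h in health_records:
        candidates = index.get(tuple(h[k] for k in KEYS), [])
        if len(candidates) == 1:  # a unique match re-identifies the record
            matches.append((candidates[0], h["diagnosis"]))
    return matches

health_records = [
    {"zip3": "100", "birth_year": "1965", "sex": "F", "diagnosis": "asthma"},
]
public_records = [
    {"name": "Alice Smith", "zip3": "100", "birth_year": "1965", "sex": "F"},
    {"name": "Bob Jones", "zip3": "100", "birth_year": "1972", "sex": "M"},
]
```

The attack succeeds whenever a quasi-identifier combination is unique in the public dataset, which is exactly why removing the 18 enumerated identifiers alone may not be enough.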

Friday, January 1, 2010

Looking ahead for 2010

We launched this blog a year ago today, as an adjunct to our SecurityArchitecture.com website. It took us a few months to hit our stride, but in the past few months we've not only become more consistent in getting our observations and opinions posted, but also identified some key security and privacy topics to keep track of, and established a few recurring themes as well. Most of these were not just timely during 2009, but are likely to remain areas of interest in the coming year and beyond, so if you return to this space during 2010, here are some of the things you're likely to see.
  • Continued attention and increasing pressure on the U.S. government to commit more resources to cybersecurity and, possibly, consolidation of information security oversight and budgetary authority within the executive branch.

  • More emphasis on securing data at rest, in transit, and in use, with relatively less emphasis on system and network security as environment boundaries become less and less well defined due to increased levels of information exchange, inter-organization integration and cooperation, and use of hosted services like cloud computing.

  • Movement in the direction of proactive security, instead of the reactive posture that dominates security programs in both private and public sector organizations today. With any luck this will manifest itself in less security-by-compliance and more testing and validation that implemented security measures are effective.

  • Without diminishing the importance of guarding against insider threats, a resurgence in intrusion detection and prevention, in conjunction with efforts to achieve greater situational awareness to combat increasingly sophisticated and persistent threat sources.

  • A steady stream of breaches and other incidents to highlight the importance of backing up appropriate security and privacy policies with the means to enforce them.

  • Creative approaches and new solutions proposed to address trust among connected entities, including areas such as claims-based identity management, federated identity approaches, stronger identification, authentication, and authorization assertion models, and means to negotiate, establish, maintain, and revoke trust among different entities with widely varying trust requirements in terms of regulations, standards, and risk tolerances.