On February 11, aerospace giant Boeing, leading a team including Northrop Grumman and Lockheed Martin working for the U.S. Missile Defense Agency, successfully completed the first air-to-air demonstration of the Airborne Laser Testbed (ALTB) by destroying a missile in flight. As seen in live video, the ALTB combines a high-powered chemical laser with sophisticated optics and advanced targeting and tracking systems, all carried in a specially modified 747, enabling it to lock on to a liquid-propelled missile in its boost phase and strike it with a beam powerful enough to incinerate it. As remarkable as this technical achievement is on its own, it also validates a concept envisioned more than 30 years ago, when the first chemical oxygen iodine laser was invented in 1977. Those of us of a certain generation may quickly recall similarities between this real-world demonstration and the plot of the 1985 movie Real Genius, a comedy starring Val Kilmer about a group of college students at a fictional high-tech institution who, despite various distractions, manage to build a multi-megawatt chemical laser of exactly the type used in the ALTB. In the movie, the students are unwittingly furnishing the components of an airborne laser ostensibly desired by the military to allow the vaporization of virtually any target from space. Such a chemical laser has also been envisioned for possible use in space-based missile defense systems. In the film the aircraft carrying the laser is a B-1 bomber instead of a 747, but the rest of the details are remarkably similar to the actual ALTB system. We note with some irony that in a previous test the ALTB, while in flight, successfully destroyed a ground-based target, a scenario virtually identical to the demonstration planned in the movie for the laser the students have built.
In retrospect, it appears the producers of the film should get some extra credit for the thoroughness of their research into the science portrayed in it.
It's not every day that you hear Facebook's most recent changes to its privacy practices referred to in strongly positive terms (at least by people who don't work for Facebook), but some leading advocates of more fine-grained control over privacy in the health information context point to Facebook as an example showing, in the words of World Privacy Forum founder Pam Dixon, "that we can in fact have granular control over sensitive data." One of the key aspects of health information privacy that remains under-addressed to date is the capture of, and adherence to, consumer preferences about the use and disclosure of their personal health information. Beyond the debates about exactly which uses of the data should require proactive consent from individuals, there has been concern over the functional and practical aspects of managing many different "consents" corresponding to different uses, contexts, and scenarios. Deborah Peel, a doctor and founder of the non-profit Patient Privacy Rights, characterizes Facebook as a "kind of consent management system," albeit one with controls that could stand improvement and that may not be fully suited to handle the complexity (or the robust identification and authentication requirements) involved in consent management for health information.
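To make the idea of "granular" consent a little more concrete, here is a minimal sketch of what a default-deny consent record for health data uses might look like. The category and purpose names are purely illustrative assumptions on our part; no real consent standard or Facebook API is implied, and a production system would also need the identification and authentication layers Peel alludes to.

```python
# A toy granular-consent record: each (data category, purpose) pair must be
# explicitly permitted before disclosure. Names below are illustrative only.
CONSENTS = {
    ("medications", "treatment"): True,
    ("medications", "research"): False,
    ("mental_health", "treatment"): False,  # patient opted out entirely
}

def may_disclose(category: str, purpose: str) -> bool:
    """Default-deny: disclose only where an affirmative consent is recorded."""
    return CONSENTS.get((category, purpose), False)

print(may_disclose("medications", "treatment"))  # True
print(may_disclose("medications", "marketing"))  # False: no consent recorded
```

The default-deny lookup is the key design choice here: a use or context the patient was never asked about is treated the same as an explicit refusal.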
During yesterday's hearing of the Senate Committee on Commerce, Science, and Transportation, a panel of security experts urged the government to do more to push public and private sector action on critical infrastructure protection and Internet security, although those testifying differed on the exact role the federal government should play in encouraging that action. James Lewis of the Center for Strategic and International Studies repeated before the Committee his argument that federal regulation is needed to achieve the levels of participation sought among private sector organizations. His testimony included an analogy, familiar to anyone who has heard Lewis speak publicly in recent months, likening the need for government regulation of cybersecurity to the historical regulatory action to promote safety in the automobile industry, and arguing that it is no longer feasible to rely on voluntary adoption of best practices and market forces. Michael McConnell, former director of national intelligence and currently with government contractor Booz Allen Hamilton, expressed similar views and concluded that private industry could no longer credibly advocate a hands-off role for government. Other panelists were more circumspect in their choice of words and recommendations, such as Oracle Chief Security Officer Mary Ann Davidson, whose prepared remarks largely expressed support for things that Congress is actively doing, such as increasing funding for education in information security skills and encouraging the design of software and technology products that are built to be more secure.
Admiral James Arden Barnett, Jr., the Director of the Public Safety and Homeland Security Bureau of the Federal Communications Commission (FCC), also sees a role for government intervention, and he focused his remarks on the possible role the FCC can play in critical infrastructure protection, including serving as the point of information on network outages and related issues collected from broadband service carriers. Scott Borg, head of the U.S. Cyber Consequences Unit (a non-profit research institute), while noting the problem of market failures resulting in under-addressed aspects of cybersecurity, warned that the technical landscape changes so quickly that there is no practical way for the government to keep up if it tries to impose standards. None of the positions expressed were inconsistent with the emphasis on strong public-private partnerships to advance cybersecurity advocated by Committee Chairman Sen. Jay Rockefeller, who with co-sponsor Sen. Olympia Snowe drafted a piece of legislation titled the Cybersecurity Act of 2009 (S.773) that would codify and strengthen federal oversight roles on security, including elevating the federal cybersecurity czar to a Cabinet-level position. There were few voices at this hearing representing industry arguments that financial incentives are a better alternative to regulation, or the concerns raised by privacy advocates and free market proponents. The pervasive theme in yesterday's hearing was a sense of urgency for the government to act, given the ongoing threat environment and the potential for a serious attack against U.S. critical infrastructure.
Government observers are well aware that there is a big difference between passing a provision in a piece of legislation, crafting the rules that implement the provision, and then putting those rules into effect. Where new requirements or regulatory responsibilities are placed on organizations, it is also fairly common for the effective dates of new rules to be delayed if it's clear the entities subject to the regulation aren't ready to comply. Familiar examples of such delays and compliance deadline extensions include those for small businesses subject to Sarbanes-Oxley and, more recently, the multiple delays in the deadline for personal information protection requirements in Massachusetts' 201 CMR 17. With these precedents, it is perhaps unsurprising that personnel from the HHS Office for Civil Rights (OCR) have indicated that OCR will not, for the time being, begin enforcing the new security and privacy requirements in the HITECH Act that apply to business associates. With these rules (essentially a set of strengthened HIPAA privacy and security requirements that apply to a broader set of health industry participants and organizations), it seems the delay is warranted not just by the apparent lack of readiness of the organizations covered by the rules, but also by OCR's uncertainty regarding the most reasonable and consistent approach to take on HIPAA enforcement. HITECH reset many of the standards and expectations for monitoring and auditing compliance, and for investigating violations.
On a tangential note, this situation highlights the difficulty of following implementation timelines dictated in legislation, which are often set without any extensive consideration of the feasibility of meeting them. So far, HHS has done a pretty good job of issuing regulations and promulgating standards (at least in draft form) on or ahead of the schedule contained in the HITECH Act. The timing of the announcement last week that Joy Pritts had joined ONC as its Chief Privacy Officer was also dictated by HITECH (the law says the appointment had to be made "not later than 12 months after the date of enactment"). It should be noted that the rules under HITECH are officially in effect, so the only delay is in their enforcement. To some this might seem a trivial distinction, but historically HIPAA enforcement has relied a great deal on voluntary compliance, so the fact that business associates shouldn't expect an auditor visit right away shouldn't divert these organizations' attention from putting the appropriate processes, practices, and technologies in place to comply with the law.
With a lawsuit filed in federal court last week, school officials in Lower Merion, Pennsylvania are on the defensive over the alleged illegal use of remotely activated webcams in laptop computers issued to students. It seems the MacBooks include software that allows administrators to turn on the webcam to help recover a laptop should it become lost or stolen; the security feature has been used several dozen times in such situations, apparently without raising any objections from students or their parents. In the case that prompted the lawsuit, however, a Harriton High School student was accused of engaging in "improper behavior" after school administrators recorded and viewed images of the student putting small objects in his mouth (the school said they were drugs; the student says they were candy). Despite using the photographic "evidence" to support its claim against the student, the school district maintains that it would never use the remote webcam activation for any purpose other than recovery of a lost or stolen laptop. The Lower Merion district superintendent went so far as to claim, "The district has not used the tracking feature or webcam for any other purpose or in any other manner whatsoever." He did not address how a Harriton assistant principal came to be in possession of images from the accused student's laptop webcam, since there was no suspicion that the laptop was missing. There doesn't appear to be any claim of probable cause (not that a school official is legally justified in determining probable cause) with respect to the student's alleged behavior; instead the claim is based on visual observations made using the webcam.
The most thorough (the term "thorough" doesn't quite do justice to it) accounting of the technical tools involved and the actions and opinions of school network technicians comes from Intrepidus consultants Stryde Hax and Aaron Rhodes in a lengthy blog post.
With the attention now focused on the situation, it is becoming clear that beyond the alleged practice of remotely monitoring students in their own homes, which violates a number of federal laws, the school district appears to have acted inappropriately from the outset by not informing students or parents that the webcams in the laptops could be activated remotely. Even if it had provided notification and obtained consent for the explicit purpose of remote activation to aid in recovery of lost or stolen computers, the apparent use of the webcam for routine monitoring would still be illegal. Many state and federal laws covering monitoring of employee behavior in the workplace, such as the Electronic Communications Privacy Act (ECPA), require notification and consent prior to monitoring; the facts that this monitoring took place in private homes and that minors were surveilled add a host of other legal and regulatory protections that the school district appears to have ignored. In addition to ECPA, the lawsuit claims violations of federal laws including the Computer Fraud and Abuse Act, the Stored Communications Act, and a section of the Civil Rights Act; the Pennsylvania Wiretapping and Electronic Surveillance Act and Pennsylvania common law; and the Fourth Amendment.
HHS announced on Wednesday that Joy Pritts has been named the Chief Privacy Officer for the Office of the National Coordinator for Health Information Technology. Pritts, a lawyer and Georgetown University professor specializing in health law and policy, has done research focused on privacy of health information, including issues related to patient access and consent and healthcare organizational responsibilities for protecting data contained in medical records. Her appointment to a position required to be filled by this week under a provision contained in the HITECH Act (P.L. 111-5 §3001(e)) comes at an opportune time, given the need for ONC to provide more explicit guidance on how healthcare organizations and other entities addressed in the HITECH Act can adopt appropriate practices to be able to follow fair information practices and legal obligations for personal health information. Recent discussions among members of the Health IT Policy Committee workgroups considering the new meaningful use rules, measures, and criteria have highlighted the absence of any specific criteria for privacy. This leaves healthcare organizations in essentially the same place they were before — required to comply with HIPAA Privacy Rule requirements and other relevant privacy laws, but without any new or specific obligations to ensure that patient preferences on disclosure and consent for use are captured, maintained, and honored.
Most of the major information sharing initiatives under development today are designed with integration patterns that assume that most data will be accessed from the authoritative systems or organizations where it resides, rather than copied to some sort of centralized data repository. Both federated and distributed integration models have the benefit of leaving data owners in charge of their own data and able to control (through authentication and authorization methods) what information is shared with other organizations or what requests for information receive a response. Also, without a central operational data store, there is less need to establish, manage, and oversee infrastructure and services to support information exchanges using these patterns. For this and other reasons, high-profile information sharing initiatives such as the Nationwide Health Information Network (NHIN) are working to implement appropriate technical and policy measures to ensure the security of health information exchanges between authenticated participants using the Internet, but these security measures are entirely focused on protecting confidentiality (including safeguarding privacy) and data integrity. In an operational vision where health care is supported by real-time requests for patient record data potentially stored in many disparate systems, ensuring the accuracy and completeness of the information necessitates paying attention to matters of availability as well. A lot of attention in the health IT community recently has focused on health care organizational security practices such as risk assessments — required under the HIPAA security rule and specified as a measure of "meaningful use" for health care providers seeking EHR incentives available through the provisions of the HITECH Act — and the perhaps surprising proportion of covered entities that do not conduct such assessments on a regular basis. 
Similarly, as the HIPAA security and privacy requirements strengthened by the HITECH Act took effect this week, many healthcare organizations remain insufficiently prepared to comply with them. A recently released report from IT analyst firm Forrester Research on server availability highlights the commonplace occurrence of system outages among healthcare organizations and points to the corresponding absence of reliably high availability of these systems as a key vulnerability for successful use of health IT. It is logical bordering on obvious that any integrated system for information exchange and retrieval that accesses data from its source is only reliable if all the sources are available to respond when queried. This inherent weakness in a distributed integration model is only exacerbated in the case of health information exchange using the NHIN, because the core network infrastructure is the public Internet. Forrester concludes that cost is the primary barrier to providing higher-availability health IT systems, and the absence of anything about EHR system availability (in the sense of system uptime and accessibility) from the meaningful use measures and criteria developed for the EHR incentive program is further indication of how little attention is paid to this element of the "CIA triad" that forms the core of contemporary information security.
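The fragility of a query-at-the-source model is easy to quantify with back-of-the-envelope arithmetic: if a federated query only succeeds when every source responds, per-source uptimes multiply. The figures below are illustrative assumptions, not actual NHIN or Forrester numbers, and they assume outages are independent.

```python
# Composite availability of a federated query that must reach every source.
def composite_availability(per_source_uptime: float, num_sources: int) -> float:
    """Probability that all num_sources systems are up at query time,
    assuming independent outages."""
    return per_source_uptime ** num_sources

# "Three nines" looks strong for a single system...
single = composite_availability(0.999, 1)       # 0.999
# ...but a patient record assembled from 100 disparate sources is another
# story: 0.999 ** 100 is roughly 0.905, i.e. about one failed query in ten.
federated = composite_availability(0.999, 100)
print(f"single source: {single:.4f}, 100 sources: {federated:.4f}")
```

The point of the exercise is that availability engineering has to happen at every participating organization, not just at the exchange infrastructure in the middle.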
Responses to yesterday's simulation of a large-scale cyber attack, premised on a widespread malware infection that had shut down cell phone and computer networks and disabled much of the power grid, showed a lack of preparedness to handle such a major incident, as well as potential gaps in policies, legal authority, and technical skills. In the words of former CIA Director Michael Hayden, who helped devise the simulation, "It was clear we don't have an adequate policy, expectation of privacy, public-private partnerships or understanding of international norms to deal with a massive cyber attack." More details about the "Cyber Shock Wave" scenario and the participants in this exercise can be found at the website of the Bipartisan Policy Center, which developed and sponsored the simulation. This particular exercise was notable both for the individuals who participated and for the publicity of the process, both of which increase the likelihood that current administration and military officials with actual responsibility for handling such an attack will take notice of the results.
As the Department of Defense continues its efforts to improve security provisions and practices for handling its information, especially with respect to sensitive but unclassified data, it is expanding its focus beyond its own networks and Internet-connected environments to address security policies and standards for the vendors and other third parties that store or transmit military information. The specific policies and expectations for members of the "Defense Industrial Base," as such third parties are collectively called, were publicized in Instruction 5205.13, issued on January 29. The instruction spells out specific activities and areas of policy or procedural guidance that the DoD intends to implement, and assigns oversight responsibilities for these activities to specific roles within the DoD management hierarchy. The simple intention appears to be to ensure that potential threats are not able to use the systems or infrastructure of the DoD's information supply chain to gain access to military information. The release of the document should put DoD vendors on notice that they may need to create, revise, or expand existing policies and capabilities to meet DoD's expectations, and it also suggests that additional guidance will be provided in terms of recommended policies, controls, or best practices that vendors and partners can put in place.
With the House having passed its Cybersecurity Enhancement Act nearly unanimously earlier this month, some attention has turned to the plethora of similar draft legislation in the Senate, with speculation over which of the bills is most likely to move forward. The apparent lack of a leader among the bills or their sponsors prompted a comment from former administration cybersecurity adviser Melissa Hathaway that the Senate needs to consolidate some or all of the current bills into one that the Senate can get behind and act on. Others seem to think that it's not important which bill moves forward, as long as one of them does, because that will give the sponsors or champions of other bills the opportunity to augment, reshape, or otherwise optimize any proposed bill through the amendment process. These sound like two different ways to look at the situation that arrive at the same net conclusion: as long as there are multiple competing agendas (even if they are focused on the same sorts of outcomes), there won't be much progress.
A recurring challenge facing efforts to implement interoperable health information exchange solutions is agreeing on a common set of security standards that can be applied to both private and public sector participants in such exchanges. There are multiple alternatives from which to choose, notably including the HIPAA security rule, ISO/IEC 27002, and the NIST SP 800-53 security controls used in association with FISMA, but none of these apply, either by regulation or by choice, to all the different types of organizational entities sought as participants in health information exchange. The federal government, through the Office of the National Coordinator (ONC) for Health IT within the Department of Health and Human Services, has taken on the role of setting policy, providing program funding and financial incentives for health IT adoption, and establishing the criteria organizations must meet to qualify for these incentives. ONC has also formed advisory committees with representation from government, commercial, and non-profit organizations to determine the most appropriate overarching policies and standards to be used for health information exchange. For several years the government has been leading major initiatives intended to help realize the vision of a nationwide information exchange infrastructure, and with the passage of the HITECH Act in February 2009, the government also took on a role as the arbiter of technical standards, including those for security. In a recent webcast sponsored by 1105 Government Information Group, speakers from the government, contractor, and IT analyst communities gave presentations on security as both a key prerequisite and an important enabler of health information exchange, and highlighted work being done today by the Veterans Administration that may serve as a model for recommended security standards for electronic health records.
Even the limited experiences with health information exchanges between government agencies and private sector organizations demonstrate the enormous complexity involved with complying with all applicable security and privacy regulations. Nevertheless, getting security right is absolutely necessary in order to achieve widespread use of health IT and participation in health information exchanges.
For the VA's part, director of Health Care Security Gail Belles emphasized the need for a common set of security standards that can be applied to both public and private sector entities, but also highlighted the lack of consistent standards even among federal agencies for handling data exchanges with non-federal entities. During the webcast Belles summarized a Veterans Health Administration pilot patient record sharing project with Kaiser Permanente in San Diego, using the specifications and standards of the Nationwide Health Information Network (NHIN). For the pilot project, both VHA and Kaiser Permanente signed a legal agreement laying out terms, prerequisites, and obligations for data exchange between the two organizations, and then proceeded in accordance with the regulatory security requirements that apply to each organization: in Kaiser Permanente's case, that includes HIPAA and HITECH as well as California laws such as SB 1386 governing privacy of personal information, while in addition to HIPAA and HITECH the VA is subject to FISMA, the Privacy Act, and provisions under Title 38 of the U.S. Code covering privacy and confidentiality of veterans' medical records and claims data. Even without beginning to dive into the specifics, the picture would be greatly simplified if a single comprehensive set of security and privacy standards were available. Because it has such a large presence in delivering and administering health care, the government alone is in a position to declare standards that will be adopted by federal and non-federal participants alike. The government is already working on a standard definition for the structure of an electronic health record, so it does not seem unreasonable that the government would also take a shot at formalizing the standards required to secure those records when they are exchanged.
In a story first reported by the Roanoke Times and picked up by the Washington Post in today's edition, Virginia Tech's student newspaper is at odds with the University Commission on Student Affairs over its practice of allowing anonymous comments to be posted on its website. The commission, an advisory body comprising students, faculty, and staff, has recommended that unless the paper changes its policy regarding online comments, the school's administration should withdraw the roughly $70,000 in funding the Collegiate Times receives through its parent organization, the Educational Media Company at Virginia Tech (EMCVT). As an organization independent from Virginia Tech, EMCVT and the newspaper do not rely on school funding alone, but the commission has also suggested it might ban student organizations on campus from buying advertising in the paper, and that loss of revenue would threaten the paper's survival. The disagreement has raised a variety of policy and legal issues, notably including constitutionality claims under the First Amendment, which on balance seem to suggest that the paper is on defensible ground, but that the school can likely get its way.
While some have raised issues about the inconsistency of the online posting policy itself (the paper's editors do not accept anonymous letters to the editor, for instance, but do allow anonymous comments on the website), given the educational setting of the case, the core issues boil down to the ability of the school administration to control speech associated with the paper and the legal validity of Virginia Tech's Principles of Community, which some anonymous comments posted in the past allegedly violate. Legal precedents established over the last 20 years or so generally side with educational administrators on the ability to censor some kinds of speech in any school-sponsored endeavor, not just publications, but they have also found university speech codes and even anti-harassment policies to be unconstitutional.
Prior to 1988, the most relevant legal standard for First Amendment issues in educational settings was Tinker v. Des Moines Independent School District (393 U.S. 503 (1969)), in which the Supreme Court ruled that student expression was speech protected under the First Amendment, a ruling generally applied to mean that school administrators could not prevent student speech on the basis of its content. In 1988, however, the Court more or less made a complete reversal in Hazelwood School Dist. v. Kuhlmeier (484 U.S. 260 (1988)), ruling that school administrators could in fact censor a school-sponsored newspaper. The role of the school as sponsor or publisher is important in Hazelwood, because the Court drew a distinction between "activities that students, parents, and members of the public might reasonably perceive to bear the imprimatur of the school" and those that are independent from it. Despite the formal ownership and funding structure of the Collegiate Times, it seems hard for the paper to argue that it is so independent from Virginia Tech that content produced by the publication would not be associated with the school. On this issue, Virginia Tech has the law on its side.
However, before declaring victory in this matter, the commission and the school might want to firm up the basis of their objections to the paper's policy. By objecting to the use of anonymous posting to make comments that run counter to the Principles of Community, the commission puts the principles themselves at the heart of the dispute, and campus speech codes and other policies similar to Virginia Tech's Principles of Community have repeatedly been found unconstitutional when challenged in court. As objectionable as the idea sounds, the administration might be on firmer ground if it followed through on its threat to prohibit student organizations from advertising in the paper.
Interesting post from GovInfoSecurity.com's Eric Chabrow a couple of days ago, in which he borrows some conclusions from a Frontline documentary on the airline industry called "Flying Cheap" and applies them to the current debate about the best way to get critical infrastructure providers — especially those in the private sector — to implement and follow better security practices. Broadly speaking, there are two methods the government could use to effect changes in cybersecurity approaches: regulate or incentivize. A possible third option is closer collaboration between public and private sector organizations, but partnerships of that sort tend to fall into the "incentive" category, even if the incentives offered aren't monetary.
The path of cybersecurity regulation has precedents in both the government (FISMA) and the private sector (HIPAA, GLBA, Sarbanes-Oxley) but regulations in force are applied narrowly by industry and do not at present address most critical infrastructure providers, whether in telecommunications networks, SCADA, or public works. Even with well-defined applicability, legislating security requirements often gets bogged down in the details, resulting in rules that say what you should do, but not how to do it effectively. This doesn't mean that the government isn't working on new and revised security regulations — there are in fact multiple concurrent and sometimes overlapping legislative efforts pending in Congress — but if history is any guide, these will not be sufficiently explicit or detailed to raise the bar across the board. The alternative approach of providing incentives to companies to improve their security has more proponents in industry than in government, although the Cyberspace Policy Review commissioned by the Obama administration and released in May 2009 tends to favor incentives over mandates. No advocate of an incentive-based approach has been more visible or vocal than the Internet Security Alliance's CEO Larry Clinton, who has been pushing this point since at least the 2008 presidential election.
The lesson learned from the airline industry and its legally mandated safety regulations is that complying with regulations, even when it's in the best interest of customers, costs money and has an impact on the corporate bottom line. For organizations whose priorities are arranged more around business drivers than around achieving the outcomes sought by regulation, some consideration ought to be given to positive compliance incentives (and not just potential penalties for non-compliance). The administration has its own example to follow in the Recovery Act and follow-on funding devoted to providing financial incentives for adoption of health information technology; the motivation switches from incentive to penalty after 2015, but the emphasis in driving pervasive adoption of the new technology is on positive incentives.
With the March 1 deadline rapidly approaching, when Massachusetts' new personal data protection law (201 CMR 17) finally goes into effect, one of many requirements facing organizations covered by the law is the need to encrypt all records or files containing personal information while the data is in transit across public networks or via wireless transmission, as well as information stored on laptops or other portable devices. The requirements notably stop short of requiring encryption of all personal data at rest (for data on Internet-connected systems, the law requires up-to-date patches and firewall protection), although the definition of "breach of security" in the regulation applies only to disclosure of unencrypted data, or encrypted data along with the means to decrypt it.
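For the in-transit half of the requirement, standard TLS is the usual answer. As a minimal sketch, here is how a client-side connection can be configured with Python's standard library to refuse unencrypted or unverified connections; 201 CMR 17 does not prescribe any particular protocol or library, so treat the specifics (the minimum TLS version, and the commented-out placeholder hostname) as illustrative choices rather than compliance guidance.

```python
# Sketch: a client-side TLS context that refuses unencrypted or unverified
# connections, illustrating encryption of personal data in transit.
import ssl

context = ssl.create_default_context()  # enables certificate verification
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions

# A default context already demands a verified certificate chain and hostname:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# To use it, wrap an ordinary socket before sending any personal data, e.g.:
#   secure_sock = context.wrap_socket(sock, server_hostname="example.com")
print("TLS context configured:", context.minimum_version)
```

Encryption of data at rest on laptops and portable devices, the other explicit requirement, is typically handled at the disk or file level rather than in application code.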
Much like the exception to data breach notification rules for personal health information that took effect last September, organizations that choose to use comprehensive encryption for personally identifiable information, whether stored or in transit, give themselves one less thing to worry about. It remains to be seen whether this provides sufficient incentive for encryption of data at rest to become more pervasive. To the extent that organizations publicize their experiences complying with the Massachusetts regulations, achieving compliance with 201 CMR 17 may provide a useful data point for organizations in the health arena. Healthcare entities face stronger data privacy and security requirements from the HITECH Act's effect on existing HIPAA rules, and also have to plan ahead for the security requirements contained among the "meaningful use" criteria that will be used to determine eligibility for federal health IT incentives for organizations adopting electronic health records and associated systems. These criteria include the ability to encrypt and decrypt data both in storage and in transit, and so may provide yet another reason for these entities to start encrypting their data. Under the law (HITECH and HIPAA), organizations are not specifically required to encrypt personal data, so any risk-based decision to do so will likely center on the potential impact to the organization of a data breach. Given Health Net's experience and pending legal action, the decision by healthcare entities to continue to leave personal health data unencrypted makes less and less sense all the time.
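For a sense of how little code the basic symmetric round trip behind at-rest encryption requires, here is a deliberately simplified sketch. This is a toy, not anything 201 CMR 17 or HIPAA prescribes: a real deployment would use a vetted library implementing AES (or similar) with proper key management. The field names in the sample record are invented for illustration.

```python
import hashlib
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream.
    Illustrative only; production systems should use a vetted AES library."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        # derive successive keystream blocks from the key and a counter
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

key = os.urandom(32)                          # held separately from the data
record = b"name=Jane Doe;acct=12345"          # hypothetical personal record
ciphertext = keystream_xor(record, key)       # what a stolen laptop would hold
recovered = keystream_xor(ciphertext, key)    # the same operation decrypts
```

The point the regulation's breach definition turns on is visible here: without the key, the ciphertext alone discloses nothing, which is why losing encrypted data (absent the means to decrypt it) does not count as a breach.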
In a post earlier this week, we noted that generally speaking, anyone in the U.S. wanting to take legal action over the privacy practices of social networking sites like Facebook would have to do so within the boundaries of the Federal Trade Commission Act's rules on unfair and deceptive trade practices. Two cases, both seeking class-action status, have now been filed in federal district court in California, alleging that Facebook's recent changes are deceptive. It's not at all clear that these charges have merit, especially in light of the explicit language Facebook has long posted within its terms of service that basically reserves the right to make any changes it wants as long as it notifies users. Facebook users who don't agree with that policy, or with the scope of changes the company seems to like to make under cover of the policy, certainly have the option not to use Facebook. The point is that the legal avenues to go after Facebook are somewhat limited, so the approach apparently being followed in these lawsuits is logical from that standpoint. More troubling for the plaintiffs may be the charge that Facebook's privacy settings are too detailed and spread out, making them confusing.
These formal legal actions come on the heels of prior complaints filed with the FTC by EPIC and other consumer and privacy advocates about Facebook and its new privacy settings. Based on initial responses to EPIC by FTC Bureau of Consumer Protection chief David Vladeck, the complaints are at least getting attention from the feds, even if no formal investigation has been initiated.
Based on details about IT security spending in the publicly available fiscal 2011 budget estimates for the Defense Information Systems Agency (DISA), one area of emphasis for improving cybersecurity for military networks is reducing their connectivity to the Internet. More specifically, the justification for the $14.625 million DISA Information Systems Security Program (ISSP) budget is to "procure the necessary hardware and software to reduce the attack surface of the DoD network to prevent the exploitation by hackers and adversaries" as well as to improve the capabilities and security of information sharing within Defense networks. One notable initiative is the almost $6 million proposed to fund the creation of a new DMZ between the military's unclassified network (the NIPRNet) and the Internet. In theory, the goal of reducing points of connectivity to the Internet should also be facilitated by the government-wide Trusted Internet Connections (TIC) initiative, which seeks to reduce federal Internet points of presence from over 2,750 in 2008 to fewer than 100. Indeed, the stated intent for the NIPRNet DMZ is to eliminate the need for direct connections to the Internet. Other initiatives in the 2011 budget estimate include:
Almost $1.8 million for an expansion of the Host-Based Security System (HBSS), developed in collaboration with security vendor McAfee, that will "provide a consistent way to accomplish configuration and management control across all endpoints" and enhance the system's capabilities to support greater situational awareness and provide better defense against emerging threats.
New hardware and maintenance support to the tune of $2.3 million for strengthening the externally-facing firewall infrastructure protecting the SIPRNet, the military's classified network.
A little under $2.2 million to augment DISA's insider threat capability "to help with the automation of detecting and mitigating DoD’s insider threats" stemming from individuals with authorized access to the network environment.
An additional $2.5 million to expand the Cross Domain Enterprise Service (CDES), which supports information transfers between DoD’s classified and unclassified networks.
In a story reported by the Hartford Courant, a series of requests for health records sent to Connecticut doctors by Ingenix have garnered attention both for the nature of the requests and the manner in which they were received. It seems the health analytics firm — a subsidiary of health insurer UnitedHealthcare — sent medical record requests by fax to doctors, as part of an ongoing program to review data in medical charts associated with Medicare claims. On its face, this is a valid use of personal health information, at least under HIPAA, but a representative for a physicians' organization in the state suggests that doctors do not ordinarily receive such requests by fax or respond in kind to an unknown requester. For its part, Ingenix says when it surveyed doctors, most indicated they preferred to be contacted by fax, so that's the channel the company used.
Despite the use of relatively old-school technology, this situation raises issues similar to those likely to be encountered in the coming era of electronic (and automated) health information exchange, where systems are configured to respond with medical records as long as the requester can be authenticated and the stated purpose for the request is valid. For instance, the interface specifications and legal agreements established for the Nationwide Health Information Network (NHIN) obligate a participating organization that receives an authenticated request for records to respond if the purpose in the request is "treatment." It's pretty easy to imagine that a major health insurer like UnitedHealthcare could someday be a participating entity in the NHIN, and automated responses to requests such as these (while they would certainly be logged and made part of the accounting of disclosures required under HIPAA) might go unnoticed by individual practitioners and therefore be less likely to attract the attention of anyone wanting to validate that the record exchanges were actually appropriate.
Some would argue that Connecticut is experiencing a period of heightened sensitivity to health data disclosures, following the delayed notification of Connecticut residents who were affected by Health Net's breach of personal information and subsequent legal action taken by the state attorney general. The sincere hope in this case is that Ingenix was not misusing the trust in it (or its corporate parent) to solicit health data under false pretenses.
Former acting federal cybersecurity chief Melissa Hathaway used the public forum afforded her by the Internet Security Alliance yesterday to warn that the government is losing the sense of urgency it needs to tackle the many pressing cybersecurity challenges it faces. After receiving an award for her work reviewing national cybersecurity policy for the Obama administration, Hathaway called for more collaboration and more explicit action by both private and public sector organizations on improving security. In addition to a call for "bold steps forward," she said there needs to be more dialogue and transparency about the realities of the threats facing computing infrastructure. Her comments presumably would be well received by the current administration, which through new cybersecurity czar Howard Schmidt and policy statements by Secretary of State Hillary Clinton has emphasized a need for greater cooperation on security across sectors and among countries. Her words were probably welcomed by her hosts as well, as the ISA has called publicly in the past for greater government engagement with the private sector on security, including a recommendation that the government should offer incentives to companies to fix security problems.
Without at all diminishing the critical importance of moving forward aggressively on enhancing cybersecurity defenses and protecting critical infrastructure, it seems that the nature of the dialogue and the frequency with which the urgency is expressed are becoming part of the problem. Every new incident that comes to light is quickly labeled a "wake-up call," most recently the attacks Google suffered in China. A quick Google search this morning for "cybersecurity wake up call" returns 376,000 hits — is this not sufficient to rouse us from our collective slumber? It's also hard to find fault with an approach that seeks to leverage public and private sector expertise, but given the breadth of collaboration routinely called for, it seems likely that encompassing such broad input will impose its own set of barriers to taking action. The cybersecurity review for which Hathaway received the ISA's Dave McCurdy Internet Security Award was noteworthy not just for its ambitious scope and the content of its recommendations, but also for the relative brevity (60 days) of the review, in contrast to government analyses that can drag on for months or years. However, the report from the review was released over eight months ago, and only recently has any progress been made even on basic recommendations like the appointment of the cybersecurity czar and increases in federal cybersecurity programs for education and for research and development. If the most recent wake-up calls are sufficiently jarring to prevent once again hitting the figurative snooze button, the results should be seen in explicit actions, not in more or broader discussions.
The record snowfall in the Washington, D.C. area since last weekend has been notable for the widespread closings it has caused, and came with an unanticipated side effect for the federal government: the unavailability of its official operating status page on the Web. The Office of Personnel Management (OPM) provides an Operating Status page on its agency website, to which many federal employees turn to see if the government will be open (or, in a non-weather-related example, to check if the president closes the government early on Christmas Eve or another holiday). The volume of visitors to the Web site spiked to such a degree on Monday evening (according to a story in the Washington Post, Web traffic during the afternoon and evening hours on February 8 was approximately 4000 percent of the average daily volume) that the site was rendered unavailable; in response, OPM configured its Web server to redirect traffic to a copy of the operating status notice posted on servers at OMB's data.gov site instead. This serves as an example of quick thinking and suggests some pretty good contingency planning, although it's unclear whether the need for an alternate Web hosting site was anticipated in advance.
As a mini-case study in contingency planning (or incident response, since this was an organic denial-of-service), OPM's actions demonstrate one approach among multiple alternatives. The agency chose to stand up a backup site using existing data center capacity made available to it, a sort of warm-site failover. Another approach would have been to mirror the primary site to an alternate and configure front-end routers or load balancers to automatically re-route traffic to the alternate site whenever volume exceeded a given threshold; the threshold would properly be tied to existing Web server capacity, so no estimate of traffic spike levels would be necessary. A third option would be to scale the capacity of the existing Web server environment to accommodate spikes in traffic; this option requires the ability to make good estimates of maximum traffic levels, or else at some point availability would still suffer. Still another option would be to replicate key Web pages to a content distribution network provider, such as Akamai, so that user requests for popular content wouldn't hit the OPM server at all. The content replication approach has been used successfully in the government before — for instance, when the Centers for Disease Control and Prevention (CDC) experienced an unprecedented surge in volume to its Web site due to concerns over the anthrax attacks in the fall of 2001, the agency quickly contracted with Akamai to replicate most of its public Web content (which at the time was all static HTML) while it re-engineered its infrastructure to accommodate higher demand.
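The threshold-based re-routing alternative can be sketched in a few lines. The host names, the one-second rate window, and the capacity figure below are purely illustrative assumptions, not anything OPM actually deployed; in practice this logic would live in a load balancer rather than application code:

```python
import time
from collections import deque

class FailoverRouter:
    """Send requests to an alternate site when the request rate exceeds
    the primary site's capacity (a sketch of the load-balancer approach)."""

    def __init__(self, primary: str, alternate: str, max_rps: int):
        self.primary, self.alternate = primary, alternate
        self.max_rps = max_rps          # capacity of the primary, in requests/sec
        self.stamps = deque()           # timestamps of recent requests

    def route(self, now: float = None) -> str:
        now = time.monotonic() if now is None else now
        self.stamps.append(now)
        # keep only the last second of request timestamps
        while self.stamps and now - self.stamps[0] > 1.0:
            self.stamps.popleft()
        return self.alternate if len(self.stamps) > self.max_rps else self.primary

# hypothetical hosts; simulate 150 requests arriving within the same second
router = FailoverRouter("status.opm.gov", "status-backup.example.gov", max_rps=100)
decisions = [router.route(now=10.0) for _ in range(150)]
```

In this simulation the first 100 requests go to the primary and the overflow spills to the alternate; because the trigger is tied to measured rate rather than a traffic forecast, no advance estimate of the spike's size is needed.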
In many cases, it's simply not cost-effective to build infrastructure to accommodate exceptional loads, but it's foolish for any large organization to assume that traffic will never exceed its capacity, so having a contingency plan is an important element of any business continuity plan. Choosing the appropriate options often depends on whether the rise in traffic volume is a one-time (or very infrequent) event (as in OPM's case), or whether the spike corresponds to an ongoing increased demand (as it did for the CDC).
The "street view" feature of Google Maps is proving to be yet another example of innovative uses of new technology raising legal and ethical questions about personal privacy, and of differing perspectives on just what is and isn't considered personal information in the U.S. and abroad. In January, the 3rd U.S. Circuit Court of Appeals ruled that Google did not violate the privacy of a couple who sued the search giant, arguing that showing a picture of their home along with their street address was an unlawful invasion of privacy. In its ruling, the court said that to constitute an invasion of privacy subject to a private right of action, the behavior would have to be "highly offensive to a reasonable person," and someone approaching a home and taking a picture doesn't rise to that standard. The only part of Google's practice of employing armies of photographers to take the photos displayed in street view that the court found potentially actionable was the fact that the photographer in this particular case drove into the couple's driveway in order to take the pictures; the appellate court ruled that the couple could proceed with a trespassing claim.
In contrast, Google's addition of street view images in Europe might be more problematic, as the Consumer Minister of Germany said publicly last week that she wanted to force Google to get the consent of individual citizens before pictures of their homes could be published online. This follows similar criticism of Google Earth by the German Justice Minister, and could result in new legal requirements for Google to proactively solicit consent, rather than wait for people to object to photos after they are taken. Google has offered to obscure personally identifying features that might appear in the photos, such as license plates and faces, but only if individuals request that it do so. Fundamentally at issue here is when photographic images constitute personal information — there seems little debate that a photograph of a person is subject to privacy protections, but no clear handling for a picture of a person's possessions or domicile. The apparent divergence in U.S. and European perspectives on this issue is reminiscent of the disagreement about treating IP addresses as personally identifiable information. Standards about personal privacy are markedly different in European countries, so it seems at least feasible that Germany or another EU country could apply new or existing personal data protections to residential photographs, regardless of American judicial opinions like the 3rd Circuit panel's that "no person of ordinary sensibilities would be shamed, humiliated, or have suffered mentally" from the simple act of having a picture taken of one's house.
Such an effort to regulate Facebook and its ilk in the United States would be a more difficult challenge, given the emphasis under current laws on making sure companies do what they say they will do (that is, that action matches policy), but without any requirement as to the specific practices they have to adopt. (A notable exception is with respect to data collection from minors under the age of 13.) The governing law for U.S. companies is the Federal Trade Commission Act (15 U.S.C. §45), which empowers the FTC to prevent unfair or deceptive trade practices — acting counter to published privacy policies is typically considered a deceptive trade practice. Despite the fact that Facebook explicitly reserves the right to change its privacy practices and terms of service at any time in its Statement of Rights and Responsibilities, the changes it implemented in December 2009 prompted a complaint to the FTC by a group of privacy and consumer advocates, arguing that the nature of the changes violated consumer protection laws. To date the FTC has taken no action in response to the complaint, although Facebook has been discussed in FTC-sponsored forums such as the Exploring Privacy roundtable series.
Facebook has used the attention surrounding the changes in its privacy practices to spin the story into a positive tale of increased consumer awareness of personal privacy. During the second session in the privacy roundtable series, Facebook's Director of Public Policy Tim Sparapani cited user statistics indicating that 35% of its 350 million users were prompted by the change to actually go to the privacy settings section of their accounts and configure them. By any accounting, that's a lot of users, but a more interesting metric might be how many current users have not taken any action (even to make a conscious decision to accept the new default settings). Perhaps if more users were made aware of how Facebook's privacy practices facilitate third-party harvesting of personal data such as contact information, more of them would be motivated to act.
At a public meeting of the National Telecommunications and Information Administration's Online Safety and Technology Working Group last week, representatives from the FBI argued that in order to facilitate potential criminal investigations, such as those involving child pornography, Internet service providers should be required to record the Web sites their users visit, and retain those records for two years. Greg Motta, the head of the FBI's digital intelligence section, likened the potential retention and use of Internet browsing records for investigatory purposes to the current requirement that long distance phone carriers retain details on calls made via their services. The intention for federal law enforcement to have access to such information is not new — as noted in press reports of the meeting, FBI Director Robert Mueller has wanted such record keeping since at least 2006, and asked the previous Congress for legislation to require it — but the current reiteration of this desire comes at a time (and under an administration) when the FTC and other government regulators are considering imposing constraints on online behavioral tracking by commercial entities. The period of time such records would be retained also stands in stark contrast to the industry trend among major search providers like Google, Yahoo!, and Microsoft to retain data for shorter and shorter periods of time.
The Federal Trade Commission (FTC) has now completed two of its three scheduled roundtable discussions as part of the "Exploring Privacy" series. The focus of these sessions is to raise and discuss issues, not to try to resolve them, but while the logistical details of the third meeting are still to be determined, privacy watchers are already analyzing what has transpired during the roundtable discussions and making predictions about what sort of FTC action may be likely to result from them. As you might expect, the parts of the discussions receiving the greatest attention vary somewhat based on who is providing the analysis. Good examples of these different perspectives include:
The FTC has suggested previously that it hopes to publish some form of report or findings from the privacy roundtable series once it is completed, likely sometime this summer. Until that happens, the steps the FTC will take to address appropriate privacy protection regulations and balance industry concerns will remain subject to lots of speculation.
In a marked contrast from the perspective presented by Ellen Nakashima in the Washington Post on Thursday — which said that in turning to the NSA for help with information security, Google was not primarily concerned with trying to conclusively identify the sources behind the attacks it disclosed in January — a story in the New York Times suggests that identifying the attackers with more certainty is a motivating factor in Google's seeking NSA's assistance. The article also points out that because the NSA has no statutory authority to pursue such an investigation, it would have made more sense for Google to approach the Department of Homeland Security, except that doing so might lead to the government trying to regulate Google's services as critical infrastructure, over which DHS has oversight authority. A different interpretation of Google's actions, consistent with previous comments here, might be that Google went to the NSA based on a perception (one we believe to be accurate) that the intelligence agency has greater expertise in information assurance, particularly in computer forensic analysis, and perhaps, as the Post article posited, because Google's priority is better security going forward rather than a more exhaustive study of the attacks that already occurred. This seems especially logical given how many of the attack vectors apparently exploited in the most publicized China attacks involved non-Google application software. To be sure, Google and other companies have disclosed hacking attempts (both successful and merely attempted) against their internal computing environments seeking source code and other intellectual property; because fewer details about these specific attacks have been reported, it's hard to know how much or how little the victimized companies know about the vulnerabilities they may be exposing to attackers.
In an agreement first reported in a story by the Washington Post and quickly circulated more broadly by dozens of news sources (to say nothing of bloggers and Twitter users), Google will apparently seek the assistance of the National Security Agency (NSA) to improve Google's security posture and make the Internet giant better able to defend against cyber attacks. The key information in the story comes from the ever-popular sources speaking on the condition of anonymity, so the full details are not certain, but it appears Google will open up its environment and network and system operations to the NSA so that the government's leading information assurance experts can evaluate Google's hardware and software for vulnerabilities and monitor Google's environment to identify the kinds of attacks or penetration methods being used against it. There's nothing obvious about the stated purpose of the pending collaboration that would suggest the NSA would want or would be given access to Google users' personal data, but the prospect of any routine information sharing with the government makes some privacy advocates uneasy.
To be sure, the NSA doesn't have the best track record in this regard, what with the extensive warrantless wiretapping the agency engaged in for several years following the September 11, 2001 terrorist attacks, until the program was ruled unconstitutional. Despite the unconstitutionality of the program, the NSA and the telecommunications companies that cooperated with it in the surveillance operation have to date escaped legal liability, making some fearful that the agency can in effect do whatever it wants, with little chance of being held accountable for violating individual privacy protections. However, the question posed in the Post article by Ellen McCarthy of the Intelligence and National Security Alliance, "At what level will the American public be comfortable with Google sharing information with NSA?" seems almost beside the point. There is little indication that Google has any plans to share personally identifiable user data with anyone, whether related to online searches or the use of its many applications and services. Google Mail users already implicitly consent to the automated scanning of the content of their email messages by Google (in order to serve targeted ads), and the sort of network traffic analysis likely to be involved in monitoring for malware or other threats doesn't focus on that type of data. Concerns over routine or persistent government monitoring of private communications might be better directed to the government's Einstein intrusion prevention program (in which the NSA plays a significant role).
Despite the attention this latest report has garnered, this is not the first time Google and the NSA have worked together. Nearly two years ago the intelligence community publicized its use of Google search engine software and hardware appliances as part of the technical solution underlying Intellipedia, a private information sharing environment based on a wiki model that has been operational for nearly four years. At the time, the relationship between Google and agencies in the intelligence community prompted some of the same concerns over just how much of Google's data might end up being exposed to the government. On balance it seems what the government was most concerned with was the technology Google's solutions offered, not the data the company maintained.
Another way to look at Google's decision to seek assistance from the NSA is that the company, having fallen victim to cyber attacks exploiting a variety of vulnerabilities (some not even part of Google's computing environment), has discovered, and disclosed publicly, that its security posture is less robust than it would like, and is now actively seeking ways to improve its information security. At such a time any large company might seek advice from leading security consultants and practitioners, and ask for an evaluation of its current security practices and capabilities as well as recommendations for strengthening security and mitigating risks due to identified threats and vulnerabilities. If you're Google, your operation is very large and technically advanced, you have a market leadership position you'd like to protect, and you would presumably turn to the very best experts you could find. In the information assurance arena, NSA is the best. Even without the national publicity surrounding the latest attacks on Google and their influence on international diplomacy for the United States, it is also understandable why the NSA would be willing to help a company like Google (indeed, the Post article notes that other, unidentified technology companies have sought assistance from NSA), and gaining access to personal email and other data from Google users just doesn't seem a plausible motivation for NSA's participation in this agreement.
Increased coordination of cybersecurity research and development efforts through a National Coordination Office for the Networking and Information Technology Research and Development, which would be tasked with producing a strategic plan for cybersecurity research and development;
Significant increases in the amount (rising to $90 million in fiscal 2014) of annual research funding for computer and network security research grants administered by the National Science Foundation;
The creation of a federal Cyber Scholarship for Service Program, which would pay for two years of educational studies towards a bachelor's or master's degree and three years towards a doctoral degree in "a cybersecurity field," conditioned on scholarship recipients serving in the government for as many years as they received the scholarship;
Directing NIST to advance cybersecurity technical standards through means including representing the U.S. government in international technical development of cybersecurity standards; promoting cybersecurity awareness and education; and creating a dedicated research program related to identity management.
A recent article in Washington Technology cites findings in forensic investigations by the Verizon Business Risk team to highlight the difficulty many organizations have in identifying — much less responding to — security intrusions and data breaches. It seems that while plenty of companies have appropriate tools and security measures in place to collect data that would, if analyzed thoroughly, provide evidence of incidents occurring, too little of that data is actually scrutinized until well after the events begin. Verizon's forensic investigators more often than not find such evidence within the event logs maintained by the companies who call them in to investigate. The failure to achieve or maintain situational awareness in the face of increasingly common attacks can be attributed to multiple factors, including technical and analytic complexity, but all the industry experts quoted in the article point to insufficient focus on enforcement and awareness in security management. The all-too-common situation where technical or functional means of enforcement are lacking, even with appropriate security policies in place, is a recurring theme, and one we addressed a couple of months ago in the context of guarding against internal threats. The rise in interest in and use of security information and event management (SIEM) provides some evidence that enterprises are becoming more aware of what they're up against in terms of cyber threats, but the utility of these controls is tied directly to the level of organizational commitment to put the commensurate security practices in place, and to invest in (human) security analysts and not just in tools.
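As a trivial sketch of the kind of log review the investigators describe, even a few lines of scripting can surface evidence that often sits unexamined. The syslog-style format, IP addresses, and five-failure threshold here are hypothetical; real SIEM tools normalize many log formats and correlate across sources:

```python
import re
from collections import Counter

# Hypothetical sshd-style log format; pattern captures the source IP address.
FAILED_LOGIN = re.compile(r"Failed password for \S+ from (\d{1,3}(?:\.\d{1,3}){3})")

def suspicious_sources(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip for ip, n in counts.items() if n >= threshold}

# Synthetic sample: six failures from one address, one unrelated success.
sample = ["Feb 10 03:11:0%d sshd: Failed password for root from 203.0.113.9" % i
          for i in range(6)]
sample.append("Feb 10 03:12:00 sshd: Accepted password for alice from 198.51.100.7")
```

Running `suspicious_sources(sample)` flags `203.0.113.9`; the point of the article stands regardless of tooling: the evidence is usually already in the logs, and the missing ingredient is someone (or something) actually looking at them.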
With a bit of a different take on the same sort of problem, in another article published by Washington Technology this week, Sentek Consulting founder Erik Basu suggests that the emphasis on attack attribution in some government security programs is an approach more private sector entities should emulate. Federal agencies have several reasons for pursuing this type of forensic investigation, from the simple attempt to gain a better understanding of how vulnerabilities are exploited and how similar incursions might be prevented in the future, to the political, practical, and diplomatic considerations that constrain potential responses, including retaliatory actions. In general, government agencies also seem less reluctant to disclose cybersecurity incidents, both within the government community (as required under OMB guidelines) and in public. The fact that Google actually went public with the details of the attacks against it in China is in some ways more notable than the specifics of the attacks themselves. The government doesn't face the same competitive drivers that commercial enterprises do, but Google's disclosure is leading some companies and lots of security analysts to suggest that the benefits of greater disclosure may outweigh any potential negative impact.
Google has decided to sunset, effective in March, the functionality in its Blogger web logging service that allowed Blogspot content to be published via FTP so that blogs could be served on other hosts. Because of this change in policy we have re-hosted the SecurityArchitecture blog using a Google custom domain, so the URL has changed to http://blog.securityarchitecture.com. The look and feel of the blog has changed a bit as well, losing some consistency with our SecurityArchitecture.com website but gaining several new Blogger gadgets that were not available when we published the blog to our own hosted server. There may be some variability in the way the blog looks over the short term as we tinker with settings and settle on the page elements that we think work best here.
Security analysts reported yesterday a noticeable spike in network traffic associated with the Pushdo botnet, whose computers somewhat curiously are sending large numbers of fake SSL connections to lots of high-profile websites, including those of the CIA, FBI, PayPal, Yahoo, Mozilla, Google, SANS, and Twitter. The traffic is noteworthy both for its volume and for the lack of any obvious reason why it is occurring; one security expert suggested the botnet might be sending this sort of traffic absent a real attack to make its future traffic seem less anomalous, essentially to help hide the location of the botnet's command and control center. While the observed traffic volume was high enough to be noticeable, it stops far short of the level necessary to effect a denial-of-service attack, so observers are left wondering just what the point of the activity is, and what might be coming next. The concept of "attack anticipation" has long been a goal of some types of intrusion detection systems and, more recently, of security information and event management (SIEM) tools. The idea here is that by looking at events observed and correlated over time, a potential attack victim can try to predict whether something really significant is on the way. In this case, it's pretty unusual for a botnet to draw a lot of attention to itself, so while the good news seems to be that those monitoring the network activity are aware of it, there is little beyond speculation, never mind consensus, on what these initial observations mean.
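The "attack anticipation" idea, comparing current activity against a baseline established from earlier observations, can be sketched very simply. The rolling window, z-score threshold, and traffic counts below are illustrative assumptions only, not a description of how any actual SIEM or intrusion detection product works.

```python
from statistics import mean, stdev

def flag_anomalies(counts, window=5, z_threshold=3.0):
    """Flag intervals whose event count deviates sharply from the trailing window.

    `counts` is a list of per-interval event counts (e.g., SSL connection
    attempts per minute); parameter names and defaults are illustrative.
    """
    flags = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Perfectly flat baseline: any change at all is anomalous.
            flags.append(counts[i] != mu)
        else:
            flags.append(abs(counts[i] - mu) / sigma > z_threshold)
    return flags

# Steady traffic followed by a Pushdo-style spike in the final interval.
traffic = [10, 12, 11, 10, 12, 11, 10, 95]
print(flag_anomalies(traffic))  # → [False, False, True]
```

The flip side, of course, is exactly the tactic the security expert describes: a botnet that pads the quiet intervals with junk traffic raises the baseline, so a later real attack produces a smaller deviation and may slip under the threshold.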
In an op-ed piece in today's Washington Post, Harvard Law professor Jack Goldsmith notes Secretary of State Hillary Clinton's recent speech on Internet freedom and suggests that before the United States can credibly ask other countries to do more to limit cyber attacks and hold accountable the individuals and organizations performing those attacks, we need to take steps to acknowledge our own country's role in the global cybersecurity problem. Goldsmith points to the extensive use of botnets and botnet-based attacks originating from the U.S., to American activities in the area of "hacktivism," and to the U.S. government's classified-yet-assumed capabilities to launch offensive cyberattacks if necessary (to say nothing of the NSA's cyber intrusion and intelligence gathering expertise). With a line of reasoning consistent with one expressed in this space in the context of the Google-China hacking incident, Goldsmith notes that the U.S. performs many of the same actions we condemn elsewhere, largely because we consider the motives behind our actions to provide justification. Goldsmith goes one step further to argue that because cyberattack methods can in fact be used for positive purposes, it would be a mistake for the U.S. to suspend or prevent these domestic activities, and invokes the sentiments of the NSA's Lt. Gen. Keith Alexander, nominated to lead the newly formed U.S. Cyber Command, who essentially says the best defense is a good offense. The relative merits of such arguments notwithstanding, Goldsmith is quite correct when he suggests that the U.S. cannot advocate the creation and enforcement of worldwide norms in cyberspace without including American operations and activities as part of the equation.
Information security and privacy professional, researcher, teacher, and advocate. Recently completed a doctorate in management, with dissertation research focusing on the role of trust and distrust in achieving cooperation among organizations.