SEC Says No More Mr. Nice Guy on Investment Adviser Cybersecurity

Over the last couple of years, the SEC’s cybersecurity bark has been worse than its bite.  Its Office of Compliance Inspections and Examinations issued examination priorities in 2014.  A few months later, Commissioner Aguilar warned public company boards that they had better get smart about the topic.  The results of OCIE’s cybersecurity exam sweep were released in March of this year.  And the Investment Management Division said words, not many words, about investment advisers’ responsibilities in this area in July.

Alleged Facts

What it hasn’t done recently is sue somebody for violating Reg. S-P.  But yesterday it did.  According to the SEC’s settled administrative order:

  • St. Louis-based R.T. Jones Capital Equities Management stored sensitive personally identifiable information (PII) of clients and others on its third party-hosted web server from September 2009 to July 2013.
  • Throughout this period, R.T. Jones failed to conduct periodic risk assessments, implement a firewall, encrypt PII stored on its server, or maintain a response plan for cybersecurity incidents.
  • An unknown hacker gained access to the firm’s web server in July 2013, rendering the PII of more than 100,000 individuals, including thousands of R.T. Jones’s clients, vulnerable to theft.

The Safeguards Rule

Whoops.  But while all of that sounds bad, it’s not actually what the firm is being sued over.  At issue is Reg. S-P’s Rule 30(a), the Safeguards Rule, which says, “Every broker, dealer, and investment company, and every investment adviser registered with the Commission must adopt written policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information.”  And unfortunately, R.T. Jones allegedly failed entirely to adopt written policies and procedures reasonably designed to safeguard customer information.  Put another way, if R.T. Jones did have written policies and procedures designed to avoid the failures bulleted above, the cyber attack might have been avoided and we wouldn’t be here.  It’s paying a $75,000 civil penalty to put this matter behind it.

Fortunately, to date, R.T. Jones has not received any indications of a client suffering financial harm as a result of the attack.  And the firm appears to have acted quickly and responsibly once it did discover the breach.

Three Thoughts

I have three quick thoughts.  First, this is a relatively easy case for the SEC to bring.  R.T. Jones didn’t just have inadequate policies and procedures.  According to the SEC’s order, it didn’t have any written policies and procedures reasonably designed to safeguard its clients’ PII.  Second, over 90% of the individuals whose information was compromised were not even R.T. Jones clients, but participants in an investment plan that R.T. Jones had joined.  The information appears to have been useful to R.T. Jones in the aggregate, but perhaps not as to individuals.  If not, the firm might have purged that information from its systems and avoided the liability that came with losing it.  Finally, periodic risk assessments, firewalls, encryption, and a cybersecurity response plan seem like good ideas right now.  But you knew that already.
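For the curious, encrypting stored PII is not exotic engineering.  Here is a minimal sketch using the widely used Python `cryptography` library’s Fernet recipe (an illustration with dummy data, not a description of any firm’s actual systems and certainly not legal advice):

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data
# (e.g., in a key-management service, never alongside the ciphertext).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a client record before it is written to a web server.
# The values here are obviously fake placeholders.
record = b"SSN: 000-00-0000; DOB: 1970-01-01"
token = cipher.encrypt(record)

# The stored token is opaque; only a holder of the key can
# recover the plaintext.
assert cipher.decrypt(token) == record
```

The point is simply that a hacker who copies the server’s files gets ciphertext, not 100,000 Social Security numbers, so long as the key lives somewhere else.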

FCC Stakes Out Privacy Territory in Broadband Privacy Workshop

If you thought all the action in privacy regulation centered on the Federal Trade Commission, the Federal Communications Commission would like you to think again. Yesterday, April 28, the FCC held a three-plus-hour workshop that started the regulatory “conversation” about how the FCC can or should regulate consumer broadband privacy.

Chairman Wheeler kicked off the event with opening remarks that included this unequivocal statement: “Privacy is unassailable.” He also said that “changes in technology do not affect our values.” From these words and the text of the FCC’s “Open Internet” order released earlier this year, not to mention the FCC’s recent $25 million data breach consent decree with AT&T, it is clear the FCC intends to be involved in regulating consumer privacy.

Yesterday’s workshop follows the recent Open Internet order in which the agency determined it would apply certain aspects of its Title II authority to the Internet (namely, certain “common carrier” provisions of the Communications Act). The order has broad impact on issues like consumer access to broadband content that have been widely written about. But what the order means for privacy is that the FCC’s rules on “customer proprietary network information” or CPNI, which have historically applied to traditional telephone companies and interconnected VoIP providers, may apply more broadly in some form or fashion to others in the broadband ecosystem—particularly, broadband Internet access providers.

By way of background, CPNI (defined in Section 222(h)(1) of the Communications Act) is information collected by telecommunications carriers about their customers.  CPNI includes things like the “quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service” to which a customer subscribes, as well as related billing information.  It is a fairly specific definition, and it doesn’t include personal information like name, phone number, and address.

Section 222(a) of the Communications Act requires telecommunications carriers to protect the confidentiality of customer information, and Section 222(c) restricts the ability of telecommunication carriers to use, disclose, or permit access to individually identifiable CPNI without the customer’s approval or as required by law. 

The Open Internet order makes plain that the FCC intends to apply these CPNI confidentiality provisions (or some form of them) to broadband Internet access providers. But the FCC will need to adopt new rules to apply the CPNI provisions of the statute in this way. 

Yesterday’s workshop started that process with a discussion among stakeholders, though no formal rulemaking has been launched. Panelists discussed the privacy implications of broadband Internet access (for example, the kind of data broadband providers have access to) as well as specific concerns with applying Section 222 to broadband Internet access services. One common theme was the potentially overlapping jurisdiction of the FTC and the FCC in the area of privacy. But, significantly, one thing the FCC brings to the table in this area is general rulemaking authority, which the FTC lacks. 

We’ll have to watch and wait to see if a notice of inquiry, notice of proposed rulemaking, or other agency guidance will come.

The FCC typically uploads events like this to its archive, so check here or here in a few days if you would like to view the full event.


FTC Commissioner Comments on Consumer Privacy Bill of Rights

Last week, we posted about the Consumer Privacy Bill of Rights “discussion draft” released by the Obama Administration. On Thursday, March 5, at the annual U.S. meeting of the International Association of Privacy Professionals (which I attended), FTC Commissioner Julie Brill answered questions about her take on the bill and other policy issues. Here are just a few comments from that discussion that merit a follow-up post:

  • Commissioner Brill stated in no uncertain terms that the draft bill is not protective enough of consumers. At various times, she said there are “serious weaknesses in the draft,” “there’s no there there,” it needs “some meat,” and “where are the boundaries?” She mentioned a specific example of a more consumer-protective approach relating to consent to certain data practices. She indicated she would like to see the bill require affirmative express consent of the individual for (a) material retroactive changes to a privacy statement and (b) use of sensitive information out of context of the transaction in which it was collected.
  • Although Section 402 of the draft bill provides that the FTC’s unfair and deceptive authority under Section 5 of the FTC Act remains intact, Commissioner Brill expressed concern that it is not clear enough in the bill that the FTC would retain its full authority to enforce the “common law” of privacy as developed in its prior enforcement actions. 
  • Will anything come of the bill this Congress? Commissioner Brill said she expects it’s unlikely given other legislative priorities at this time.

Commissioner Brill commended the Administration for grappling with tough issues and working to improve privacy overall. However, hers is one more voice calling for additional work on broad federal consumer privacy legislation.

Consumer Privacy Bill of Rights Act - A Mixed Bag

Late last week, President Obama released a “discussion draft” of the Administration’s long-awaited Consumer Privacy Bill of Rights Act.  At first blush, the results are a mixed bag:  some good, some not so good, and much work among stakeholders left to be done.

It didn’t take long for consumer advocates, and even one FTC Commissioner, to say the draft legislation doesn’t go far enough.  The Internet has been rife with posts this week about the bill’s problems and shortcomings.  In short, for most, the bill went over like a lead balloon.

Still, the Administration released the bill as a “discussion draft”—signaling that the draft legislation is just a step and an invitation for further conversation.  For a measured perspective considering the bill through this lens, read former Obama Administration official Nicole Wong’s thoughtful article.

While it’s certainly far from perfect, my take is that the bill isn’t all bad.  Here are just a few initial pros and cons to the bill that I’ve identified (in no particular order):

  • Pro:  many principles are based on fair information practices familiar from existing federal statutes; flexibility and consideration of measures that are reasonable in context; availability of safe harbor protections; exceptions for de-identified data; and delayed enforcement to allow parties time to adjust to the law’s requirements.
  • Con:  loosely defined requirements; definitional uncertainty; and preemption and enforcement concerns.

One item of note is that the security provisions in Section 105(a) codify, at a very high level of generality, some of the principles that we’ve been advising our clients about:  for example, taking steps to identify internal and external risks to the privacy and security of personal data, and implementing and regularly assessing safeguards to control those risks.  (Of course, it’s a separate thing altogether to have recommendations take on the force of law.)

In the end, it may have been inevitable that this bill would be a disappointment to some.  After all, the public has been waiting on it since 2012.  During that time, there have been many, many high-profile breaches of consumer information.  The appetite for more privacy and security protections has only grown over time.  But it will take a delicate balance to provide desired protections while at the same time making legal requirements workable for both consumers and the businesses offering products and services consumers want.

To be sure, there will be more to come from the Consumer Privacy Bill of Rights—stay tuned.

Two-Factor Authentication May Be Coming to a Bank Near You

Ed. Note: This entry is cross posted from Cady Bar the Door, David Smyth's blog offering Insight & Commentary on SEC Enforcement Actions and White Collar Crime.

When I was at the SEC and online broker-dealers’ customers were the victims of hacking incidents, I used to wonder, why don’t the broker-dealers require multi-factor authentication to gain access to accounts? It was a silly question. I knew the answer. Multi-factor authentication is a pain and nobody likes it.

Do you know what it is? Here’s what Wikipedia says, so it must be true:

Multi-factor authentication (MFA) is a method of computer access control which a user can pass by successfully presenting authentication factors from at least two of the three categories:

  • knowledge factors (“things only the user knows”), such as passwords
  • possession factors (“things only the user has”), such as ATM cards
  • inherence factors (“things only the user is”), such as biometrics.

The idea is, hackers might figure out your password, but they won’t be able to figure out a number that changes every 30 seconds on a card you carry or on your cell phone. They won’t be able to replicate your fingerprint. That’s the idea, anyway. Brokers and banks have been loath to require multi-factor authentication because it’s inconvenient and customers often hate it.
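For the curious, that number that changes every 30 seconds is typically a time-based one-time password, or TOTP, standardized in RFC 6238. Here is a minimal sketch in Python, using only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # Both the server and the user's device derive a counter from the clock,
    # so the code changes every `step` seconds.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890", base32-encoded.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59, digits=8))  # RFC 6238 Appendix B gives 94287082
```

The bank and the customer’s token share the secret; both compute the same digits for the current 30-second window, so a stolen password alone isn’t enough to get into the account.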

But here comes Ben Lawsky, the Superintendent of New York’s Department of Financial Services, who just unveiled a number of proposals to increase cybersecurity at banks under his jurisdiction. One of these is to require that banks use multi-factor authentication. This move could take a lot of the economic pressure off banks that would otherwise like to implement this control for their customers but have been unwilling to do so for fear of losing those customers to rivals. If everybody has to do it, no bank risks losing customers by imposing it unilaterally.

That’s not all Lawsky has in mind. His proposal also includes:

  • requiring senior bank executives to personally attest to the adequacy of their systems guarding against money laundering;
  • ensuring that banks receive warranties from third-party vendors that those providers have cybersecurity protections in place;
  • random audits of regulated firms’ transaction monitoring systems, meant to catch money laundering; and
  • incorporating targeted assessments of those institutions’ cybersecurity preparedness into its regular bank examinations.

Lawsky’s proposals could be a big deal. Stay tuned.

Big Announcement, Small UAS: FAA Launches Commercial Drone Proceeding

            Unless you have been completely disconnected from all media, you are probably already aware that on Sunday, February 15, 2015, the FAA announced the release of its long-awaited rules to govern commercial sUAS (small unmanned aircraft systems) operations in the United States. The FAA’s proposed sUAS rules arrived like a barely-late valentine or box of candy, with the recipients hoping to read loving prose and enjoy fresh, rich chocolates. At this point, of course, the rules are merely a proposed regulatory regime (as embodied in a document that is called a “Notice of Proposed Rulemaking” or “NPRM”), and it will surely take many months—probably a couple of years—for the rules to be finalized and adopted, and to go into effect. (Only then will we know for sure whether the valentine message was really a “dear John” letter or whether the candy was stale and half-eaten.) It is important to understand that, for now, the FAA’s current prohibition on commercial UAS operations remains in effect, except for operators that have obtained a Section 333 Exemption from the FAA. (To date, nearly 30 entities have received exemption grants from the FAA.)

            The proposed regulatory regime was described by the FAA on a February 15 press conference call as a “very flexible framework” that will “accommodate future innovation in the industry.” While industry stakeholders may ultimately disagree over just how flexible the proposed rules are or should be, stakeholders do generally agree that the FAA’s release of the NPRM is a big step (albeit somewhat overdue) in the right direction. You can access the FAA’s NPRM here. (The FAA also published a “fact sheet” as well as a short summary of the highlights of the proposed rules.)

Proposed sUAS Rules

            Among the various limitations that the FAA has proposed for commercial sUAS operations are the following (caveat: this is neither an exhaustive nor detailed list of all the operational limitations and requirements proposed in the NPRM):

  • Vehicles subject to the sUAS rules will be defined as aircraft that weigh less than 55 pounds (25 kg)
  • Only visual line-of-sight (“VLOS”) operations will be allowed; i.e., the small unmanned aircraft must remain within VLOS of the operator or visual observer (“VO”), if a VO is used (the proposed rules allow—but do not require—the use of a VO)
  • No person may act as an operator or VO for more than one unmanned aircraft operation at one time
  • Pilots of sUAS will be considered “operators.” Operators will be required to:
    • Pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center;
    • Be vetted by TSA (Transportation Security Administration);
    • Obtain an unmanned aircraft operator certificate with an sUAS rating (like existing pilot airman certificates, it will never expire);
    • Pass a recurrent aeronautical knowledge test every 24 months;
    • Be at least 17 years old;
    • Make available to the FAA, upon request, the sUAS for inspection or testing, and any associated documents/records required to be kept under FAA rules;
    • Report an accident to the FAA within 10 days of any sUAS operation that results in injury or property damage;
    • Conduct preflight inspections to ensure the sUAS is safe for operation.
  • At all times the small unmanned aircraft must remain close enough to the operator for the operator to be capable of seeing the aircraft with vision unaided by any device other than corrective lenses (the use of binoculars would not satisfy this restriction)
  • sUAS operations may not occur over any persons not directly involved in the operation
  • sUAS operations must occur during daylight hours only (official sunrise to official sunset, local time)
  • Small unmanned aircraft will be required to yield right-of-way to other aircraft, manned or unmanned
  • Small unmanned aircraft will be allowed to operate with a maximum airspeed of 100 mph (87 knots)
  • Small unmanned aircraft will be allowed to operate at a maximum altitude of 500 feet above ground level
  • sUAS operations will be permitted to occur only when conditions allow minimum weather visibility of 3 miles from the control station
  • Limitations in airspace classes:
    • No sUAS operations will be allowed in Class A (18,000 feet & above) airspace
    • sUAS operations will be allowed in Class B, C, D and E airspace only with ATC (Air Traffic Control) permission
    • sUAS operations in Class G airspace will be allowed without ATC permission

Many of these requirements dovetail with (or are at least similar to) the limitations, requirements, and restrictions that have been imposed by the FAA in its various Section 333 Exemption decisions. In fact, some of the rules proposed in the NPRM would be less restrictive and more flexible than those imposed on operators in certain Section 333 Exemption decisions.

“Micro” UAS and Model UAS

            The NPRM proposes a “micro” UAS classification that contemplates operations of small unmanned aircraft weighing up to 4.4 pounds (2 kg; hence “micro”), only in Class G airspace, only during daylight hours, and at altitudes no higher than 400 feet AGL. Micro UAS operations would be permissible over people not involved in the operation of the unmanned aircraft, provided the operator certifies that he or she has the requisite aeronautical knowledge to perform the operations. Other restrictions, as set forth in the NPRM, would apply to the proposed micro UAS classification.

With respect to hobbyists who fly model unmanned aircraft for recreational purposes, the NPRM does not propose to change the rules of the road for such hobbyists, so long as their operation of model UAS satisfies all of the criteria applicable to model unmanned aircraft.

Presidential Memorandum: UAS Privacy Framework 

           In addition to—and presumably in concert with—the FAA’s release of the NPRM, President Obama on the morning of Sunday, February 15, issued a Presidential Memorandum aptly titled “Promoting Economic Competitiveness While Safeguarding Privacy, Civil Rights, and Civil Liberties in Domestic Use of Unmanned Aircraft Systems” (“UAS Privacy Memorandum”) in order to create a framework to begin to address some of the privacy concerns that have been voiced by the American public (and even by a U.S. Supreme Court Justice). (Reports had surfaced months ago suggesting that the UAS Privacy Memorandum was in the works.)

            Among other things, the UAS Privacy Memorandum requires federal agencies, “prior to deployment of new UAS technology and at least every 3 years, [to] examine their existing UAS policies and procedures relating to the collection, use, retention, and dissemination of information obtained by UAS, to ensure that privacy, civil rights, and civil liberties are protected. Agencies shall update their policies and procedures, or issue new policies and procedures, as necessary.” In addition, federal agencies must “establish policies and procedures, or confirm that policies and procedures are in place, that provide meaningful oversight of individuals who have access to sensitive information (including any PII [personally identifiable information]) collected using UAS . . . [and] require that State, local, tribal, and territorial government recipients of Federal grant funding for the purchase or use of UAS for their own operations have in place policies and procedures to safeguard individuals’ privacy, civil rights, and civil liberties prior to expending such funds.” These requirements represent a logical and reasonable starting point for ensuring privacy protection at the federal level if and when agencies engage in UAS operations.

            Also of significance, the UAS Privacy Memorandum tasks NTIA (the National Telecommunications and Information Administration) with initiating, by mid-May 2015, a “multi-stakeholder engagement process to develop a framework regarding privacy, accountability, and transparency for commercial and private UAS use.” As such, the UAS Privacy Memorandum provides a formal structure in which the various privacy questions raised by commercial and private UAS operations can and will be debated and addressed. As with the sUAS rules themselves, only time will tell whether the final results of the UAS Privacy Memorandum are weak or strong, satisfactory or unsatisfactory to UAS stakeholders, including the American public generally.

 Issuance of the NPRM Doesn’t Mean You Can Ignore State Law

            State and local jurisdictions continue to contemplate legislation governing the use of UAS by individuals, commercial entities, and law enforcement, and much remains to be written about such ongoing efforts. As I’ve written previously, the State of North Carolina enacted several provisions governing UAS in 2014. Until the FAA’s rulemaking results in final rules, many of the North Carolina provisions are of little practical significance. Nevertheless, drone enthusiasts in North Carolina must remain mindful of these state-specific laws, as should any UAS operator in any other state with applicable laws in place.

More to Come

            For many industry stakeholders and drone enthusiasts, the release of the NPRM surely represents a “Harry Potter” moment: when a new Harry Potter book hit the bookshelves, people would line up for hours to get a copy and would stay up all night and skip school or work in order to read it. I’m certain that a similar phenomenon has been underway since Sunday, February 15, when the NPRM first became available on the FAA’s website. (I would hazard a guess that the FAA’s website had more traffic on February 15 than on any other day in its history.) While I don’t recommend that anyone play hooky from work or school in order to read the 195-page NPRM, I do encourage you to celebrate Presidents’ Day (February 16) by reviewing the NPRM—while the FAA Staff enjoys its well-deserved federal holiday—and I do wish you all happy reading and sweet, post-valentine dreams. 

            Onward and upward (but, until the FAA issues final rules, not without an exemption or not more than 400 feet, please, with a model drone used solely for recreational purposes)! 


Sony Employees Sue, Calling the Breach an "Epic Nightmare"

by Bryan Starrett, Employment Law Attorney,

You have probably heard about the recent data breach at Sony; after all, it’s not often that Kim Jong Un and Angelina Jolie are mentioned as part of the same story. The Sony hack appears to be somewhat different in character from other recent high-profile hacks: the hackers appear to care most about using the information stolen from Sony to bring shame and scorn on the company, rather than about their own pecuniary gain.

And the story appears to continue down the proverbial rabbit hole, with reports of a tongue-in-cheek offer of investigative cooperation from the North Koreans, and the recent revelation that all of North Korea’s Internet is down, perhaps in retaliation for the recent attacks.

Amidst the intense Hollywood and international intrigue, an important group of victims isn’t receiving much attention: Sony employees. Indeed, the hack has allegedly resulted in the theft of social security numbers, birth dates, health information, and other sensitive data from thousands of Sony employees. In response, two Sony employees swiftly filed a federal class-action lawsuit against their employer, summing up their claims in the opening paragraph of their complaint:

An epic nightmare, much better suited to a cinematic thriller than to real life, is unfolding in slow motion for Sony’s current and former employees: Their most sensitive data, including over 47,000 Social Security numbers, employee files including salaries, medical information, and anything else that their employer Sony touched, has been leaked to the public, and may even be in the hands of criminals.

The employees have brought claims for negligence, as well as for various statutory data breach claims under both California and Virginia law.

Unlike the more “typical” breach case, where customers are the victims of stolen credit card numbers or other personal information, this action is unique in at least two critical respects: the nature of the data breached, and the employer/employee relationship between Sony and the plaintiffs. Employers often owe heightened duties of care to their employees, thanks to the particular nature of the employer/employee relationship. However, the duties employers owe their employees regarding the protection of employee data are largely uncharted territory, and this action may shed significant light on the standard to which employers will be held in protecting employee data.

HHS Settlement Shows: "You'd Better Implement Those IT 'Patches' and 'Updates' or Be Ready to Pay the Price."

by Forrest Campbell, Health Law Attorney, 

In December 2014, the U.S. Department of Health and Human Services ("HHS") and Anchorage Community Mental Health Services ("ACMHS") settled alleged HIPAA violations for $150,000.

Don't be misled--this settlement is not important just for parties subject to HIPAA. It's important to anyone who maintains confidential information in electronic form.

Here's what happened according to HHS. ACMHS failed to regularly update its IT resources with available patches, and ACMHS used outdated, unsupported software. As a direct result of these two factors, malware was able to compromise the security of ACMHS's IT system, resulting in a data breach of the protected health information of 2,743 individuals. As HIPAA requires, ACMHS notified HHS of the breach, and an HHS investigation followed. The investigation led to the settlement. The period from the start of the investigation to the signing of the settlement was 2 ½ years--which probably represents a lot of hours and money for ACMHS.

These events show how important security patches and software updates are for all parties with confidential electronic information. If you fail to diligently implement patches and updates--no matter what business line you're in--malware might infiltrate your IT system and cause a data breach. Data breaches often require notice to the individuals affected and to state and federal authorities, and often lead to investigations, lawsuits, and/or settlements.

Apparently, ACMHS could have avoided the entire matter if it had implemented proper patches and updates.

Although the lessons from these events are important across all industries, parties subject to HIPAA should recall that the HIPAA security rule essentially mandates that critical security patches and updates be implemented. For example, the security rule broadly requires that HIPAA covered entities and business associates implement security measures sufficient to reduce risks and vulnerabilities to electronic protected health information, including procedures for guarding against, detecting, and reporting malicious software.

SIFMA Issues Cybersecurity Regulatory Principles

by David Smyth, Securities Enforcement Attorney, at and blogger at Cady Bar the Door

Does everyone feel compelled to comment on cybersecurity issues? It seems that way. And on October 20th the Securities Industry and Financial Markets Association jumped deeper into the fray when it issued its Principles for Effective Cybersecurity Regulatory Guidance. SIFMA goes into substantial depth for each one in the document itself, but without further ado, here they are, followed by my comments on or summaries of each:

1.     The U.S. government has a significant role and responsibility in protecting the business community.

My former boss John Stark likes to say, “A data breach is the only crime where you’re the victim and you’re treated like a criminal.” Probably true! In that spirit, SIFMA would like the government’s enforcement efforts to be focused on computer criminals and not securities firms that are doing their best to protect their clients’ information.

2.     Recognize the value of public–private collaboration in the development of agency guidance.

The Principles cite the National Institute of Standards and Technology’s Cybersecurity Framework (discussed here) as a useful model of public-private cooperation that should guide the development of agency guidance. Along those lines, SIFMA suggests that an agency working group be established that can facilitate coordination across government agencies and self-regulatory organizations, and receive industry feedback on suggested approaches to cybersecurity.

3.     Compliance with cybersecurity agency guidance must be flexible, scalable and practical.

Again with the NIST Cybersecurity Framework, which by its terms is “envisioned as a ‘living’ document, improved based on feedback from users’ experiences, while new standards, guidelines, and technology” are built into future versions. SIFMA thinks the same should be true for the standards and practices recommended by agencies.

4.     Financial services cybersecurity guidance should be harmonized across agencies.

Here’s what SIFMA says: “Financial regulators should coordinate to avoid a counter-productive proliferation of overlapping standards and overlapping regulators. A diffusion of regulatory principles undermines focus and diverts valuable resources for companies and agencies alike.” They’re right to say this, but oh, dear, this is hard. It’s not easy to get people on board within an agency, or even an agency division. Cross-agency coordination is well-nigh impossible.

5.     Agency guidance must consider the resources of the firm.

SIFMA rightly notes that “[s]ophisticated prevention measures are sometimes financially prohibitive for smaller firms and burdensome standards could drive these important players out of the market.” Leaving financial services solely in the hands of giant players who can out-comply smaller ones would be horrendous.

6.     Effective cybersecurity guidance is risk-based and threat-informed.

This one is closely related to Nos. 3 and 5. Basically, SIFMA hopes there won’t be regulation for regulation’s sake. “Agencies should premise their guidance on a cost-benefit analysis that takes into account the benefits to firms and consumers versus the compliance costs and potential burdens suffered by consumers.”

7.     Financial regulators should engage in risk-based, value-added audits instead of checklist reviews.

I can’t help but see this as a shot at the SEC’s investment adviser cybersecurity examination module, publicly released in April 2014 to help advisers prepare for regulatory exams in this area. As Bob Plaze notes here, a one-size-fits-all checklist could be punitive for smaller firms that can’t afford to keep up.

8.     Crisis response is an essential component to an effective cybersecurity program.

Needless to say? SIFMA also says explicitly here what it merely implies in No. 1: “Both firms and their clients are the victims when breaches or incidents occur.”

9.     Information sharing is foundational to protection, must be limited to cybersecurity purposes, and must respect firms’ confidences.

While SIFMA appreciates the guidance the Justice Department and the Federal Trade Commission have recently given to assuage antitrust concerns associated with inter-firm information sharing to fight computer crime, more such assurances are always better. Put another way, don’t replace one regulatory concern (cybersecurity) with another (antitrust liability).

10. The management of cybersecurity at critical third parties is essential for firms.

Keeping a close watch on third-party vendors is a crucial cybersecurity issue for all businesses. SIFMA would like some help from the government on this huge job: “Regulators should increase their coverage of third parties and put pressure on these third parties to meet the regulatory expectations of the financial services firms that they serve.”


Be careful out there.

Regulation From All Sides?--The FCC and FTC Tag-Team Privacy and Data Security

The U.S. Federal Trade Commission usually gets much of the glory for policing privacy and data security issues. For example, just a few months ago the FTC achieved a settlement requiring Fandango and Credit Karma to establish comprehensive data security programs and biennial security assessments following charges that the companies misrepresented to consumers the level of security of their mobile apps and failed to secure the transmission of consumers’ sensitive personal information. And who could forget the FTC’s Google Buzz settlement from 2011?

But recently the FTC has been sharing the privacy and data security spotlight with a different agency—the U.S. Federal Communications Commission. What?

In a post late last year, Jedidiah Bracy wondered if the FCC was becoming envious of the FTC’s enforcement role in the privacy arena. He speculated that we’ll see more jurisdiction-sharing between these two federal agencies in this area over time. 

I think Jedidiah is right. 

Exhibit A: Last fall, the FCC announced its very first data security enforcement action. The full text of the FCC notice proposing the fine is linked here. In this case, the FCC proposed a $10 million fine against two telecommunications companies, TerraCom and YourTel, for alleged violations of provisions of the Communications Act and FCC rules that require companies to protect the privacy of phone customers’ personal information. According to an FCC announcement, “[t]he companies allegedly breached the personal data of up to 305,000 consumers through their lax data security practices and exposed those consumers to identity theft and fraud.”  The data at issue were the social security numbers, names, addresses, driver’s license numbers, and other sensitive information of low-income consumers who provided the data to establish eligibility for Lifeline telephone services. The personal information was allegedly exposed to public view on the Internet (and apparently discovered by investigative reporters) without any password protection. The harm was compounded when the companies allegedly failed to notify all potentially affected customers of the breach. 

The Communications Act requires telecommunications carriers to protect the confidentiality of consumer “proprietary information,” and requires telecommunications carriers’ practices related to providing communication services to be “just and reasonable.” According to the FCC, TerraCom and YourTel violated these requirements. Among other things, the companies failed to employ reasonable data security practices to protect consumer proprietary information and misrepresented their data security practices in their privacy policies.

In addition to being the FCC’s opening salvo in the data security area, this recent action is the largest proposed privacy fine in the FCC’s history. 

Exhibit B: Just over a month earlier, the FCC adopted a settlement with Verizon, in which Verizon agreed to pay a $7.4 million fine to settle an FCC investigation of allegations that Verizon used its customers’ personal information when tailoring marketing campaigns without first providing notice and obtaining customer consent (as required by FCC rules implementing the Communications Act).

The good news is these cases don’t mean that all companies must add the FCC to the list of potential regulators that may bring privacy and data security enforcement actions against them. For one thing, both the TerraCom/YourTel and Verizon enforcement actions involve telecommunications companies otherwise subject to the jurisdiction of the FCC. Not every business falls within the scope of the Communications Act—not by a long shot. 

But what I think these cases illustrate well is that the FCC sees itself as, among other things, a consumer protection agency. It shares this world view with the FTC. These two cases show us that, like the FTC, the FCC is willing to “go big” in the area of consumer privacy and data security for those companies where the FCC has a regulatory hook—that means wired and wireless telecommunications providers as well as cable, satellite, radio, and television companies. The FCC has some privacy and data security muscle that it is apparently ready, willing, and able to flex.

Photojournalist Has No Privacy Protection Act Claim Where Search Was Supported By Probable Cause

In a decision released this week, a panel of the Fourth Circuit affirmed the decision of the Eastern District of Virginia holding that a photojournalist had no claim under the federal Privacy Protection Act for a search of the journalist’s home conducted pursuant to a warrant, where law officers had  probable cause to believe the journalist was involved in a crime.

The plaintiff in Sennett v. U.S., No. 11-1421 (4th Cir. Jan. 30, 2012), was a photojournalist who routinely covered protests, political demonstrations, and acts of “grassroots activism” and published her images under the name “Isis.” In her complaint, she alleged that her work was published in the mainstream media as well as on her own blog and on other websites.

In April 2008, the plaintiff was covering what she believed to be a demonstration at the International Monetary Fund’s annual meeting at a hotel in Washington, D.C. Acting on a tip, she arrived at the scene at approximately 2:30 a.m. and videotaped the demonstration.

Ultimately, the protest became criminal, though the plaintiff claimed no knowledge of the protesters’ plan to destroy private property. The protesters entered the hotel lobby, set off firecrackers and pyrotechnics, threw paint-filled balloons, and shattered a large glass window, causing an estimated $200,000 or more in damage.

Officials with the FBI Joint Terrorism Task Force investigating the incident reviewed surveillance video from the hotel and noticed a woman wearing a light beret, black combat boots, and a dark backpack and carrying a small handheld camera, apparently photographing the incident. The woman was seen arriving at the same time as the protesters, standing outside the hotel with some in the group while other protesters entered the lobby, and leaving with or in the same general direction as the protesters. After watching video of earlier demonstrations and seeing a woman in similar clothing, and relying on two confidential informants, law officers identified the woman as the plaintiff.

Officials sought and received a warrant to search the plaintiff’s home and seize any items related to the IMF protest as well as clothing and virtually any device that would store video or photographs. Several items were seized, including a hard drive containing thousands of photos.

The plaintiff was never charged or arrested as a result of the investigation.

The plaintiff later filed a claim against the federal government and the officer who sought and obtained the search warrant alleging violations of the Privacy Protection Act, 42 U.S.C. § 2000aa et seq. 

Generally speaking, the PPA prohibits the federal government from conducting searches and seizing "any work product materials possessed by a person reasonably believed to have a purpose to disseminate to the public a newspaper, book, broadcast, or other similar form of public communication."  The law is designed to prevent, among other things, newsroom searches.

Congress enacted the PPA in response to a 1978 decision of the U.S. Supreme Court, Zurcher v. Stanford Daily, 436 U.S. 547, which held, essentially, that journalists have no more protection against unreasonable searches and seizures under the Fourth Amendment than do ordinary citizens. In Zurcher, the Supreme Court held that the Fourth Amendment did not prohibit the search of a newspaper office (the Stanford University student paper) for photos revealing the identities of people who assaulted police officers during a demonstration. This was so even though no one from the newspaper was suspected of involvement in the incident. 

However, the PPA does not give journalists unlimited protection against searches and seizures. Among the exceptions in the statute is the “suspect exception,” which the government relied on in Sennett. This exception provides that “police can avoid the constraints of the [statute] . . . when the person possessing the materials is a criminal suspect, rather than an innocent third party.”

The Fourth Circuit panel affirmed the lower court’s decision on summary judgment that officials had probable cause to believe, under the totality of the circumstances, that the plaintiff had committed a criminal offense relating to the hotel incident. For example, she arrived with the protesters at the hotel and left the scene with or in the same general direction as the protesters.  While there may have been an innocent explanation for the plaintiff’s actions at the hotel (she was covering the incident as a journalist), according to the Fourth Circuit, this did not eliminate the existence of probable cause under the governing totality of the circumstances test.

Moreover, while the plaintiff claimed that officials knew she was a photojournalist and failed to reveal this in the affidavit supporting their request for a search warrant, according to the Fourth Circuit panel, even if true this cannot destroy the existence of probable cause without more. Quoting the district court, “to accept [plaintiff’s] argument that her status as a photojournalist is a game changer in the probable cause analysis . . . is tantamount to doing what Congress declined to do, namely exclude journalists from the PPA’s ‘suspect exception.’”

While the plaintiff’s job explained her presence on the surveillance video, the court found that other facts permitted officers to reasonably conclude she was involved in the vandalism of the hotel.

Additionally, the fact that she was never charged did not defeat the existence of probable cause, which is judged at the time the search is conducted, not later.

Journalists and photographers should keep the Sennett case in mind when covering protests and demonstrations against the financial industry, some of which have allegedly turned criminal. While the PPA offers some protection from searches and seizures, the PPA does not immunize the media from searches where officers have probable cause to believe the journalists have committed or participated in criminal acts.  The Sennett case makes clear that, in the view of the Fourth Circuit panel, someone's status as a journalist does not automatically render him or her above suspicion in a criminal investigation.

Fourth Circuit Upholds Right to Publish Government Documents Containing SSNs

I’m going to devote a few posts over the next several weeks to some intriguing cases from 2010 that you might have missed.

One such case is a fascinating decision from the Fourth Circuit, Ostergren v. Cuccinelli, 615 F.3d 263 (2010), in which the Court found a Virginia statute making it unlawful to intentionally publish a person’s social security number over the Internet violated the First Amendment. Judge Duncan’s thoughtful and thorough analysis offers insight into how the Supreme Court’s holdings in Cox Broadcasting v. Cohn, Smith v. Daily Mail Publishing, and The Florida Star v. B.J.F., all hallowed First Amendment decisions affirming the right to publish freely available public information, ought to be applied in a digital age fraught with the risk of identity theft and intrusions upon personal privacy.

The plaintiff in Ostergren is a privacy advocate. One way in which she has chosen to spread her message is by publishing on her web site public land records that reveal the social security numbers of various public officials. Virginia began placing its land records online in the 1990s. Initially, clerks of court did nothing to redact social security numbers from these records. Subsequently, the Virginia legislature required attorneys who filed instruments for recordation to ensure that social security numbers were removed before filing.

In 2007, the legislature addressed the redaction of records already available online (original land records maintained in hard copy form are not redacted). However, the record in the case demonstrated that there is an approximately 3% error rate in the redaction process, which means that even after the process is complete, over a million online records can be expected to contain unredacted social security numbers. By 2008, 105 of Virginia’s 120 counties had completed the redaction process; those that had not finished continued to make all records available online.

Ostergren began advocating for reform in 2003 when she created her web site, and two years later she began her practice of publishing unredacted documents on that site. The controversy sparked by her web site led to the amendment of Section 59.1-443.2, which prohibited the intentional communication of a person’s social security number, to remove the exception for “records required by law to be open to the public.” After the Virginia Attorney General announced his intention to prosecute Ostergren under the amended statute, Ostergren brought suit under Section 1983, seeking to have the law declared unconstitutional under the First Amendment as applied to her publication of copies of public records lawfully obtained from the government.

The district court ruled in Ostergren’s favor and entered an injunction. On appeal, the Fourth Circuit affirmed the district court’s core holding under the First Amendment, while modifying the scope of its injunction.

The Fourth Circuit began by rejecting the categorical approach advanced by Virginia that social security numbers are unprotected speech that may be prohibited entirely. The Court held that “[g]iven her criticism about how public records are managed, we cannot see how drawing attention to the problem by displaying those very documents could be considered unprotected speech. Indeed, the Supreme Court has deemed such speech particularly valuable within our society.”

The Fourth Circuit then considered what level of scrutiny to apply to the Virginia statute’s regulation of protected speech. After a lengthy discussion of Cox Broadcasting, Daily Mail Publishing, and The Florida Star, the Court concluded that those decisions

make clear that Ostergren’s constitutional challenge must be evaluated using the Daily Mail standard. Accordingly, Virginia may enforce section 59.1-443.2 against Ostergren for publishing lawfully obtained, truthful information about a matter of public significance ‘only when narrowly tailored to a state interest of the highest order.’

Thus, strict scrutiny applied.

The Court then discussed the state’s interest in limiting the disclosure of social security numbers. After providing an extensive history of the development of social security numbers and the risk of their misuse, the Court concluded that “Virginia’s asserted interest in protecting individual privacy by limiting SSNs’ public disclosure may certainly constitute ‘a state interest of the highest order.’” However, the Court went on to hold that it need not decide the question because it concluded, in any event, that Virginia’s restriction at issue was not narrowly tailored to the asserted interest.

In examining the question of narrow tailoring, the Court noted that the case involved a different conception of privacy than that present in Cox Broadcasting and The Florida Star. Those cases proceeded from a notion of privacy premised on secrecy, namely shielding from public view the fact that one had been the victim of rape. In Ostergren, on the other hand, secrecy was not at issue in the sense that a person is not embarrassed or humiliated, nor is one’s reputation harmed, by the revelation of one’s social security number. Instead, the privacy concern rests on ensuring proper use of and control over sensitive information: if one’s social security number is revealed, unscrupulous persons may use it for identity theft, bank fraud, and so on.

The Court noted another difference from the Cox Broadcasting and The Florida Star cases in that in those cases the disclosure was unintentional and could easily have been prevented. In Ostergren, the Fourth Circuit noted that it is much more difficult to ensure that not one of the millions of land records placed online contains an unredacted social security number.

Based on this analysis, the Court concluded that Virginia’s prohibition was not narrowly tailored to its asserted interest. In particular, the Court found that while the First Amendment does not necessarily require that each and every original land record be redacted before Ostergren may be prohibited from publishing them online in unredacted form,

the First Amendment does not allow Virginia to punish Ostergren for posting its land records online without redacting SSNs when numerous clerks are doing precisely that. . . . Virginia could curtail SSNs’ public disclosure much more narrowly by directing clerks not to make land records available through secure remote access until after SSNs have been redacted.

The court noted further that when documents with social security numbers slipped through the redaction process unaltered, “we leave open whether under such circumstances the Due Process Clause would not preclude Virginia from enforcing section 59.1-443.2 without first giving Ostergren adequate notice that the error had been corrected.”

On the strength of this sound analysis, the Fourth Circuit affirmed the district court’s holding that enforcement of Section 59.1-443.2 against Ostergren for posting the Virginia land records on her website would violate the First Amendment.

However, the Court went on to vacate the injunction entered by the district court on the grounds that its scope was both too narrow and too broad in certain respects. First, the Court rejected Ostergren’s argument that the injunction should protect her publication of non-Virginia public records that she had posted on her web site. Second, the Court found the injunction was too narrow in that it applied only to Virginia land records of public officials and did not include those of private individuals. Third, the injunction failed to cover Virginia land records posted by Ostergren concerning non-Virginia public officials.

To my knowledge, this is the first case to examine this issue. Look for more disputes to arise under the Cox Broadcasting/Daily Mail Publishing/The Florida Star line of cases as concerns over privacy continue to clash with the public interest, embodied in the First Amendment, in permitting the publication of publicly available government records.

Ohio Appellate Court Affirms Summary Judgment for Radio Station on Defamation and False Light Claims by Political Candidate

A panel of the Court of Appeals for the Fifth Appellate District in Ohio has affirmed a lower court’s grant of summary judgment in favor of an Ohio radio station in a defamation and false light invasion of privacy case involving a former candidate for judicial election. The Fifth District’s opinion in Christiansen v. WCLT et al. is linked here.

Shortly before the November 2008 general election, radio station WCLT (Newark, Ohio) aired and posted to its website a political editorial in which the station’s general manager expressed his opinion that two of three candidates were inappropriate for the position of Domestic Relations Court Judge. One of the two candidates quickly sought an ex parte temporary restraining order to enjoin the editorial from further distribution (which was later denied) and filed a defamation complaint. Later, because certain of the statements the plaintiff contended were defamatory were by her own admission literally true, the plaintiff amended her complaint to also allege a claim for false light invasion of privacy. 

The statements in the editorial that the plaintiff challenged were these:

In July of 2007 a police report alleging assault was filed with the Newark Police Department against [the plaintiff]. In the report she is accused of striking a person in a courthouse elevator. She has also had several complaints concerning her behavior filed with the Ohio Supreme Court’s disciplinary counsel. 

The plaintiff admitted the statements were literally true, but claimed that the statements improperly created the inference that she had been charged with assault and disciplined by the Ohio Supreme Court’s disciplinary counsel – neither of which had happened. (The Fifth District’s opinion includes the full text of the editorial.)

On cross motions for summary judgment, the trial court denied the plaintiff’s motion and granted the radio station’s motion for summary judgment, finding that (1) the allegedly defamatory statements were not made with actual malice because the defendant believed them to be true (indeed, the plaintiff admitted they were literally true), (2) the statements were protected opinion, and (3) the statements could be construed as non-defamatory.

On appeal, the Fifth District, in a 2-1 decision, denied each of the plaintiff’s five assignments of error by the trial court. The court held that:

  • The lower court had not committed error by finding the allegedly defamatory statements to be literally true. 
  • The lower court properly applied the “innocent construction” rule to the statements. This rule requires that when an allegedly defamatory statement is subject to two interpretations, one defamatory and one not, the court must apply the non-defamatory meaning. 
  • The trial court did not err by finding that the factual statements made in the editorial were true and the rest of the editorial was protected opinion. 
  • The trial court properly held that the statements were not made with actual malice – knowledge of falsity or reckless disregard for the truth – because the statements were literally true. Actual malice could not be inferred from the plaintiff’s evidence of common-law malice or personal animosity.
  • The trial court properly disposed of the plaintiff’s false light claim on the same basis as her defamation claim, as both causes of action require the plaintiff to prove actual malice. Since the court affirmed the finding that the statements were literally true, the plaintiff could not prove actual malice.

The appellate decision represents an important victory affirming the right of news organizations and others to engage in political speech during election campaigns.

Pyrrhic Victory in Convertino Case?

We have closely followed the twists and turns in Detroit Free Press reporter David Ashenfelter's efforts to avoid being forced to reveal his sources in the civil action against the Department of Justice brought by former federal prosecutor Richard Convertino.  This spring, a federal judge in Michigan allowed Ashenfelter to invoke his rights under the 5th Amendment in order to avoid testifying under oath about his sources.

Last week, the collateral damage from Convertino's legal crusade continued to spread.  This time, Convertino was seeking some 736 DOJ documents that he claimed would help him identify the DOJ employee who presumably leaked information to Ashenfelter about the investigation into Convertino.

In a loss for Convertino that, ironically, also constitutes a loss for media interests, D.C. federal district court judge Royce Lamberth ruled last week that all 736 documents were protected from disclosure by a variety of privileges, including the deliberative process privilege.  In addition, in the same opinion, Judge Lamberth held that private emails sent by federal prosecutor Jonathan Tukel from his DOJ account were covered by the attorney-client privilege and need not be produced.

As to the first part of the opinion, the deliberative process privilege is, all too often, the exception to the Freedom of Information Act that swallows the rule.  It covers “advisory opinions, recommendations and deliberations comprising part of a process by which governmental decisions and policies are formulated.”  The privilege is easily used as a shield by government agencies to protect from disclosure all variety of internal documents that might otherwise be subject to public disclosure.  While Judge Lamberth's opinion did not appear to break any new ground here, it certainly confirmed the many ways that government employees can make disclosure of records more complicated.

The second part of the opinion was more interesting, as it discussed an area of some interest to open government advocates across the country -- the status of private emails sent from a government account.  In this case, Convertino argued that Tukel should not be able to invoke the attorney-client privilege for these 36 emails -- which were sent to or from his personal attorney -- because, by being sent through the government's server, they were, per se, revealed to a third party.  Convertino asserted that because DOJ email policy explicitly gave the Department the right to read any DOJ email, Tukel had no reasonable expectation of privacy in these emails.

Judge Lamberth disagreed, holding that "[o]n the facts of this case, Mr. Tukel’s expectation of privacy was reasonable. The DOJ maintains a policy that does not ban personal use of the company e-mail. Although the DOJ does have access to personal e-mails sent through this account, Mr. Tukel was unaware that they would be regularly accessing and saving e-mails sent from his account."

The ruling clearly rolls back the widely held view that what is done on government computers is presumptively the property of the government, and therefore the people.  Journalists in states with public records acts may now find themselves fighting in court for what was once assumed to be clearly public -- emails sent from government accounts by government employees.

Publication of Hacked Climate Emails Raises Legal, Policy Questions

The release of hacked emails written by well-known climate scientists has been widely reported around the world, as those emails have raised questions about whether the science behind global warming has been overstated.

This New York Times blog post by the paper's science reporter caused a mini-furor of its own in the blogosphere.  In the post, Andrew Revkin writes of the hacked emails:

The documents appear to have been acquired illegally and contain all manner of private information and statements that were never intended for the public eye, so they won’t be posted here.

While Revkin's statement was rather unclear, some critics wondered whether this constituted a new Times policy, one that was not in effect, for example, when the Pentagon Papers were published or when various leaked documents from the Bush Administration were published.  In a follow-up to his post, Revkin points out that, from the beginning of the story, the Times has quoted the emails and provided links to other sites that have them posted.

Leaving aside the merits of the blogstorm in this case, the controversy does raise -- once again -- the question of how a media outlet should handle the receipt of documents that it has reason to believe were obtained illegally.

The answer, as a legal matter, is fairly simple.  Since the United States Supreme Court case of Bartnicki v. Vopper, the law is clear that when a media outlet lawfully obtains information from a third party -- even if the third party obtained it illegally -- publication of that material is protected by the First Amendment. In Bartnicki, which involved the broadcast of the contents of a cell phone call that had been illegally taped, the Court recognized the important government interest in protecting the privacy interests of the public at large, but held that, "[i]n this case, privacy concerns give way when balanced against the interest in publishing matters of public importance."

Of course, the answer would be different had the radio station that broadcast the tapes actually recorded the conversations itself.  The First Amendment does not immunize a reporter from his or her own illegal activity.  The answer might also be different if the disclosure did not concern a matter of public importance, or if the party releasing the material had some independent legal duty not to disclose it (as was the case in Boehner v. McDermott).

The policy question for media outlets is far more complicated.  Should the fact that the climate science emails contained "private" information give a media outlet legitimate pause before deciding to publish them?  Perhaps, though Andrew Revkin can probably tell you that deciding when to publish and when not to publish "private" information opens you up to charges of hypocrisy.

And yet, a blanket rule favoring publication may be problematic in some cases.  For example, to the degree any of these emails containing "private" information came from a government source, the Federal Privacy Act may be implicated.  Thus, if the government or a private individual pursues a Privacy Act action against whoever leaked the documents, the media outlet that received those documents may find itself being forced to reveal its sources (or facing the consequences for refusing to do so).

Check back later this week for a story from New Hampshire that implicates both Bartnicki and the developing case law on anonymous internet commentary.

Minnesota Court of Appeals Finds MySpace Posting Constitutes "Publicity Per Se"

A panel of the Minnesota Court of Appeals has ruled in an invasion of privacy case that a posting revealing certain private facts about a plaintiff constituted “publicity per se.”  Although the appellate court ultimately held that the lower court properly granted summary judgment on the invasion of privacy claims in favor of the defendants, the publicity aspect of the ruling is important because it demonstrates how “old media” publication torts are being applied to new social media.

The plaintiff in Yath v. Fairview Clinics, N.P., Docket No. 27-CV-06-12506, slip op. (June 23, 2009), alleged that a medical assistant in a clinic she attended “snooped” in the plaintiff’s medical files without a proper purpose and discussed sensitive personal information she found in the files with another employee of the clinic, one of the defendants in the appeal.  The plaintiff also claimed that the employee-defendant, the medical assistant, and others published a MySpace web page about the plaintiff that publicized private information obtained from her medical records—according to the MySpace page, the plaintiff had a sexually transmitted disease, recently cheated on her husband, and was addicted to plastic surgery. 

The plaintiff sued the employee-defendant, the medical assistant, the clinic (on a vicarious liability theory), and one other person for, among other claims, invasion of privacy based on publication of private facts.  By the time the matter reached the Court of Appeals, only the employee-defendant and the clinic were still in the case.

The lower court had granted summary judgment in favor of the two defendants on the invasion of privacy claim because the evidence showed that only a few people accessed the MySpace page in the 24 to 48 hours during which the page was live.

However, on review, the Court of Appeals held that the lower court had misapplied the law concerning “publicity” in invasion of privacy cases.  "Publicity" is a required element of the publication of private facts tort.

“Publicity,” for the purposes of an invasion-of-privacy claim, means that “the matter is made public, by communicating it to the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge.” In other words, there are two methods to satisfy the publicity element of an invasion-of-privacy claim: the first method is by proving a single communication to the public, and the second method is by proving communication to individuals in such a large number that the information is deemed to have been communicated to the public.

According to the appellate court, the lower court had incorrectly focused on the second prong of the publicity requirement—communication to a sufficiently large number of people—while ignoring the first prong.  Just as publication in a newspaper or a magazine of small circulation or in a radio broadcast would constitute “publicity,” so did the publication on MySpace in this case.  When information passes through a public medium like the Internet, the “publicity” requirement for invasion of privacy purposes is satisfied as soon as the information is disseminated.  “[T]he challenged communication here constitutes publicity under the first method, or publicity per se. . . . [Plaintiff’s] private information was posted on a public webpage for anyone to view.  This Internet communication is materially similar in nature to a newspaper publication or a radio broadcast because upon release it is available to the public at large.”

The court’s ruling means, in effect, that the number of people who actually view a publicly available website is not relevant to the “publicity” requirement for invasion of privacy purposes.  The “publicity” occurs as soon as the information is made publicly available for anyone to view on the Internet.  However, as the appellate court acknowledged, the number of people who view such a website may be relevant when calculating the damages the plaintiff suffered (i.e., the more people who view the website, the greater the potential damages).

In reaching its ruling, the Court of Appeals took pains to put invasion of privacy in the context of our “Information Age”:

That the Internet vastly enlarges both the amount of information publicly available and the number of sources offering information does not erode the reasoning leading us to hold that posting information on a publicly accessible webpage constitutes publicity.  If a late-night radio broadcast aired for a few seconds and potentially heard by a few hundred (or by no one) constitutes publicity as a matter of law, a maliciously fashioned webpage posted for one or two days and potentially read by hundreds, thousands, millions (or by no one) also constitutes publicity as a matter of law.

It is true that mass communication is no longer limited to a tiny handful of commercial purveyors and that we live with much greater access to information than the era in which the tort of invasion of privacy developed.  A town crier could reach dozens, a handbill hundreds, a newspaper or radio station tens of thousands, a television station millions, and now a publicly accessible webpage can present the story of someone's private life, in this case complete with a photograph and other identifying features, to more than one billion Internet surfers worldwide.  This extraordinary advancement in communication argues for, not against, a holding that the MySpace posting constitutes publicity.

The Pioneer Press has additional commentary on the case.

Missouri Court of Appeals Recognizes False Light Invasion of Privacy

In October 2008, we reported that the Florida Supreme Court rejected the false light invasion of privacy tort as a viable claim for relief under Florida law.  On December 23, 2008, the Missouri Court of Appeals went in the opposite direction and held that Missouri does recognize false light invasion of privacy as an actionable tort.

In Meyerkord v. Zipatoni Co., the Missouri Court of Appeals vacated and remanded the trial court's dismissal of a plaintiff's claim alleging that the defendant company, Zipatoni, had cast the plaintiff in a false light by failing to remove him as the registrant of a certain website.  The plaintiff was a former employee of Zipatoni (a marketing firm) and was listed as the registrant for the company's account with a domain registrar.  Three years after the plaintiff left the company, Zipatoni registered a certain marketing website through that account, which listed the plaintiff as the website's registrant even though he had nothing to do with the creation, registration, or marketing of the website.  The website was apparently used during a "viral marketing campaign" related to Sony's PlayStation Portable.  The website and those associated with it, including the plaintiff, became the subject of "concern, suspicion, and accusations" in the online community.

The plaintiff filed a complaint against Zipatoni alleging false light invasion of privacy.  The complaint claimed that the content of the website was "publicly attributed" to the plaintiff and that his "privacy had been invaded, his reputation and standing in the community had been injured, and he has suffered shame, embarrassment, humiliation, harassment, and mental anguish."  The trial court dismissed the complaint because no Missouri court had previously recognized the false light invasion of privacy tort.

In reaching its decision to vacate and remand the case, the Court of Appeals reasoned that Missouri had long recognized a cause of action for "invasion of privacy," the umbrella term for four different torts:  intrusion on seclusion, misappropriation of likeness, public disclosure of private facts, and false light.  See, e.g., Restatement (Second) of Torts, Sections 652A-652E.  The Court of Appeals also acknowledged that Missouri courts had never explicitly recognized a cause of action for false light.  However, the Court of Appeals also reasoned that the Missouri Supreme Court left open the possibility that false light could be recognized in the future.  In Sullivan v. Pulitzer Broadcasting Co., 709 S.W.2d 475 (Mo. 1986) (en banc), a decision that declined to recognize false light based on the facts presented, the Missouri Supreme Court wrote, "[i]t may be possible that in the future Missouri courts will be presented with an appropriate case justifying our recognition of the tort of 'false light invasion of privacy.'  The classic case is when one publicly attributes to the plaintiff some opinion or utterance, whether harmful or not, that is false, such as claiming that the plaintiff wrote a poem, article or book which plaintiff did not in fact write."

In Meyerkord, the Court of Appeals noted that a majority of jurisdictions that have confronted the issue of whether to recognize false light as a separate actionable tort have chosen affirmatively to recognize the tort (the court cited 27 jurisdictions), whereas a minority of jurisdictions have refused to recognize false light (the court cited 8 jurisdictions).  According to the Court of Appeals, the jurisdictions that have rejected false light have done so primarily due to three concerns:  (1) the protection provided by false light duplicates or overlaps interests already protected by defamation, (2) recognizing false light would increase tension with the First Amendment to the extent false light allows recovery beyond that allowed for defamation, and (3) recognizing false light would require courts to consider two claims for nearly identical relief.  The Meyerkord decision addressed each of these concerns as follows:

  • False light is "sufficiently distinguishable" from defamation.  Under defamation law, "the interest sought to be protected is the objective one of reputation, either economic, political, or personal, in the outside world."  On the other hand, the interest protected by false light "is the subjective one of injury to the person's right to be let alone."  Additionally, the marketplace of ideas operates to alleviate defamation injuries, while the marketplace intensifies the injuries that flow from false light.
  • The First Amendment concerns attendant to recognition of false light are lessened by adopting a heightened standard of fault, such as actual malice, that is, knowledge of falsity or reckless disregard for the truth.
  • The heightened actual malice standard also alleviates concerns related to judicial economy.  Moreover, the requirement that a plaintiff must prove the complained-of statement is "highly offensive to a reasonable person" decreases the possibility of excessive litigation over false light claims.

The Court of Appeals wrote:

As noted earlier, the Missouri Supreme Court has considered the issue of whether Missouri courts should adopt the tort of false light invasion of privacy, but the Supreme Court concluded it had not yet been confronted with a factually suitable case. We now find that the facts of the present case properly present the issue of false light invasion of privacy and we hold that a person who places another before the public in a false light may be liable in Missouri for the resulting damages. In recognizing this cause of action, we note that as a result of the accessibility of the internet, the barriers to generating publicity are quickly and inexpensively surmounted. . . . Moreover, the ethical standards regarding the acceptability of certain discourse have been diminished. Thus, as the ability to do harm grows, we believe so must the law's ability to protect the innocent.

In so ruling, the Court of Appeals adopted the Restatement (Second) of Torts' formulation of false light invasion of privacy, which requires a plaintiff to show: (1) the false light in which the plaintiff was placed would be highly offensive to a reasonable person, and (2) the defendant had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the plaintiff would be placed.  The Court of Appeals expressly adopted the actual malice standard for all false light claims, whether they involve public officials, private individuals, public matters, or private matters.

Turning to the facts of the case, the Court of Appeals determined that the plaintiff had adequately alleged that the viral marketing website was publicly attributed to him and that the misrepresentation was highly offensive to a reasonable person.  However, the plaintiff had failed to adequately allege the actual malice standard of fault, so the trial court had not erred in dismissing the complaint.  The Court of Appeals vacated the trial court's decision and remanded the case to allow the plaintiff an opportunity to amend his complaint and plead actual malice.

It is important to recognize that the Meyerkord decision was issued by the Court of Appeals, which is an intermediate state appellate court.  The Missouri Supreme Court has not yet had an opportunity to rule definitively that Missouri courts recognize false light invasion of privacy as a separate actionable tort.  The juxtaposition of the Meyerkord case and the Rapp case out of Florida also underscores that the status of invasion of privacy torts, and particularly the status of the false light invasion of privacy tort, remains fluid across U.S. jurisdictions.  We will keep you apprised as other states address this issue.

Florida Supreme Court Rejects False Light

The Supreme Court of Florida yesterday issued two opinions holding that Florida law does not recognize the false light invasion of privacy tort.  These outcomes constitute significant wins for media defendants in a state where the existence of false light as a viable state-law claim has been hotly debated. 

Rapp v. Jews for Jesus, Inc. involved statements made by the plaintiff’s stepson in a newsletter that suggested the plaintiff had joined or was a believer in the Jews for Jesus philosophy.  Essentially, the plaintiff argued in the underlying proceedings that the statements, while literally true, created a false impression of her, and she brought claims for false light invasion of privacy, defamation, and intentional infliction of emotional distress based upon the statements.

The court rejected the plaintiff’s position following a thorough comparison of the elements of and interests at stake in false light and defamation claims.

We once again acknowledge that it is our duty to ensure the “protection of the individual in the enjoyment of all of his inherent and essential rights and to afford a legal remedy for their invasion.” However, because the benefit of recognizing the tort, which only offers a distinct remedy in relatively few unique situations, is outweighed by the danger of unreasonably impeding constitutionally protected speech, we decline to recognize a cause of action for false light invasion of privacy.

On the same day it released Rapp, the Supreme Court of Florida also released Anderson v. Gannett. Like Rapp, Anderson involved false light invasion of privacy and defamation claims based on the same set of facts.  The question before the court in Anderson was the applicable statute of limitations for false light claims, but the court dismissed the question as moot given its holding in Rapp.

False light is one of the four branches of the common-law invasion of privacy tort.  In states that recognize false light as a viable claim, a plaintiff must generally show that a defendant disseminated some highly offensive false publicity about an identified person with knowledge of or reckless disregard for the falsity of the statement.  The elements are derived from the Restatement (Second) of Torts, Section 652E.

With these two decisions, Florida joins a number of other states in rejecting false light as a permissible state-law claim.
