Physician Practices Be Alert: You Might Be Violating HIPAA If You Produce Medical Records In Response To State Court Subpoenas

Over the past several months, my experiences with physician practices have made me realize that many practices do not understand how HIPAA applies to subpoenas for medical records.  More worrisome, I suspect that many practices nationwide routinely violate HIPAA when they receive a subpoena.

Here’s what I’ve observed:  Practices receive state court subpoenas that are signed by lawyers and that demand the production of medical records, and the practices automatically assume they must produce the records.  This is a dangerous assumption—the production of the records may very well violate HIPAA.

Here’s what HIPAA requires in general terms:  If a practice receives a state court subpoena for medical records that is signed by a lawyer, the practice should not produce the records unless (1) the practice also receives (a) a court order requiring production, (b) a HIPAA “qualified protective order” that’s been entered in the lawsuit, (c) a HIPAA-compliant authorization from the patient that authorizes the disclosure demanded by the subpoena, or (d) certain other items designated by HIPAA’s rule concerning subpoenas, or (2) the practice takes certain additional actions required by HIPAA’s rule for subpoenas.

If a practice receives such a subpoena without receiving any of these “additional” items or taking these “additional” actions, the practice will likely violate HIPAA if the records are produced.

Here’s what practices should do.  Because this area of HIPAA is somewhat complex and difficult for practices to navigate on their own, practices should consult with legal counsel when they receive such a subpoena.  Legal counsel can advise whether HIPAA permits the disclosure, whether the practice needs to object to the subpoena, and whether other actions should be taken.  On numerous occasions, we have reviewed such subpoenas, determined that they did not comply with HIPAA, and sent a letter objecting to the subpoena, and the practice never heard from the parties again. 

Takeaway:  If you receive a state court subpoena signed by a lawyer demanding the production of medical records, do not automatically produce the medical records.

BYOD: Five Things To Consider When Creating Your Policy

“BYOD” or “bring your own device” (also known as the “consumerization of IT”) is a fact of life in today’s workplace. BYOD refers to the practice of using personally owned devices—like smartphones, tablets, and laptops—for work purposes and allowing these devices to connect to company networks and applications. According to a Gartner study released in late 2014, 40% of U.S. employees working for large companies use personally owned devices for work purposes. Of those who reported using personally owned devices, only 25% were required by their employers to do so.  And of the 75% who chose to use their personally owned devices for work, half did so without their employers’ knowledge.

If that last statistic doesn’t alarm you a little, it should. 

BYOD can be great for productivity, but it also creates risk for companies. Why? Because personally owned devices are not managed or controlled by company IT, and security of the device is in the hands of the employee. In other words, these devices can be a source of company data loss or data leakage. For example, if an unencrypted personal laptop with company data on it gets lost or stolen, you may have a data breach on your hands. 

So how do you address these concerns?  Start with a written, employee-acknowledged BYOD policy. 

Here are five things to consider as you develop your policy:

1.      Start by building an interdisciplinary team to create the policy. The team should include IT, human resources, legal, compliance, and employees who use their personally owned devices. BYOD is not just an IT issue but also a legal and business issue. Different perspectives will help produce a policy that fits your organization’s needs and that employees can actually follow.


2.      Develop the goals of the policy. Security should be a goal, but productivity is also important. Cost savings may also be an objective. These and other goals may be in tension at times. In the end, you want to develop a policy that strikes the right balance for your company. Consider a BYOD policy that ensures the only way enterprise data flows into or out of a device is through enterprise systems (e.g., avoid emailing business information to employee personal email accounts); a minimal sketch of one such control appears after this list.


3.      Determine which employees are covered and how they are covered. There may be different policies for different types of employees based on job type or function. For example, the policy for exempt employees may be different than the one for non-exempt employees. Consider whether all employees need to be able to use tablets (for example) to access corporate data for work purposes. Be sure to consult with counsel about employment laws and regulations that may apply in this area.


4.      Decide which devices/uses are permitted and how the policy applies to each. For example, the BYOD team should conduct “app reviews” to decide what apps are okay for business use and what apps are not allowed.  Particular types of smartphones, tablets, or laptops may not meet the company’s security requirements and, if not, should not be permitted. Also, the policy for smartphones may be different than the policy for laptops because they are different devices used in different ways and may pose different security risks.


5.      Build in training so that each employee knows the policy and how it applies to him or her. In the end, security is about people just as much as it is about technology.
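
Picking up item 2 above: here is a minimal sketch, in Python, of one control that keeps enterprise data flowing only through enterprise systems: a mail-gateway check that flags outbound messages addressed to personal (non-corporate) domains. The domain list, function name, and message format are illustrative assumptions, not any particular DLP product’s API.

```python
# Illustrative mail-gateway check: flag outbound recipients whose mail
# domain is not on the corporate list. Domains here are made up.
CORPORATE_DOMAINS = {"example.com"}  # assumed company mail domain


def flag_personal_recipients(recipients: list[str]) -> list[str]:
    """Return recipient addresses outside the corporate mail domains."""
    return [
        addr
        for addr in recipients
        if addr.rsplit("@", 1)[-1].lower() not in CORPORATE_DOMAINS
    ]


if __name__ == "__main__":
    outbound = ["colleague@example.com", "me@personal-mail.example.org"]
    for addr in flag_personal_recipients(outbound):
        print("Review under BYOD policy:", addr)
```

A real deployment would live in the mail gateway or an MDM/DLP tool rather than a standalone script, but the principle is the same: make the enterprise path the only path.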


This isn’t an exhaustive list of considerations, but it will help get you started crafting a BYOD policy tailored to your company. It’s important that you do so, if you haven’t already.


You Are What You Keep

Suffering a data breach is bad enough. As often as it appears to happen, companies that are affected by a breach still shoulder a considerable burden. Management must stop the trains to identify the cause and scope of the breach—and then prepare for the aftermath. Lawyers are involved. The company’s brand is at risk. And the costs—employee time, legal fees, security consultants—quickly escalate.

But what if you determine that your company didn’t really need the information that was exposed? Suppose you find out that the breach involved a file that contained drivers’ license numbers or even credit card information, but your company had virtually no administrative need for that information? Or suppose the data pertained to transactions by young adults 7 years ago – and it is highly unlikely that any of the information is still relevant (much less accurate). This is the kind of discovery that adds insult to injury. Your company is forced to stop and invest significant funds to respond to a breach of data it no longer has any use for in its marketing or operational efforts.

The surest way to avoid this problem is to review and assess the way you currently collect, retain, and store information. Here are a few items to consider:

·       Collection – Do you really need all of the personal information that you are collecting from consumers? Review your intake procedure and revise it to collect only what you need for operational or marketing purposes. Also, are you even aware of all of the different portals through which your company may be collecting data from consumers? Identify each of those portals so you can be confident your assessment is complete. Do you have someone in your organization responsible for tracking the types of data you are collecting and the different processes through which you are collecting the data?

·       Retention – How long are you storing personal information? And for what purposes? Are your practices consistent with PCI standards? What is your current retention policy, and are you following it? There are federal and state laws that may govern the retention, disposal, or destruction of your data. Be familiar with those laws. Within the confines of applicable laws, be sure you are not holding on to unnecessary or outdated data that would cause you intolerable frustration in the event it was breached. Do you have someone in your organization responsible for overseeing retention and disposal? (A short script sketch after this list illustrates one way to flag stale records for review.)

·       Third Party Partners and Vendors – If you are sharing personal information with other parties (which, of course, needs to be disclosed to consumers in your privacy policy), be sure that your agreements with those parties contain appropriate safeguards. Are you requiring your vendors to secure personal information and prohibit the disclosure of that information? What happens in the event of a breach? Who bears the cost of notification? Are your vendors required to indemnify you if their mistakes lead to actions against your organization?
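
As promised in the Retention item above, here is a minimal sketch, assuming each record carries a simple collection date: it flags anything held past an illustrative retention window so a person can review it against applicable law before disposal. The field names and the seven-year cutoff are assumptions for illustration, not legal guidance.

```python
# Illustrative stale-record review: surface records held longer than an
# assumed retention window for human review before disposal.
from datetime import date, timedelta

RETENTION_PERIOD = timedelta(days=7 * 365)  # assumed 7-year window


def flag_stale_records(records: list[dict], today: date) -> list[dict]:
    """Return records collected before the retention cutoff."""
    cutoff = today - RETENTION_PERIOD
    return [r for r in records if r["collected_on"] < cutoff]


if __name__ == "__main__":
    sample = [
        {"id": 1, "collected_on": date(2007, 6, 1)},    # stale
        {"id": 2, "collected_on": date(2014, 11, 15)},  # within the window
    ]
    for record in flag_stale_records(sample, today=date(2015, 3, 1)):
        print("Review for disposal:", record["id"])
```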


There is a simple rule that applies in a data breach: You are what you keep. So be careful with what information you currently collect and retain. Talk to your lawyer about whether certain information that you may consider to be “stale” may be properly and legally disposed of. And, more importantly, consider revising your practices so you don’t continue to collect or retain stale or unnecessary information going forward.

SEC Releases Results of Cybersecurity Exam Sweep

Ed. Note: This entry is cross posted from Cady Bar the Door, David Smyth's blog offering Insight & Commentary on SEC Enforcement Actions and White Collar Crime.

We’re behind on this, but better (a little bit) late than never. Last month the SEC’s Office of Compliance Inspections and Examinations released the first results of its Cybersecurity Examination Initiative, announced in April 2014 (and discussed here). As part of the initiative, OCIE staff examined 57 broker-dealers and 49 investment advisers to better understand how these entities “address the legal, regulatory, and compliance issues associated with cybersecurity.”

What the Exams Looked For

In the exams, the staff collected and analyzed information from the selected firms relating to their practices for:

·       identifying risks related to cybersecurity;

·       establishing cybersecurity governance, including policies, procedures, and oversight processes;

·       protecting firm networks and information;

·       identifying and addressing risks associated with remote access to client information and fund transfer requests;

·       identifying and addressing risks associated with vendors and other third parties; and

·       detecting unauthorized activity.

Importantly, the report is based on information as it existed in 2013 and through April 2014, so it’s already somewhat out of date.


The Good News

The report includes some good news about how seriously the SEC’s registered entities are taking cybersecurity.

·       The vast majority of examined broker-dealers (93%) and investment advisers (83%) have adopted written information security policies.

·       The vast majority of examined broker-dealers (93%) and investment advisers (79%) conduct periodic risk assessments to identify cybersecurity threats, vulnerabilities, and potential business consequences.

·       The vast majority of examined firms report conducting firm-wide inventorying, cataloguing, or mapping of their technology resources. Smart.

·       Many firms are utilizing external standards and other resources to model their information security architecture and processes. These include standards published by the National Institute of Standards and Technology (“NIST”), the International Organization for Standardization (“ISO”), and the Federal Financial Institutions Examination Council (“FFIEC”).

Encouraging! But the report didn’t bring all good tidings.


The Bad News

Here are some of the less auspicious facts:


·       88% of the broker-dealers and 74% of the advisers reported being the subject of a cyber-related incident.

·       Most of the broker-dealers (88%) require risk assessments of their vendors, but only 32% of the investment advisers do. 

·       Related to that, most of the broker-dealers incorporate requirements relating to cybersecurity risk into their contracts with vendors and business partners (72%), but only 24% of the advisers incorporate such requirements. Fewer of each maintain policies and procedures related to information security training for vendors and business partners authorized to access their networks.

·       A slight majority of the broker-dealers maintain insurance for cybersecurity incidents, but only 21% of the investment advisers do.


The Rest

Almost two-thirds of the broker-dealers (65%) that received fraudulent emails seeking to transfer funds filed a Suspicious Activity Report with FinCEN, as they’re likely required to do. The report then notes that only 7% of those firms reported the incidents to other regulators or law enforcement. It’s not clear to me why the SEC would expect those other reports: with the SAR obligations in place, the firms probably, and reasonably, think all the necessary reporting has been done once the SAR has been filed. Also, these firms’ written policies and procedures generally don’t address whether they are responsible for client losses associated with cyber incidents. Along these lines, requiring multi-factor authentication for clients and customers to access accounts could go a long way toward shifting responsibility for those losses onto the users.

But don’t take my word for it. Read the report yourself, linked above and here.

FTC Commissioner Comments on Consumer Privacy Bill of Rights

Last week, we posted about the Consumer Privacy Bill of Rights “discussion draft” released by the Obama Administration. On Thursday, March 5, at the annual U.S. meeting of the International Association of Privacy Professionals (which I attended), FTC Commissioner Julie Brill answered questions about her take on the bill and other policy issues. Here are just a few comments from that discussion that merit a follow-up post:

  • Commissioner Brill stated in no uncertain terms that the draft bill is not protective enough of consumers. At various times, she said there are “serious weaknesses in the draft,” “there’s no there there,” it needs “some meat,” and “where are the boundaries?” She mentioned a specific example of a more consumer-protective approach relating to consent to certain data practices. She indicated she would like to see the bill require affirmative express consent of the individual for (a) material retroactive changes to a privacy statement and (b) use of sensitive information out of context of the transaction in which it was collected.
  • Although Section 402 of the draft bill provides that the FTC’s unfair and deceptive authority under Section 5 of the FTC Act remains intact, Commissioner Brill expressed concern that it is not clear enough in the bill that the FTC would retain its full authority to enforce the “common law” of privacy as developed in its prior enforcement actions. 
  • Will anything come of the bill this Congress? Commissioner Brill said she expects it’s unlikely given other legislative priorities at this time.

Commissioner Brill commended the Administration for grappling with tough issues and working to improve privacy overall. However, hers is one more voice calling for additional work on broad federal consumer privacy legislation.

Consumer Privacy Bill of Rights Act - A Mixed Bag

Late last week, President Obama released a “discussion draft” of the Administration’s long-awaited Consumer Privacy Bill of Rights Act.  At first blush, the results are a mixed bag:  some good, some not so good, much work among stakeholders left to be done.

It didn’t take long for consumer advocates, and even one FTC Commissioner, to say the draft legislation doesn’t go far enough.  The Internet has been rife with posts this week about the bill’s problems and shortcomings.  In summary, for most, the bill landed like a lead balloon.

Still, the Administration released the bill as a “discussion draft”—signaling the draft legislation is just a step and an invitation for further conversation.  For a measured perspective considering the bill through this lens, read former Obama Administration official Nicole Wong’s thoughtful article.

While it’s certainly far from perfect, my take is that the bill isn’t all bad.  Here are just a few initial pros and cons to the bill that I’ve identified (in no particular order):

  • Pro:  many principles are based on fair information practices familiar from existing federal statutes; flexibility and consideration of measures that are reasonable in context; availability of safe harbor protections; exceptions for de-identified data; delayed enforcement to allow parties time to adjust to the law’s requirements.
  • Con: loosely defined requirements, definitional uncertainty, preemption and enforcement concerns.

One item of note is that the security provisions in Section 105(a) codify, at a very high level of generality, some of the principles that we’ve been advising our clients about:  for example, taking steps to identify internal and external risks to privacy and security of personal data and implementing and regularly assessing safeguards to control those risks.  (Of course, it’s a separate thing altogether to have recommendations take on the force of law.)

In the end, it may have been inevitable that this bill would be a disappointment to some.  After all, the public has been waiting on it since 2012.  During that time, there have been many, many high-profile breaches of consumer information.  The appetite for more privacy and security protections has only grown over time.  But it will take a delicate balance to provide desired protections while at the same time making legal requirements workable for both consumers and the businesses offering products and services consumers want.

To be sure, there will be more to come from the Consumer Privacy Bill of Rights—stay tuned.

Two-Factor Authentication May Be Coming to a Bank Near You

Ed. Note: This entry is cross posted from Cady Bar the Door, David Smyth's blog offering Insight & Commentary on SEC Enforcement Actions and White Collar Crime.

When I was at the SEC and online broker-dealers’ customers were the victims of hacking incidents, I used to wonder, why don’t the broker-dealers require multi-factor authentication to gain access to accounts? It was a silly question. I knew the answer. Multi-factor authentication is a pain and nobody likes it.

Do you know what it is? Here’s what Wikipedia says, so it must be true:

Multi-factor authentication (MFA) is a method of computer access control which a user can pass by successfully presenting authentication factors from at least two of the three categories:

·       knowledge factors (“things only the user knows”), such as passwords

·       possession factors (“things only the user has”), such as ATM cards

·       inherence factors (“things only the user is”), such as biometrics.

The idea is, hackers might figure out your password, but they won’t be able to figure out a number that changes every 30 seconds on a card you carry or on your cell phone. They won’t be able to replicate your fingerprint. That’s the idea, anyway. Brokers and banks have been loath to require multi-factor authentication because it’s inconvenient and customers often hate it.
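
For the technically curious, that rotating number is typically a time-based one-time password (TOTP, standardized in RFC 6238). Here is a minimal, self-contained Python sketch of how such a code is derived; the example secret and the 30-second/6-digit parameters are common defaults, shown for illustration only.

```python
# Minimal TOTP (RFC 6238) sketch: derive the 6-digit code that changes
# every 30 seconds from a shared secret. Standard library only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Return the current one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step           # 30-second intervals since epoch
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a common documentation example, not a real secret.
    print("Current code:", totp("JBSWY3DPEHPK3PXP"))
```

Because the server and the user’s device each compute the same code from the shared secret and the current time, a stolen password alone is no longer enough to get into the account.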

But here comes Ben Lawsky, the Superintendent of New York’s Department of Financial Services, who just unveiled a number of proposals to increase cybersecurity at banks under his jurisdiction. One of these is to require that banks use multi-factor authentication. This move could take a lot of the economic pressure off banks that would like to implement this control for their customers but have been unwilling to do so for fear of losing those customers to rivals. If everybody has to do it, no bank risks losing customers by being the only one to impose it.

That’s not all Lawsky has in mind. His proposal also includes:

·       requiring senior bank executives to personally attest to the adequacy of their systems guarding against money laundering;

·       ensuring that banks receive warranties from third-party vendors that those providers have cybersecurity protections in place;

·       random audits of regulated firms’ transaction monitoring systems, meant to catch money laundering; and

·       incorporating targeted assessments of those institutions’ cybersecurity preparedness in its regular bank examinations.

Lawsky’s proposals could be a big deal. Stay tuned.

Big Announcement, Small UAS: FAA Launches Commercial Drone Proceeding

            Unless you have been completely disconnected from all media, you are probably already aware that on Sunday, February 15, 2015, the FAA announced the release of its long-awaited rules to govern commercial sUAS (small unmanned aircraft systems) operations in the United States. The FAA’s proposed sUAS rules arrived like a barely-late valentine or box of candy, with the recipients hoping to read loving prose and enjoy fresh, rich chocolates. At this point, of course, the rules are merely a proposed regulatory regime (as embodied in a document that is called a “Notice of Proposed Rulemaking” or “NPRM”), and it will surely take many months—probably a couple of years—for the rules to be finalized and adopted, and to go into effect. (Only then will we know for sure whether the valentine message was really a “dear John” letter or whether the candy was stale and half-eaten.) It is important to understand that, for now, the FAA’s current prohibition on commercial UAS operations remains in effect, except for operators that have obtained a Section 333 Exemption from the FAA. (To date, nearly 30 entities have received exemption grants from the FAA.)

            The proposed regulatory regime was described by the FAA on a February 15 press conference call as a “very flexible framework” that will “accommodate future innovation in the industry.” While industry stakeholders may ultimately disagree over just how flexible the proposed rules are or should be, stakeholders do generally agree that the FAA’s release of the NPRM is a big step (albeit somewhat overdue) in the right direction. You can access the FAA’s NPRM here. (The FAA also published a “fact sheet” as well as a short summary of the highlights of the proposed rules.)

Proposed sUAS Rules

            Among the various limitations that the FAA has proposed for commercial sUAS operations are the following (caveat: this is neither an exhaustive nor detailed list of all the operational limitations and requirements proposed in the NPRM):

  • Vehicles subject to the sUAS rules will be defined as aircraft that weigh less than 55 pounds (25 kg)
  • Only visual line-of-sight (“VLOS”) operations will be allowed; i.e., the small unmanned aircraft must remain within VLOS of the operator or visual observer (“VO”), if a VO is used (the proposed rules allow—but do not require—the use of a VO)
  • No person may act as an operator or VO for more than one unmanned aircraft operation at one time
  • Pilots of sUAS will be considered “operators.” Operators will be required to:
    • Pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center;
    • Be vetted by TSA (Transportation Security Administration);
    • Obtain an unmanned aircraft operator certificate with an sUAS rating (like existing pilot airman certificates, it will never expire);
    • Pass a recurrent aeronautical knowledge test every 24 months;
    • Be at least 17 years old;
    • Make available to the FAA, upon request, the sUAS for inspection or testing, and any associated documents/records required to be kept under FAA rules;
    • Report an accident to the FAA within 10 days of any sUAS operation that results in injury or property damage;
    • Conduct preflight inspections to ensure the sUAS is safe for operation.
  • At all times the small unmanned aircraft must remain close enough to the operator for the operator to be capable of seeing the aircraft with vision unaided by any device other than corrective lenses (the use of binoculars would not satisfy this restriction)
  • sUAS operations may not occur over any persons not directly involved in the operation
  • sUAS operations must occur during daylight hours only (official sunrise to official sunset, local time)
  • Small unmanned aircraft will be required to yield right-of-way to other aircraft, manned or unmanned
  • Small unmanned aircraft will be allowed to operate with a maximum airspeed of 100 mph (87 knots)
  • Small unmanned aircraft will be allowed to operate at a maximum altitude of 500 feet above ground level
  • sUAS operations will be permitted to occur only when conditions allow minimum weather visibility of 3 miles from the control station
  • Limitations in airspace classes:
    • No sUAS operations will be allowed in Class A (18,000 feet & above) airspace
    • sUAS operations will be allowed in Class B, C, D and E airspace only with ATC (Air Traffic Control) permission
    • sUAS operations in Class G airspace will be allowed without ATC permission

Many of these requirements dovetail with (or are at least similar to) the limitations, requirements, and restrictions that have been imposed by the FAA in its various Section 333 Exemption decisions. In fact, some of the rules proposed in the NPRM would be less restrictive and more flexible than those imposed on operators in certain Section 333 Exemption decisions.

“Micro” UAS and Model UAS

            The NPRM proposes a “micro” UAS classification that contemplates operations of small unmanned aircraft weighing up to 4.4 pounds (2 kg; small-scale sUAS and hence “micro”), only in Class G airspace, only during daylight hours, at altitudes no higher than 400 feet AGL. Micro UAS operations would be permissible over people not involved in the operation of the unmanned aircraft, provided the operator certifies he or she has the requisite aeronautical knowledge to perform the operations. Additional restrictions, as set forth in the NPRM, would apply to the proposed micro UAS classification.

With respect to hobbyists who fly model unmanned aircraft for recreational purposes, the NPRM does not propose to change the rules of the road for such hobbyists, so long as their operation of model UAS satisfies all of the criteria applicable to model unmanned aircraft.

Presidential Memorandum: UAS Privacy Framework 

           In addition to—and presumably in concert with—the FAA’s release of the NPRM, President Obama on the morning of Sunday, February 15, issued a Presidential Memorandum aptly titled “Promoting Economic Competitiveness While Safeguarding Privacy, Civil Rights, and Civil Liberties in Domestic Use of Unmanned Aircraft Systems” (“UAS Privacy Memorandum”) in order to create a framework to begin to address some of the privacy concerns that have been voiced by the American public (and even by a U.S. Supreme Court Justice). (Reports had surfaced months ago suggesting that the UAS Privacy Memorandum was in the works.)

            Among other things, the UAS Privacy Memorandum requires federal agencies, “prior to deployment of new UAS technology and at least every 3 years, [to] examine their existing UAS policies and procedures relating to the collection, use, retention, and dissemination of information obtained by UAS, to ensure that privacy, civil rights, and civil liberties are protected. Agencies shall update their policies and procedures, or issue new policies and procedures, as necessary.” In addition, federal agencies must “establish policies and procedures, or confirm that policies and procedures are in place, that provide meaningful oversight of individuals who have access to sensitive information (including any PII [personally identifiable information]) collected using UAS . . . [and] require that State, local, tribal, and territorial government recipients of Federal grant funding for the purchase or use of UAS for their own operations have in place policies and procedures to safeguard individuals’ privacy, civil rights, and civil liberties prior to expending such funds.” These requirements represent a logical and reasonable starting point for ensuring privacy protection at the federal level if and when agencies engage in UAS operations.

            Also of significance, the UAS Privacy Memorandum tasks NTIA (the National Telecommunications and Information Administration) with initiating, by mid-May 2015, a “multi-stakeholder engagement process to develop a framework regarding privacy, accountability, and transparency for commercial and private UAS use.” As such, the UAS Privacy Memorandum provides a formal structure in which the various privacy issues implicated by commercial and private UAS operations can and will be debated and addressed. As with the sUAS rules themselves, only time will tell whether the final results of the UAS Privacy Memorandum are weak or strong, satisfactory or unsatisfactory to UAS stakeholders, including the American public generally.

 Issuance of the NPRM Doesn’t Mean You Can Ignore State Law

            State and local jurisdictions continue to contemplate legislation governing the use of UAS by individuals, commercial entities, and law enforcement, and much remains to be written about such ongoing efforts. As I’ve written previously, the State of North Carolina enacted several provisions governing UAS in 2014. Until the FAA’s rulemaking results in final rules, many of the North Carolina provisions are of little practical significance. Nevertheless, drone enthusiasts in North Carolina must remain mindful of these state-specific laws, as should any UAS operator in any other state with applicable laws in place.

More to Come

            For many industry stakeholders and drone enthusiasts, the release of the NPRM surely represents a “Harry Potter” moment: when a new Harry Potter book hit the bookshelves, people would line up for hours to get a copy and would stay up all night and skip school or work in order to read it. I’m certain that a similar phenomenon has been underway since Sunday, February 15, when the NPRM first became available on the FAA’s website. (I would hazard a guess that the FAA’s website had more traffic on February 15 than any other day in the history of www.faa.gov.) While I don’t recommend that anyone play hooky from work or school in order to read the 195-page NPRM, I do encourage you to celebrate President’s Day (February 16) by reviewing the NPRM—while the FAA Staff enjoys its well-deserved federal holiday—and I do wish you all happy reading and sweet, post-valentine dreams. 

            Onward and upward (but, until the FAA issues final rules, not without an exemption, and not above 400 feet with a model drone used solely for recreational purposes, please)!


Sony Employees Sue, Calling the Breach an "Epic Nightmare"

by Bryan Starrett, Employment Law Attorney, bstarrett@brookspierce.com

You have probably heard about the recent data breach at Sony; after all, it’s not often that Kim Jong Un and Angelina Jolie are mentioned as part of the same story. The Sony hack, though, appears different in character from other recent high-profile hacks: the hackers appear to care most about using the information stolen from Sony to bring shame and scorn on the company, rather than about their own pecuniary gain.

And the story appears to continue down the proverbial rabbit hole, with reports of a tongue-in-cheek offer of investigative cooperation from the North Koreans, and the recent revelation that all of North Korea’s internet is down, perhaps in retaliation for the recent attacks.

Amidst the intense Hollywood and international intrigue, an important group of victims isn’t receiving much attention: Sony employees. Indeed, the hack has allegedly resulted in the theft of social security numbers, birth dates, health information, and other sensitive data from thousands of Sony employees. In response, two Sony employees swiftly filed a federal class-action lawsuit against their employer, summing up their claims in the opening paragraph of their complaint:

An epic nightmare, much better suited to a cinematic thriller than to real life, is unfolding in slow motion for Sony’s current and former employees: Their most sensitive data, including over 47,000 Social Security numbers, employee files including salaries, medical information, and anything else that their employer Sony touched, has been leaked to the public, and may even be in the hands of criminals.

The employees have brought negligence claims, as well as various statutory data breach claims under both California and Virginia law.

Unlike the more “typical” breach case, where customers are the victims of stolen credit card numbers or other personal information, this action is unique in at least two critical respects: the nature of the data breached, and the employer/employee relationship between Sony and the plaintiffs. Employers often owe heightened duties of care to their employees, thanks to the particular nature of the employer/employee relationship. However, the duties employers owe their employees regarding the protection of employee data are largely uncharted territory, and this action may shed significant light on the standard to which employers will be held in protecting employee data.

HHS Settlement Shows: "You'd Better Implement Those IT 'Patches' and 'Updates' or Be Ready to Pay the Price."

by Forrest Campbell, Health Law Attorney, fcampbell@brookspierce.com 

In December 2014, the U.S. Department of Health and Human Services ("HHS") and Anchorage Community Mental Health Services ("ACMHS") settled alleged HIPAA violations for $150,000.

Don't be misled—this settlement is not important just for parties subject to HIPAA. It's important to anyone who maintains confidential information in electronic form.

Here's what happened according to HHS. ACMHS failed to regularly update its IT resources with available patches, and ACMHS used outdated, unsupported software. As a direct result, malware was able to compromise the security of ACMHS's IT system, resulting in a data breach of the protected health information of 2,743 individuals. As HIPAA requires, ACMHS notified HHS of the breach, and an HHS investigation followed. The investigation led to the settlement. The period from the start of the investigation to the signing of the settlement was 2½ years—which probably represents a lot of hours and money for ACMHS.

These events show how important security patches and software updates are for all parties with confidential electronic information. If you fail to diligently implement patches and updates—no matter what business line you're in—malware might infiltrate your IT system and cause a data breach. Data breaches often require notice to the individuals affected and to state and federal authorities, and often lead to investigations, lawsuits, and/or settlements.
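
As a small illustration of what diligently implementing patches and updates can look like in practice, here is a minimal Python sketch that compares an inventory of installed software against a minimum-supported-version list and reports anything out of date. The package names, versions, and data structures are illustrative assumptions; a real program would pull them from your patch-management tooling or a vulnerability feed.

```python
# Illustrative patch-currency check: report installed software running
# below an assumed minimum supported version. Names and versions are
# made up for the example.

def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '2.4.10' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


# Assumed policy: the oldest version of each product you are willing to run.
MINIMUM_SUPPORTED = {"openssl": "1.0.1", "apache-httpd": "2.4.10"}


def find_outdated(installed: dict[str, str]) -> list[str]:
    """Return the names of packages below their minimum supported version."""
    return [
        name
        for name, version in installed.items()
        if name in MINIMUM_SUPPORTED
        and parse(version) < parse(MINIMUM_SUPPORTED[name])
    ]


if __name__ == "__main__":
    inventory = {"openssl": "0.9.8", "apache-httpd": "2.4.12"}  # example host data
    for name in find_outdated(inventory):
        print(f"PATCH NEEDED: {name} is below its minimum supported version")
```

A regularly run check along these lines is one way to surface outdated or unsupported software before malware does.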

Apparently, ACMHS could have avoided the entire matter if it had implemented proper patches and updates.

Although the lessons from these events are important across all industries, parties subject to HIPAA should recall that the HIPAA security rule essentially mandates that critical security patches and updates be implemented. For example, the security rule broadly requires that HIPAA covered entities and business associates must: