FIN4 May Have Embarked on a Risky Hacking/Insider Trading Strategy

I haven’t yet turned to a life of crime, so far be it from me to criticize actual criminals’ profit-maximizing strategies.  It’s easy for me to nitpick, but I’m not the one strapping on my mask and trying to earn a (dis)honest dollar every day.  But have a look at this Reuters story from Tuesday. 

In it, we learn that the SEC and the Secret Service are investigating a sophisticated computer hacking group known as “FIN4” that allegedly “has tried to hack into email accounts at more than 100 companies, looking for confidential information on mergers and other market-moving events. The targets include more than 60 listed companies in biotechnology and other healthcare-related fields, such as medical instruments, hospital equipment and drugs.”  Apparently their plan is to harvest this information and then trade on it.  Nobody knows where FIN4 is from.  They could be overseas, but supposedly their English is flawless and they have a deep knowledge of how financial markets work, so maybe they’re in the United States.  At one level, a little terrifying!

But this group hasn’t devised a complex, superpowered algorithm to steal information. Instead, it’s allegedly stealing information the (sort of) old fashioned way: through social engineering. The Reuters story explains that FIN4 “used fake Microsoft Outlook login pages to trick attorneys, executives and consultants into surrendering their user names and passwords.” In at least one case, “the hackers used a confidential document, containing significant information that they had already procured, to entice people discussing that matter into giving their email credentials.”
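The basic technical defense against this kind of credential phishing is simple: never enter credentials on a page whose host isn’t one you already trust. As a minimal sketch (the host names and function are purely illustrative, not any particular product’s API), a check like this rejects look-alike domains that merely contain the brand name:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts where employees should ever enter
# their Outlook credentials. These host names are illustrative only.
TRUSTED_LOGIN_HOSTS = {"login.microsoftonline.com", "outlook.office.com"}

def is_trusted_login_page(url: str) -> bool:
    """True only if the URL uses HTTPS and its host is exactly on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_LOGIN_HOSTS

# A look-alike phishing host fails even though it "contains" the trusted name.
print(is_trusted_login_page("https://login.microsoftonline.com/oauth2"))          # True
print(is_trusted_login_page("https://login-microsoftonline.com.example.net/"))    # False
```

The key design point is exact host matching: substring checks (“does the URL contain microsoftonline?”) are precisely what fake login pages are built to defeat.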

I have two main thoughts.  First, sound information handling practices, and appropriate wariness among professionals using email, still go a long way toward securing confidential data within organizations.  It’s often not the most technologically advanced tactics that yield the worst data breaches.  Second, FIN4 has embarked on a complex money-making plan.  There may be many uses of this information, but one of them seems to be trading securities in the public markets.  That’s not as simple as it seems.  If you’re doing that, you’re on the grid and can’t really hide.  FINRA sees all of those trades and it isn’t that hard for regulators to find out who is making them.  When the Consolidated Audit Trail comes online,*  it will be substantially easier and faster. In the meantime, broker-dealers are obligated to identify who their customers are.  If those people have electronic connections to the ones involved in the hacking, those links could be enough for the SEC to get an asset freeze before profits are siphoned overseas. 

What FIN4 is allegedly doing is scary, but they haven’t yet built a criminal ATM.



* Speaking of the Consolidated Audit Trail, when is that thing coming online? 


FCC Stakes Out Privacy Territory in Broadband Privacy Workshop

If you thought all the action in privacy regulation centered on the Federal Trade Commission, the Federal Communications Commission would like you to think again. Yesterday, April 28, the FCC held a three-plus-hour workshop that started the regulatory “conversation” about how the FCC can or should regulate consumer broadband privacy.

Chairman Wheeler kicked off the event with opening remarks that included this unequivocal statement: “Privacy is unassailable.” He also said that “changes in technology do not affect our values.” From these words and the text of the FCC’s “Open Internet” order released earlier this year, not to mention the FCC’s recent $25 million data breach consent decree with AT&T, it is clear the FCC intends to be involved in regulating consumer privacy.

Yesterday’s workshop follows the recent Open Internet order in which the agency determined it would apply certain aspects of its Title II authority to the Internet (namely, certain “common carrier” provisions of the Communications Act). The order has broad impact on issues like consumer access to broadband content that have been widely written about. But what the order means for privacy is that the FCC’s rules on “customer proprietary network information” or CPNI, which have historically applied to traditional telephone companies and interconnected VoIP providers, may apply more broadly in some form or fashion to others in the broadband ecosystem—particularly, broadband Internet access providers.

By way of background, CPNI (defined in Section 222(h)(1) of the Communications Act) is information collected by telecommunications carriers about their customers.  CPNI includes things like the “quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service” to which a customer subscribes, as well as related billing information.  It is a fairly specific definition, and it doesn’t include personal information like name, phone number, and address.

Section 222(a) of the Communications Act requires telecommunications carriers to protect the confidentiality of customer information, and Section 222(c) restricts the ability of telecommunication carriers to use, disclose, or permit access to individually identifiable CPNI without the customer’s approval or as required by law. 

The Open Internet order makes plain that the FCC intends to apply these CPNI confidentiality provisions (or some form of them) to broadband Internet access providers. But the FCC will need to adopt new rules to apply the CPNI provisions of the statute in this way. 

Yesterday’s workshop started that process with a discussion among stakeholders, though no formal rulemaking has been launched. Panelists discussed the privacy implications of broadband Internet access (for example, the kind of data broadband providers have access to) as well as specific concerns with applying Section 222 to broadband Internet access services. One common theme was the potentially overlapping jurisdiction of the FTC and the FCC in the area of privacy. But, significantly, one thing the FCC brings to the table in this area is general rulemaking authority, which the FTC lacks. 

We’ll have to watch and wait to see if a notice of inquiry, notice of proposed rulemaking, or other agency guidance will come.

The FCC typically uploads events like this to its archive, so check here or here in a few days if you would like to view the full event.


Physician Practices Be Alert: You Might Be Violating HIPAA If You Produce Medical Records In Response To State Court Subpoenas

Over the past months, my experiences with physician practices have made me realize that many practices do not understand how HIPAA applies to subpoenas for medical records.  More worrisome, I suspect that many practices nationwide routinely violate HIPAA when they receive a subpoena.

Here’s what I’ve observed:  Practices receive state court subpoenas that are signed by lawyers and that demand the production of medical records, and the practices automatically assume they must produce the records.  This is a dangerous assumption—the production of the records may very well violate HIPAA.

Here’s what HIPAA requires in general terms:  If a practice receives a state court subpoena for medical records that is signed by a lawyer, the practice should not produce the records unless (1) the practice also receives (a) a court order requiring production, (b) a HIPAA “qualified protective order” that’s been entered in the lawsuit, (c) a HIPAA compliant authorization from the patient that authorizes the disclosure demanded by the subpoena, or (d) certain other matters designated by HIPAA’s rule concerning subpoenas, or (2) the practice takes certain additional actions required by HIPAA’s rule for subpoenas. 

If a practice receives such a subpoena without receiving any of these “additional” items or taking these “additional” actions, the practice will likely violate HIPAA if the records are produced.

Here’s what practices should do.  Because this area of HIPAA is somewhat complex and difficult for practices to navigate on their own, practices should consult with legal counsel when they receive such a subpoena.  Legal counsel can advise whether HIPAA permits the disclosure, whether the practice needs to object to the subpoena, and whether other actions should be taken.  On numerous occasions, we have reviewed such subpoenas, determined that they did not comply with HIPAA, and sent a letter objecting to the subpoena; the practice never heard from the parties again. 

Take away:  If you receive a state court subpoena signed by a lawyer demanding the production of medical records, do not automatically produce the medical records.

BYOD: Five Things To Consider When Creating Your Policy

“BYOD” or “bring your own device” (also known as the “consumerization of IT”) is a fact of life in today’s workplace. BYOD refers to the practice of using personally owned devices—like smartphones, tablets, and laptops—for work purposes and allowing these devices to connect to company networks and applications. According to a Gartner study released in late 2014, 40% of U.S. employees working for large companies use personally owned devices for work purposes. Of those who reported using personally owned devices, only 25% were required by their employers to do so.  And of the 75% who chose to use their personally owned devices for work, half did so without their employers’ knowledge.

If that last statistic doesn’t alarm you a little, it should. 

BYOD can be great for productivity, but it also creates risk for companies. Why? Because personally owned devices are not managed or controlled by company IT, and security of the device is in the hands of the employee. In other words, these devices can be a source of company data loss or data leakage. For example, if an unencrypted personal laptop with company data on it gets lost or stolen, you may have a data breach on your hands. 

So how do you address these concerns?  Start with a written, employee-acknowledged BYOD policy. 

Here are five things to consider as you develop your policy:

1.      Start by building an interdisciplinary team to create the policy. The team should include IT, human resources, legal, compliance, and employees who use their personally owned devices. BYOD is not just an IT issue but also a legal and business issue. Different perspectives will help lead to a policy that fits your organization’s needs and is capable of being followed.


2.      Develop the goals of the policy. Security should be a goal, but productivity is also important. Cost savings may also be an objective. These, and other goals, may be in tension at times. In the end, you want to develop a policy that strikes the right balance for your company. Consider a BYOD policy that ensures the only way enterprise data flows into or out of a device is through enterprise systems (e.g., avoid emailing business information to employee personal email accounts).


3.      Determine which employees are covered and how they are covered. There may be different policies for different types of employees based on job type or function. For example, the policy for exempt employees may be different than the one for non-exempt employees. Consider whether all employees need to be able to use tablets (for example) to access corporate data for work purposes. Be sure to consult with counsel about employment laws and regulations that may apply in this area.


4.      Decide which devices/uses are permitted and how the policy applies to each. For example, the BYOD team should conduct “app reviews” to decide what apps are okay for business use and what apps are not allowed.  Particular types of smartphones, tablets, or laptops may not meet the company’s security requirements and, if not, should not be permitted. Also, the policy for smartphones may be different than the policy for laptops because they are different devices used in different ways and may pose different security risks.


5.      Build in training so that each employee knows the policy and how it applies to him or her. In the end, security is about people just as much as it is about technology.


This isn’t an exhaustive list of considerations, but it will help get you started crafting a BYOD policy tailored to your company. It’s important that you do so, if you haven’t already.


You are What You Keep

Suffering a data breach is bad enough. As often as it appears to happen, companies that are affected by a breach still shoulder a considerable burden. Management must stop the trains to identify the cause and scope of the breach—and then prepare for the aftermath. Lawyers are involved. The company’s brand is at risk. And the costs—employee time, legal fees, security consultants—quickly escalate.

But what if you determine that your company didn’t really need the information that was exposed? Suppose you find out that the breach involved a file that contained drivers’ license numbers or even credit card information, but your company had virtually no administrative need for that information? Or suppose the data pertained to transactions by young adults 7 years ago, and it is highly unlikely that any of the information is still relevant (much less accurate). This is the kind of discovery that adds insult to injury. Your company is forced to stop and invest significant funds responding to a data breach involving data it no longer has any use for in its marketing or operational efforts.

The surest way to avoid this problem is to review and assess the way you currently collect, retain, and store information. Here are a few items to consider:

·       Collection – Do you really need all of the personal information that you are collecting from consumers? Review your intake procedure and revise it to collect only what you need for operational or marketing purposes. Also, are you even aware of all of the different portals through which your company may be collecting data from consumers? Identify each of them so that you can be sure your assessment is complete. Do you have someone in your organization responsible for tracking the types of data you are collecting and the different processes through which you are collecting the data?

·       Retention – How long are you storing personal information? And for what purposes? Are your practices consistent with PCI standards? What is your current retention policy and are you following it? There are federal and state laws that may govern the retention, disposal or destruction of your data. Be familiar with those laws. Within the confines of applicable laws, be sure you are not holding on to unnecessary or outdated data that would cause you intolerable frustration in the event it was breached. Do you have someone in your organization responsible for overseeing retention and disposal?

·       Third Party Partners and Vendors – If you are sharing personal information with other parties (which, of course, needs to be disclosed to consumers in your privacy policy), be sure that your agreements with those parties contain appropriate safeguards. Are you requiring your vendors to secure personal information and prohibit its disclosure? What happens in the event of a breach? Who bears the cost of notification? Are your vendors required to indemnify you if their mistakes lead to actions against your organization?
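The retention question above lends itself to a mechanical check once a schedule is written down. Here is a minimal sketch, with a purely hypothetical retention schedule (real limits must come from applicable federal and state law, PCI standards, and your own policy), that flags records which have outlived their category’s retention period:

```python
from datetime import date, timedelta

# Hypothetical retention schedule: maximum record age in days, per category.
# Actual periods would be set by counsel under applicable law and policy.
RETENTION_DAYS = {
    "transaction": 7 * 365,      # e.g., keep transaction records 7 years
    "marketing_lead": 2 * 365,   # e.g., keep marketing leads 2 years
}

def is_stale(category, collected_on, today=None):
    """True if a record has outlived the retention period for its category."""
    today = today or date.today()
    limit = RETENTION_DAYS.get(category)
    return limit is not None and (today - collected_on) > timedelta(days=limit)

# Flag records that are candidates for review and legal disposal.
records = [
    ("transaction", date(2006, 5, 1)),   # ~9 years old: flagged
    ("transaction", date(2014, 5, 1)),   # 1 year old: kept
]
stale = [r for r in records if is_stale(r[0], r[1], today=date(2015, 5, 1))]
```

Flagged records should go to counsel for a disposal decision rather than being deleted automatically; litigation holds and statutory exceptions can override any schedule.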


There is a simple rule that applies in a data breach: You are what you keep. So be careful with what information you currently collect and retain. Talk to your lawyer about whether certain information that you may consider to be “stale” may be properly and legally disposed of. And, more importantly, consider revising your practices so you don’t continue to collect or retain stale or unnecessary information going forward.

SEC Releases Results of Cybersecurity Exam Sweep

Ed. Note: This entry is cross posted from Cady Bar the Door, David Smyth's blog offering Insight & Commentary on SEC Enforcement Actions and White Collar Crime.

We’re behind on this, but better (a little bit) late than never. Last month the SEC’s Office of Compliance Inspections and Examinations released the first results of its Cybersecurity Examination Initiative, announced in April 2014 (and discussed here). As part of the initiative, OCIE staff examined 57 broker-dealers and 49 investment advisers to better understand how these entities “address the legal, regulatory, and compliance issues associated with cybersecurity.”  

What the Exams Looked For

In the exams, the staff collected and analyzed information from the selected firms relating to their practices for:

·       identifying risks related to cybersecurity;

·       establishing cybersecurity governance, including policies, procedures, and oversight processes;

·       protecting firm networks and information;

·       identifying and addressing risks associated with remote access to client information and fund transfer requests;

·       identifying and addressing risks associated with vendors and other third parties; and

·       detecting other unauthorized activity.

Importantly, the report is based on information as it existed in 2013 and through April 2014, so it’s already somewhat out of date.


The Good News

The report includes some good news about how seriously the SEC’s registered entities are taking cybersecurity.

·       The vast majority of examined broker-dealers (93%) and investment advisers (83%) have adopted written information security policies.

·       The vast majority of examined broker-dealers (93%) and investment advisers (79%) conduct periodic risk assessments to identify cybersecurity threats, vulnerabilities, and potential business consequences.

·       The vast majority of examined firms report conducting firm-wide inventorying, cataloguing, or mapping of their technology resources. Smart.

·       Many firms are utilizing external standards and other resources to model their information security architecture and processes. These include standards published by National Institute of Standards and Technology (“NIST”), the International Organization for Standardization (“ISO”), and the Federal Financial Institutions Examination Council (“FFIEC”).

Encouraging! But the report didn’t bring all good tidings.


The Bad News

Here are some of the less auspicious facts:


·       88% of the broker-dealers and 74% of the advisers reported being the subject of a cyber-related incident.

·       Most of the broker-dealers (88%) require risk assessments of their vendors, but only 32% of the investment advisers do. 

·       Related to that, most of the broker-dealers incorporate requirements relating to cybersecurity risk into their contracts with vendors and business partners (72%), but only 24% of the advisers incorporate such requirements. Fewer of each maintain policies and procedures related to information security training for vendors and business partners authorized to access their networks.

·       A slight majority of the broker-dealers maintain insurance for cybersecurity incidents, but only 21% of the investment advisers do.


The Rest

Almost two-thirds of the broker-dealers (65%) that received fraudulent emails seeking to transfer funds filed a Suspicious Activity Report with FinCEN, as they’re likely required to do. The report then notes that only 7% of those firms reported the incidents to other regulators or law enforcement. It’s curious to me why the SEC would expect other reports to happen. With the SAR obligations in place, those firms probably, and reasonably, think all the necessary reporting has been done after the SAR has been filed. Also, these firms’ written policies and procedures generally don’t address whether they are responsible for client losses associated with cyber incidents.  Along these lines, it might be that requiring multi-factor authentication for clients and customers to access accounts could go a long way toward pushing responsibility for those losses on the users. 

But don’t take my word for it. Read the report yourself, linked above and here.

FTC Commissioner Comments on Consumer Privacy Bill of Rights

Last week, we posted about the Consumer Privacy Bill of Rights “discussion draft” released by the Obama Administration. On Thursday, March 5, at the annual U.S. meeting of the International Association of Privacy Professionals (which I attended), FTC Commissioner Julie Brill answered questions about her take on the bill and other policy issues. Here are just a few comments from that discussion that merit a follow-up post:

  • Commissioner Brill stated in no uncertain terms that the draft bill is not protective enough of consumers. At various times, she said there are “serious weaknesses in the draft,” “there’s no there there,” it needs “some meat,” and “where are the boundaries?” She mentioned a specific example of a more consumer-protective approach relating to consent to certain data practices. She indicated she would like to see the bill require affirmative express consent of the individual for (a) material retroactive changes to a privacy statement and (b) use of sensitive information out of context of the transaction in which it was collected.
  • Although Section 402 of the draft bill provides that the FTC’s unfair and deceptive authority under Section 5 of the FTC Act remains intact, Commissioner Brill expressed concern that it is not clear enough in the bill that the FTC would retain its full authority to enforce the “common law” of privacy as developed in its prior enforcement actions. 
  • Will anything come of the bill this Congress? Commissioner Brill said she expects it’s unlikely given other legislative priorities at this time.

Commissioner Brill commended the Administration for grappling with tough issues and working to improve privacy overall. However, hers is one more voice calling for additional work on broad federal consumer privacy legislation.

Consumer Privacy Bill of Rights Act - A Mixed Bag

Late last week, President Obama released a “discussion draft” of the Administration’s long awaited Consumer Privacy Bill of Rights Act.  At first blush, the results are a mixed bag:  some good, some not so good, much work among stakeholders left to be done.

It didn’t take long for consumer advocates, and even one FTC Commissioner, to say the draft legislation doesn’t go far enough.  The Internet has been rife with posts this week about the bill’s problems and shortcomings.  In summary, for most, the bill landed like a lead balloon.

Still, the Administration released the bill as a “discussion draft”—signaling the draft legislation is just a step and an invitation for further conversation.  For a measured perspective considering the bill through this lens, read former Obama Administration official Nicole Wong’s thoughtful article.

While it’s certainly far from perfect, my take is that the bill isn’t all bad.  Here are just a few initial pros and cons to the bill that I’ve identified (in no particular order):

  • Pro:  many principles are based on fair information practices familiar from existing federal statutes; there is flexibility and consideration of measures that are reasonable in context; safe harbor protections are available; de-identified data is excepted; and enforcement is delayed to allow parties time to adjust to the law’s requirements.
  • Con: loosely defined requirements, definitional uncertainty, preemption and enforcement concerns.

One item of note is that the security provisions in Section 105(a) codify, at a very high level of generality, some of the principles that we’ve been advising our clients about:  for example, taking steps to identify internal and external risks to privacy and security of personal data, and implementing and regularly assessing safeguards to control those risks.  (Of course, it’s a separate thing altogether to have recommendations take on the force of law.)

In the end, it may have been inevitable that this bill would be a disappointment to some.  After all, the public has been waiting on it since 2012.  During that time, there have been many, many high-profile breaches of consumer information.  The appetite for more privacy and security protections has only grown over time.  But it will take a delicate balance to provide desired protections while at the same time making legal requirements workable for both consumers and the businesses offering products and services consumers want.

To be sure, there will be more to come from the Consumer Privacy Bill of Rights—stay tuned.

Two-Factor Authentication May Be Coming to a Bank Near You

Ed. Note: This entry is cross posted from Cady Bar the Door, David Smyth's blog offering Insight & Commentary on SEC Enforcement Actions and White Collar Crime.

When I was at the SEC and online broker-dealers’ customers were the victims of hacking incidents, I used to wonder, why don’t the broker-dealers require multi-factor authentication to gain access to accounts? It was a silly question. I knew the answer. Multi-factor authentication is a pain and nobody likes it.

Do you know what it is? Here’s what Wikipedia says, so it must be true:

Multi-factor authentication (MFA) is a method of computer access control which a user can pass by successfully presenting authentication factors from at least two of the three categories:

·       knowledge factors (“things only the user knows”), such as passwords

·       possession factors (“things only the user has”), such as ATM cards

·       inherence factors (“things only the user is”), such as biometrics.

The idea is, hackers might figure out your password, but they won’t be able to figure out a number that changes every 30 seconds on a card you carry or on your cell phone. They won’t be able to replicate your fingerprint. That’s the idea, anyway. Brokers and banks have been loath to require multi-factor authentication because it’s inconvenient and customers often hate it.
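That 30-second rotating number is typically a TOTP (time-based one-time password), standardized in RFC 6238. A minimal sketch in Python, using only the standard library, shows why the code is useless to a hacker moments later: it is an HMAC over the current 30-second time window, so it changes when the window does.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, default 30-second step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
# produces the 8-digit code 94287082.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, at=59, digits=8))  # 94287082
```

A stolen password alone fails the login; a stolen code expires with its 30-second window, which is the whole point of the possession factor.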

But here comes Ben Lawsky, the Superintendent of New York’s Department of Financial Services, who just unveiled a number of proposals to increase cybersecurity at banks under his jurisdiction. One of these is to require that banks use multi-factor authentication. This move could take a lot of the economic pressure off banks that would otherwise like to implement this control for their customers, but have been unwilling to do so for fear of losing those customers to rivals. If everybody has to do it, no bank risks losing customers by adopting it.

That’s not all Lawsky has in mind. His proposal also includes:

·       requiring senior bank executives to personally attest to the adequacy of their systems guarding against money laundering;

·       ensuring that banks receive warranties from third-party vendors that those providers have cybersecurity protections in place;

·       random audits of regulated firms’ transaction monitoring systems, meant to catch money laundering; and

·       incorporating targeted assessments of those institutions’ cybersecurity preparedness in its regular bank examinations.

Lawsky’s proposals could be a big deal. Stay tuned.

Big Announcement, Small UAS: FAA Launches Commercial Drone Proceeding

            Unless you have been completely disconnected from all media, you are probably already aware that on Sunday, February 15, 2015, the FAA announced the release of its long-awaited rules to govern commercial sUAS (small unmanned aircraft systems) operations in the United States. The FAA’s proposed sUAS rules arrived like a barely-late valentine or box of candy, with the recipients hoping to read loving prose and enjoy fresh, rich chocolates. At this point, of course, the rules are merely a proposed regulatory regime (as embodied in a document that is called a “Notice of Proposed Rulemaking” or “NPRM”), and it will surely take many months—probably a couple of years—for the rules to be finalized and adopted, and to go into effect. (Only then will we know for sure whether the valentine message was really a “dear John” letter or whether the candy was stale and half-eaten.) It is important to understand that, for now, the FAA’s current prohibition on commercial UAS operations remains in effect, except for operators that have obtained a Section 333 Exemption from the FAA. (To date, nearly 30 entities have received exemption grants from the FAA.)

            The proposed regulatory regime was described by the FAA on a February 15 press conference call as a “very flexible framework” that will “accommodate future innovation in the industry.” While industry stakeholders may ultimately disagree over just how flexible the proposed rules are or should be, stakeholders do generally agree that the FAA’s release of the NPRM is a big step (albeit somewhat overdue) in the right direction. You can access the FAA’s NPRM here. (The FAA also published a “fact sheet” as well as a short summary of the highlights of the proposed rules.)

Proposed sUAS Rules

            Among the various limitations that the FAA has proposed for commercial sUAS operations are the following (caveat: this is neither an exhaustive nor detailed list of all the operational limitations and requirements proposed in the NPRM):

  • Vehicles subject to the sUAS rules will be defined as aircraft that weigh less than 55 pounds (25 kg)
  • Only visual line-of-sight (“VLOS”) operations will be allowed; i.e., the small unmanned aircraft must remain within VLOS of the operator or, if one is used, a visual observer (“VO”) (the proposed rules allow, but do not require, the use of a VO)
  • No person may act as an operator or VO for more than one unmanned aircraft operation at one time
  • Pilots of sUAS will be considered “operators.” Operators will be required to:
    • Pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center;
    • Be vetted by TSA (Transportation Security Administration);
    • Obtain an unmanned aircraft operator certificate with an sUAS rating (like existing pilot airman certificates, it will never expire);
    • Pass a recurrent aeronautical knowledge test every 24 months;
    • Be at least 17 years old;
    • Make available to the FAA, upon request, the sUAS for inspection or testing, and any associated documents/records required to be kept under FAA rules;
    • Report an accident to the FAA within 10 days of any sUAS operation that results in injury or property damage;
    • Conduct preflight inspections to ensure the sUAS is safe for operation.
  • At all times the small unmanned aircraft must remain close enough to the operator for the operator to be capable of seeing the aircraft with vision unaided by any device other than corrective lenses (the use of binoculars would not satisfy this restriction)
  • sUAS operations may not occur over any persons not directly involved in the operation
  • sUAS operations must occur during daylight hours only (official sunrise to official sunset, local time)
  • Small unmanned aircraft will be required to yield right-of-way to other aircraft, manned or unmanned
  • Small unmanned aircraft will be allowed to operate with a maximum airspeed of 100 mph (87 knots)
  • Small unmanned aircraft will be allowed to operate at a maximum altitude of 500 feet above ground level
  • sUAS operations will be permitted to occur only when conditions allow minimum weather visibility of 3 miles from the control station
  • Limitations in airspace classes:
    • No sUAS operations will be allowed in Class A (18,000 feet & above) airspace
    • sUAS operations will be allowed in Class B, C, D and E airspace only with ATC (Air Traffic Control) permission
    • sUAS operations in Class G airspace will be allowed without ATC permission
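To make the numeric limits above concrete, here is a rough preflight-check sketch that encodes a few of them. The thresholds are taken from the summary above; the data structure and field names are invented for illustration, and a real check would cover many more of the NPRM’s requirements.

```python
from dataclasses import dataclass

# Selected numeric limits from the proposed sUAS rules (illustrative subset).
MAX_WEIGHT_LBS = 55        # sUAS defined as aircraft under 55 pounds
MAX_AIRSPEED_MPH = 100     # maximum airspeed
MAX_ALTITUDE_FT = 500      # maximum altitude above ground level
MIN_VISIBILITY_MILES = 3   # minimum weather visibility from the control station

@dataclass
class PlannedOperation:
    aircraft_weight_lbs: float
    airspeed_mph: float
    altitude_ft: float
    visibility_miles: float
    daylight: bool
    within_vlos: bool

def rule_violations(op):
    """Return human-readable violations; an empty list means these checks pass."""
    problems = []
    if op.aircraft_weight_lbs >= MAX_WEIGHT_LBS:
        problems.append("aircraft weighs 55 pounds or more")
    if op.airspeed_mph > MAX_AIRSPEED_MPH:
        problems.append("airspeed exceeds 100 mph")
    if op.altitude_ft > MAX_ALTITUDE_FT:
        problems.append("altitude exceeds 500 feet AGL")
    if op.visibility_miles < MIN_VISIBILITY_MILES:
        problems.append("weather visibility below 3 miles")
    if not op.daylight:
        problems.append("operation not during daylight hours")
    if not op.within_vlos:
        problems.append("aircraft outside visual line of sight")
    return problems
```

Passing this sketch obviously wouldn’t make an operation lawful; among other things, airspace class, operator certification, and overflight-of-persons restrictions would still apply.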

Many of these requirements dovetail with (or are at least similar to) the limitations, requirements, and restrictions that have been imposed by the FAA in its various Section 333 Exemption decisions. In fact, some of the rules proposed in the NPRM would be less restrictive and more flexible than those imposed on operators in certain Section 333 Exemption decisions.

“Micro” UAS and Model UAS

        The NPRM proposes a “micro” UAS classification that contemplates operations of small unmanned aircraft weighing up to 4.4 pounds (2 kg; small-scale sUAS and hence “micro”), only in Class G airspace, only during daylight hours, and at altitudes no higher than 400 feet AGL. Micro UAS operations would be permissible over people not involved in the operation of the unmanned aircraft, provided the operator certifies he or she has the requisite aeronautical knowledge to perform the operations. Additional restrictions, as set forth in the NPRM, would apply to the proposed micro UAS classification.

With respect to hobbyists who fly model unmanned aircraft for recreational purposes, the NPRM does not propose to change the rules of the road for such hobbyists, so long as their operation of model UAS satisfies all of the criteria applicable to model unmanned aircraft.

Presidential Memorandum: UAS Privacy Framework 

           In addition to—and presumably in concert with—the FAA’s release of the NPRM, President Obama on the morning of Sunday, February 15, issued a Presidential Memorandum aptly titled “Promoting Economic Competitiveness While Safeguarding Privacy, Civil Rights, and Civil Liberties in Domestic Use of Unmanned Aircraft Systems” (“UAS Privacy Memorandum”) in order to create a framework to begin to address some of the privacy concerns that have been voiced by the American public (and even by a U.S. Supreme Court Justice). (Reports had surfaced months ago suggesting that the UAS Privacy Memorandum was in the works.)

            Among other things, the UAS Privacy Memorandum requires federal agencies, “prior to deployment of new UAS technology and at least every 3 years, [to] examine their existing UAS policies and procedures relating to the collection, use, retention, and dissemination of information obtained by UAS, to ensure that privacy, civil rights, and civil liberties are protected. Agencies shall update their policies and procedures, or issue new policies and procedures, as necessary.” In addition, federal agencies must “establish policies and procedures, or confirm that policies and procedures are in place, that provide meaningful oversight of individuals who have access to sensitive information (including any PII [personally identifiable information]) collected using UAS . . . [and] require that State, local, tribal, and territorial government recipients of Federal grant funding for the purchase or use of UAS for their own operations have in place policies and procedures to safeguard individuals’ privacy, civil rights, and civil liberties prior to expending such funds.” These requirements represent a logical and reasonable starting point for ensuring privacy protection at the federal level if and when agencies engage in UAS operations.

            Also of significance, the UAS Privacy Memorandum tasks NTIA (the National Telecommunications and Information Administration) with initiating, by mid-May 2015, a “multi-stakeholder engagement process to develop a framework regarding privacy, accountability, and transparency for commercial and private UAS use.” As such, the UAS Privacy Memorandum provides a formal structure in which various aspects of privacy implicated by commercial and private UAS operations can and will be debated and addressed. As with the sUAS rules themselves, only time will tell whether the final results of the UAS Privacy Memorandum are weak or strong, satisfactory or unsatisfactory to UAS stakeholders, including the American public generally.

 Issuance of the NPRM Doesn’t Mean You Can Ignore State Law

            State and local jurisdictions continue to contemplate legislation governing the use of UAS by individuals, commercial entities, and law enforcement, and much remains to be written about such ongoing efforts. As I’ve written previously, the State of North Carolina enacted several provisions governing UAS in 2014. Until the FAA’s rulemaking results in final rules, many of the North Carolina provisions are of little practical significance. Nevertheless, drone enthusiasts in North Carolina must remain mindful of these state-specific laws, as should any UAS operator in any other state with applicable laws in place.

More to Come

            For many industry stakeholders and drone enthusiasts, the release of the NPRM surely represents a “Harry Potter” moment: when a new Harry Potter book hit the bookshelves, people would line up for hours to get a copy and would stay up all night and skip school or work in order to read it. I’m certain that a similar phenomenon has been underway since Sunday, February 15, when the NPRM first became available on the FAA’s website. (I would hazard a guess that the FAA’s website had more traffic on February 15 than on any other day in the site’s history.) While I don’t recommend that anyone play hooky from work or school in order to read the 195-page NPRM, I do encourage you to celebrate President’s Day (February 16) by reviewing the NPRM—while the FAA Staff enjoys its well-deserved federal holiday—and I do wish you all happy reading and sweet, post-valentine dreams.

            Onward and upward (but, until the FAA issues final rules, not without an exemption—or no higher than 400 feet with a model drone used solely for recreational purposes, please)!