Privacy and the Internet of Things – FTC Workshop (The Smart Home)

The Federal Trade Commission (“FTC”) held a public workshop on November 19, 2013 to explore consumer privacy and security issues related to the Internet of Things (“IoT”).  After briefly describing what the IoT is and the intended focus of the workshop, this post highlights excerpts from the panelists who spoke.  I will limit this recap to the first part of the workshop, which dealt with the home-based IoT (i.e., the “Smart Home”).  A full transcript of the entire workshop is available here.

The “Internet of Things” is a term used to describe a system of devices, each of which can communicate with other devices within an integrated information network.  Or, to use the Wikipedia definition, the “Internet of Things refers to uniquely identifiable objects and their virtual representations in an Internet-like structure.”  Often, the individual devices within the IoT ecosystem are labeled “smart” devices.  Smart devices generally have the ability to communicate with consumers, transmit data back to companies, and compile data for third parties.

According to the FTC, “the workshop focused on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”  The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world.

Following the workshop, the FTC published questions and requested public comments by January 10th on issues raised at the workshop, including:

  • How can consumers benefit from the Internet of Things?
  • What are the unique privacy and security concerns and solutions associated with the Internet of Things?
  • What existing security technologies and practices could businesses and consumers use to enhance privacy and security in the Internet of Things?
  • What is the role of the Fair Information Practice Principles in the Internet of Things?
  • What steps can companies take (before putting a product or service on the market) to prevent connected devices from becoming targets of, or vectors for, malware or adware?
  • How can companies provide effective notice and choice?  If there are circumstances where effective notice and choice aren’t possible, what solutions are available to protect consumers?
  • What new challenges does constant, passive data-collection pose?
  • What effect does the Internet of Things have on data de-identification or anonymization?
  • How can privacy and security risks be weighed against potential societal benefits (such as improved health-care decision-making or energy efficiency) for consumers and businesses?
  • How can companies update device software for security purposes or patch security vulnerabilities in connected devices, particularly if they do not have an ongoing relationship with the consumer?  Do companies have adequate incentives to provide updates or patches over products’ lifecycles?
  • How should the FTC encourage innovation in this area while protecting consumers’ privacy and the security of their data?
  • Are new use-restrictions necessary to protect consumers’ privacy?
  • How could shifting social norms be taken into account?
  • How can consumers learn more about the security and privacy of specific products or services?
  • How can consumers or researchers with insight into vulnerabilities best reach companies?

Panelist Excerpts

Here are my favorite excerpts from the “Smart Home” panel, which consisted of Carolyn Nguyen (Microsoft, Director of Technology Policy Group), Eric Lightner (DOE, Director of the Smart Grid Task Force), Michael Beyerle (GE Appliances, Manager of Marketing), Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer), Lee Tien (EFF, Senior Staff Attorney), and Craig Heffner (Tactical Network Solutions, Security Researcher).

1)      Excerpts from Carolyn Nguyen (Microsoft, Director of Technology Policy Group)

  • On the individual consumer: “[…] a unique aspect of the IoT, as far as the individual is concerned, is its potential to revolutionize how individuals will interact with the physical world and enable a seamless integration between the digital and the physical world as never before […] The IoT, with its network of sensors and potential to sense the environment, can help assist individuals and people to make optimized and context-appropriate decisions […] As the individual is increasingly objectified by the quantity of data available about them, it’s important that we have a dialogue today and now, as we are just at the dawn of the IoT, to create a viable, sustainable data ecosystem that is centered on the individual.”
  • On the IoT ecosystem: “Taking a look at the evolution and the emerging data-driven economy, this is how we all started, where a person shares data with another person that they have a good relationship with and can trust that the data won’t be misused. The terminology that I use is that the data is being actively provided to the individual. In the evolution going forward, we evolve from this model to where I share data with an entity for which I receive a service: a store, a bank, a post office. Again, this is usually an entity with whom I either have a good relationship with or know I can trust. And this is true, whether this is in the physical world or in the digital world. So if we evolve this a little bit further, where there is now such an entity may be able to share personal data with other entities, with or without my knowledge. We talk about the terminology, as this data that is being generated or inferred as data that is passively generated about me. In other words, I am not actively involved in this transaction.  So as we move further in the evolution, there is more and more data being shared. And furthermore, it is now also possible that other parties that are in my social network can share data about me. So for example, a friend uploading my photo into the service. In this view, it is already very difficult for an individual to control the collection and distribution of information about me. And traditional control mechanisms such as notice and consent begin to lose meaning, as the individual most often automatically gives consent without a true understanding of how the data is distributed or used.  Moving forward, into the Internet of Things with ubiquitous sensors, the situation is clearly further exacerbated. We’ve already heard about Fitbit, sensors in my shirt, sensors in pants that can tweet out information about me, my car giving out information about potholes in the street, average speed, etc. There are devices in my home that are giving information about activities, temperature, whether I am home or not. Devices in my workspace, as well as devices in a public space. So increasingly, the amount of data that will be generated, as was already mentioned this morning, would be primarily passively collected and generated. It is, however, in the data-driven economy, it is this flow of data that has the potential to create new benefits and new innovations and create a foundation for a new economy. Over-restriction of this flow can restrict the potential value, but lax regulation can clearly harm the individual and violate their rights.”

2)      Excerpts from Eric Lightner (DOE, Director of Smart Grid Task Force)

  • On energy usage data privacy: “So we started a number of, I would say, initiatives around this, centered on the consumer. A couple I will just mention quickly. One is called Green Button and that’s really an effort to standardize the information, the customer usage, the energy usage information that you can have access to through your utility in a standardized format and download that information and use that in different applications. We also stimulated the market by funding some developers of technology to look at, okay, if you have this standardized customer energy use and information, what kind of applications and services could we create around that. So we funded some companies to develop some of those technologies. That sort of gave rise to questions of privacy. Hey, I want to use my information, I want to look at it in a more detailed fashion. I probably want to share it with third parties for additional services to me, what are the privacy implications of that? So we started another initiative called the Voluntary Code of Conduct on Data Privacy.  This is something that is actively ongoing. We are working with utilities and a number of stakeholders to really figure out what sort of — just the baseline of protections and processes that we can put in place across utilities in a voluntary way. Many utilities are regulated by their states and they already have policies and laws about how to handle data, but it’s not consistent across the states, so we really wanted to try to develop a voluntary, consistent practice. So you, as a consumer, would then feel more comfortable about how that information is being used within the utility and what the process is for you to give consent to share that information with third parties of your choice for different products and services.”

3)      Excerpts from Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer)

  • On the current state of the IoT: “And what is at the center of that is this interesting development that, each of these manufacturers is pursuing a model where I build my device, I connect my device to my cloud, my manufacturer-specific cloud, and then I give you, as a consumer, an app for your smart phone. And it begs the question, where this goes. Where does all of this end up? […] If I end up with more apps on my phone to control the physical world than I have on my phone to begin with, to control all of the other stuff, it feels like we’ve failed the consumer in a big way. And so at SmartThings, what we are working on is actually bringing a solution into the middle of this. We’ve created a platform that is targeted at the smart home, initially, and to put in the palm of the consumer’s hand not one app per device, but rather one app. But more importantly, to allow these devices to work together.”
  • On data security and data ownership: “Our things and our data have to be secured. And we, as the consumer or the owner of our things, need to own the data that comes from those things. They are our things, it should be our data. Just because I bought it from a particular manufacturer doesn’t mean it’s their data. It’s my data. That sharing of that data then needs to be contextual […] These systems need to be highly reliable and available and they also need to be open.”

4)      Excerpts from Lee Tien (Electronic Frontier Foundation, Senior Staff Attorney)

  • On IoT privacy considerations:  “I’m not really a cheerleader for the Internet of Things. To me, it raises a huge number of privacy and security issues, to the extent that IoT devices entail ubiquitous collection of large amounts of data about what people do. And I mean, I think that’s the main thing, that what we are talking about is collecting data about people’s activities, and therefore that is always going to raise some very serious privacy issues. […] So with respect to the home, my starting point is probably pretty conventional. As Justice Scalia said in the 2001 Kyllo Thermal Imaging case, in the home, our cases show all details are intimate, because the entire area is held safe from prying government eyes. Now we are not discussing government surveillance today, but I think all consumer privacy, anyone who thinks about the privacy issues thoughtfully, is going to have an eye on what data about household activities or personal activities the government could end up obtaining, either directly from the devices or from IoT providers, whether using legal process or other less savory means.”
  • On smart meter technology:  “Smart meters are a good example. And in California we, along with the Center for Democracy and Technology, helped write very strong FIPPS-based approach to energy usage data that is in the hands of utilities, recognizing in California that there were a lot of serious privacy issues around the granular energy usage data.  I like to use this quote from Siemens in Europe a few years ago where they said, you know, we, Siemens, have the technology to record energy use every minute, second, and microsecond, more or less live. From that, we can infer how many people are in the home, what they do, whether they are upstairs, downstairs, do you have a dog, when do you usually get up, when did you get up this morning, when you have a shower. Masses of private data. And obviously, this is a European perspective, which is especially solicitous of privacy, and yet the ability to make those kinds of inferences from energy usage data is clearly there. Now in the California proceeding, one of the things that we do not do is we do not regulate anything about what the consumer, per se, can or can’t do with the data that they have. Indeed, the whole thing is, right now, very consumer empowerment based, because it is consumer consent that provides the main way that utilities can hand the information off or share it with someone else. […] We also use rules that are modeled after HIPAA business associate type rules, so that downstream recipients of data shared from the utilities are bound in a similar way.”
  • On IoT data security considerations: “I think that you have to worry also about the way that the wireless networking exposes data to interception. We are wary that industries who are moving into this space are not necessarily as mature about the security issues as those as, say, at Microsoft. The relatively cheap or lower grade devices may lack the computing resources or, for economic reasons, there will be less incentive to put good security in them. And fourth, that the security perimeter for IoT devices is actually rather different because, depending on where the endpoint devices are, there may be a higher risk of direct tampering. […] I think that one of the things that is going to be important in this area is also the ability of the consumer to exercise what we at the EFF call the right to tinker or right to repair. I think in the comments, there were some rather interesting points about various kinds of consumer rights that could be built into this area. But I think one of the most important is actually being able to know, inspect your device, and understand them, to know what they do, because transparency is going to be a big problem.”

5)      Excerpts from Craig Heffner (Tactical Network Solutions, Security Researcher)

  • On security of firmware and IoT devices: “And consumer devices typically, they don’t have any security, at least by today’s standards. I mean, you have simple things like vendors leaving backdoors in their products, either because it is something that the developer left in and they just forgot about or maybe they left it in so that when they get a customer support call, they can remote into the system and fix it for them and so it lowers, you know, the time they have to spend doing tech support and things like that. And we are not even dealing with sophisticated types of attacks to break a lot of these systems. I actually teach like a five day class on, you know, breaking embedded systems. And people – that’s why I’m trying to condense five days into five minutes here, but people are astounded at, you know, especially people from the security community who are used to breaking things like Windows and PCs and things like that, they don’t really have experience with embedded devices, are astounded at the lack of security that they have typically. […] They had simple vulnerabilities that anyone in the security community who looked at it would be able to break. And it doesn’t take a lot of technical expertise to do that. And I think the real reason why these exist, why we have these problems in embedded devices is there is no financial incentive to companies to make their devices secure. […] And these are simple things that people may not think of, and may not think through, but they can be very difficult to go back and change, especially in embedded products. Because updating the software, updating the firmware, is not necessarily trivial in many cases.”
  • On the everyday IoT consumer:  “Unfortunately, I don’t think that trying to educate users will get us where we need to be. You know, the mantra for years in computer security has been educate the user, educate the user. Well, guess what? We’ve had security problems for decades. That clearly isn’t working. Users don’t understand the technologies they are dealing with. I hear the term, people always say, people are so technologically — you know, they understand all this technology. No, they don’t. They have a phone with pictures on it and they point at the pictures. That is not understanding technology.”

The AvMed Data Breach Settlement: What’s it going to Cost?

$3,000,000, based on the recently proposed settlement agreement involving the 2009 AvMed data breach incident.


Once finally approved, this settlement would resolve the claims asserted against AvMed and would provide monetary relief to all affected customers, including customers who were not actually victims of identity theft.  The proposed settlement goes well beyond the credit monitoring offer that typically results from data breach class action settlements.  According to the plaintiffs’ unopposed motion to approve the settlement:

“All told, the Settlement is a tremendous achievement for the Plaintiffs and proposed Settlement Classes, and provides landmark relief that will serve as a model for other companies who face similar lawsuits.”

The Facts

AvMed, Inc. is a Florida-based health insurance provider.  In December 2009, two laptop computers were stolen from AvMed’s conference room.  The laptops contained the unencrypted personally identifiable information (PII) of 1.2 million AvMed customers.  The unencrypted PII consisted of customers’ names, addresses, Social Security numbers, and medical health information.

The Allegations

According to the affected customers, AvMed’s failure to properly secure their PII (in accordance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) standards) resulted in (1) the theft of some affected customers’ identities, and (2) with respect to all affected customers, the overpayment for insurance coverage.

The first claim (i.e., based on customers whose identities were stolen and who suffered economic harm as a result) is fairly straightforward and uncontroversial.

The second claim (related to the overpayment of premiums by all affected customers) is a bit more novel.  This second claim is based on an unjust enrichment theory, which the Eleventh Circuit addressed prior to remanding this case back to the district court.  The Eleventh Circuit recognized the customers’ unjust enrichment claim, stating that when AvMed charged customers, as part of their premium payments, to fund the administrative costs of data security, AvMed was unjustly enriched to the extent it subsequently failed to implement those data security measures.  This notion is premised on the fact that customers paid monthly premiums to AvMed, a portion of which was presumably allocated to the data security efforts that AvMed promised its customers.  And, of course, AvMed did not implement these promised data security efforts, but nevertheless retained the entirety of the customers’ premiums.  Accordingly, under this theory of unjust enrichment, the customers paid for undelivered services and thus are entitled to partial refunds of their premiums.

The Settlement

Under the terms of the settlement, AvMed agrees to create a $3M settlement fund. Customers who can show that they actually suffered identity theft as a result of the 2009 data breach can make claims to recover monetary losses associated with the identity theft.  Additionally, all affected customers (whether they suffered actual identity theft or not), will be entitled to claim $10 for each year that they paid premiums to AvMed, subject to a cap of $30.  The cash payments available to all affected customers provide reimbursement for the portion of their insurance premiums that AvMed should have allocated to data protection and security.
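
For a rough sense of how the premium-refund piece works in practice, here is a minimal sketch of the $10-per-year, $30-cap calculation (my own illustration, not language from the settlement documents):

    def premium_refund_claim(years_of_premiums_paid: int) -> int:
        # Illustrative only: $10 for each year of premiums paid, capped at $30 total.
        return min(10 * years_of_premiums_paid, 30)

    # A customer who paid premiums for three years can claim $30;
    # a customer who paid for five years is still capped at $30.
    print(premium_refund_claim(3))  # 30
    print(premium_refund_claim(5))  # 30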

Additionally, under the settlement, AvMed is required to implement wide-ranging measures to ensure that its customers’ PII is protected, including:

  1. instituting mandatory security awareness and training programs for all company employees
  2. instituting mandatory training on appropriate laptop use and security for all company employees whose employment responsibilities include accessing information stored on company laptop computers
  3. upgrading all company laptop computers with additional security mechanisms, including GPS tracking technology
  4. adopting new password protocols and full disk encryption technology on all company desktops and laptops
  5. installing physical security upgrades at company facilities and offices to further safeguard workstations from theft
  6. reviewing and revising written policies and procedures to enhance information security

For Comparison’s Sake

So, just for fun, here’s how this settlement stacks up against some other recent, high-profile data breach settlements:

  • Johansson-Dohrmann v. Cbr Sys., Inc., No. 12-CV-1115 (S.D. Cal. July 24, 2013) – established a $2.5 million fund to provide approximately 300,000 class members with two years of credit monitoring and identity theft reimbursement.
  • Beringer v. Certegy Check Services, Inc., No. 07-cv-01657 (M.D. Fla. Sept. 3, 2008) – established a $5 million fund to provide approximately 37 million class members with up to two years of credit monitoring and identity theft reimbursement.
  • In re Heartland Payment Sys. Inc. Customer Data Sec. Breach Litig., MDL No. 09-2046 (S.D. Tex. 2012) – established a $2.4 million fund from which to provide over 100 million class members with identity theft reimbursement.
  • Rowe v. Unicare Life and Health Ins. Co., No. 09-cv-02286 (N.D. Ill. Sept. 14, 2011) – established a $3 million fund to provide approximately 220,000 class members with one year of credit monitoring and identity theft reimbursement.

The Nordstrom Case: What’s in an Email Address?

Personal Identification Information (PII), according to the US District Court for the Eastern District of California applying California’s Song–Beverly Credit Card Act of 1971 (the “Credit Card Act”) (Cal. Civ. Code §§ 1747 et seq.).  In the class action case Capp v. Nordstrom, a customer alleged that Nordstrom requested his email address in connection with a credit card transaction at a Nordstrom retail store for the purpose of sending him an e-receipt.  The customer further alleged that Nordstrom then used his email address to send him unsolicited marketing materials in violation of the Credit Card Act.  The issue the court faced, among others, was whether an email address is PII under the Credit Card Act.

The Facts

According to the customer, a Nordstrom cashier asked him to provide his email address to receive an electronic receipt.  Believing it was required to complete the transaction, the customer provided his email address to the cashier.  The cashier then typed the customer’s email address into the portable sales device, at which point in the transaction the customer’s credit card number and email address were recorded in the same portable device.  As expected, the customer later received an email with his receipt; however, according to the customer, he also received marketing and promotional materials from Nordstrom “on a nearly daily basis.”

The Credit Card Act

Under the Song-Beverly Credit Card Act, a company that accepts credit cards for business transactions cannot “request, or require as a condition to accepting the credit card as payment in full or in part for goods or services, the cardholder to provide personal identification information, which the person … or corporation accepting the credit card writes, causes to be written, or otherwise records upon the credit card transaction form or otherwise.”  As to the definition of PII, the statute states that PII means “information concerning the cardholder, other than information set forth on the credit card, and including, but not limited to, the cardholder’s address and telephone number.”

The Credit Card Act imposes civil penalties for violations “not to exceed two hundred fifty dollars ($250) for the first violation and one thousand dollars ($1,000) for each subsequent violation.”

The Decision

The statutory definition of PII makes no mention of email addresses. The district court noted that there is no published case deciding the question of whether an email address constitutes PII under the Credit Card Act.  Accordingly, without a controlling California Supreme Court decision on point, the district court was tasked with predicting how the California Supreme Court might decide the issue.

To do so, the district court pointed to a recent California Supreme Court case Pineda v. Williams–Sonoma Stores, Inc. (2011).  In Pineda, the California Supreme Court interpreted the words “personal identification information” to include a cardholder’s ZIP code.  The California Supreme Court’s analysis focused on the notion that a cardholder’s ZIP code can be used, together with the cardholder’s name, to locate his or her full address; and, importantly, a cardholder’s address and her ZIP code both constitute information unnecessary to the sales transaction that can be used for commercial purposes.  As the district court put it:

“In this case, an email address is within the scope of the statute’s broad terms concerning the cardholder as well because a cardholder’s email address pertains to or regards to a cardholder in a more specific and personal way than does a ZIP code.  Instead of referring to the general area in which a cardholder lives or works, a cardholder’s email address permits direct contact and implicates the privacy interests of a cardholder. Therefore, this Court predicts that the California Supreme Court would decide that an email address constitutes personal identification information as those terms are defined by section 1747.08(b) of the Credit Card Act.”

Nordstrom also argued that, if email addresses were determined to be PII, the Credit Card Act claim would necessarily be preempted by the CAN-SPAM Act.  The district court rejected this argument and held that the customer’s claims were not preempted because the Credit Card Act regulates the collection of email addresses and does not regulate the content or transmission of the underlying messages.

Introducing Startups to the Crowd: The SEC’s “Regulation Crowdfunding”

As I discussed in a previous post last spring, startups and investors alike have been eagerly awaiting action by the U.S. Securities and Exchange Commission (“SEC”) to promulgate rules to facilitate equity-based crowdfunding.  Well, at last, the SEC has proposed crowdfunding rules as mandated by the Jumpstart Our Business Startups Act (the “JOBS Act”).  The JOBS Act, enacted back in April of 2012, is intended to enable startups and small businesses to raise capital through crowdfunding.  The public comment period is set for 90 days, which means that equity-based crowdfunding could become a reality in early 2014.


The text of the SEC’s notice of proposed rulemaking is an ambitious 585 pages.  Unlike the SEC, I value brevity.  So, after providing a quick refresher on crowdfunding and the Securities Act, the remainder of this post will discuss the key provisions that potential crowdfunding issuers (i.e., the startups) and investors (i.e., the crowd) may find interesting.  Specifically, I will point out a few key aspects of the long-awaited crowdfunding rule’s requirements for exemption from the registration requirements of the Securities Act.

Background: Crowdfunding and the Securities Act

Currently, the only forms of crowdfunding authorized in the US are those that do not involve the offer of a share in any financial returns or profits that the fundraiser may expect to generate from business activities financed through crowdfunding.  Examples of crowdfunding websites that have become mainstream include the likes of indiegogo and Kickstarter.  These platforms prohibit founders (the project starters) from offering to share profits with contributors (i.e., equity or security transactions) because such models would trigger the application of federal securities laws.  And, under the Securities Act, an offer and sale of securities must be registered unless an exemption is available.

However, newly created Section 4(a)(6) of the Securities Act, as promulgated under the JOBS Act, provides an exemption (the “crowdfunding exemption”) from the registration requirements of Securities Act Section 5 for certain crowdfunding transactions.  With the introduction of this exemption, startups and small businesses will be able to raise capital by making relatively low-dollar offerings of securities to “the crowd” without invoking the full regulatory burden that comes with issuing registered securities.  Additionally, the crowdfunding provisions create a new type of entity, referred to as a “funding portal”, to allow Internet-based platforms to facilitate the offer and sale of securities without having to register with the SEC as brokers.  Together, these measures are intended to help small businesses raise capital while protecting investors from potential fraud.

Startups:  Limits on Amount Raised

The exemption from registration provided by Section 4(a)(6) is available to a U.S. startup (the issuer) provided that “the aggregate amount sold to all investors by the issuer, including any amount sold in reliance on the exemption provided under Section 4(a)(6) during the 12-month period preceding the date of such transaction, is not more than $1,000,000.”

In the proposed rule, the SEC clarifies that only the capital raised in reliance on the crowdfunding exemption counts toward the limitation.  In other words, capital raised through other means will not be counted against the $1M that may be sold in reliance on the crowdfunding exemption.  As the SEC stated in its notice of proposed rulemaking:

“If an issuer sold $800,000 pursuant to the exemption provided in Regulation D during the preceding 12 months, this amount would not be aggregated in an issuer’s calculation to determine whether it had reached the maximum amount for purposes of Section 4(a)(6).”
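
To make the aggregation point concrete, here is a brief sketch, under my reading of the proposal described above, showing that only prior Section 4(a)(6) sales in the trailing 12 months count against the $1,000,000 cap:

    def remaining_crowdfunding_capacity(prior_4a6_sales_last_12_months, cap=1_000_000):
        # Illustrative only: sales under other exemptions (e.g., Regulation D) are not aggregated.
        return max(cap - sum(prior_4a6_sales_last_12_months), 0)

    # An issuer that sold $800,000 under Regulation D but only $200,000 under
    # Section 4(a)(6) in the preceding 12 months could still raise up to
    # $800,000 more in reliance on the crowdfunding exemption.
    print(remaining_crowdfunding_capacity([200_000]))  # 800000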

Startups: Limits on the Method of Crowdfunding

Under Section 4(a)(6)(C), an offering seeking the crowdfunding exemption must be “conducted through a broker or funding portal that complies with the requirements of Section 4A(a).”  This means that crowdfunding can only occur through an intermediary, and that intermediary must meet the requirements of either (1) a broker, or (2) a funding portal. The SEC proposed two related limitations here:

1)      Single intermediary – Prohibits an issuer from using more than one intermediary to conduct an offering or concurrent offerings made in reliance on the crowdfunding exemption.  For example, an issuer could not use two different funding portals for the same offering, or even for different offerings conducted concurrently.

2)      Online-only requirement – Requires that an intermediary (i.e., the broker or funding portal) effect crowdfunding transactions exclusively through an intermediary’s platform. The term “platform” means “an Internet website or other similar electronic medium through which a registered broker or a registered funding portal acts as an intermediary in a transaction involving the offer or sale of securities in reliance on Section 4(a)(6).”

According to the SEC’s notice, with respect to the online-only requirement:

“We believe that an online-only requirement enables the public to access offering information and share information publicly in a way that will allow members of the crowd to decide whether or not to participate in the offering and fund the business or idea.  The proposed rules would accommodate other electronic media that currently exist or may develop in the future. For instance, applications for mobile communication devices, such as cell phones or smart phones, could be used to display offerings and to permit investors to make investment commitments.”

A Quick Note about Funding Portals

As mentioned above, to fit within the crowdfunding exemption, the offering must be conducted through a broker or funding portal that complies with the requirements of Securities Act Section 4A(a).

Exchange Act Section 3(a)(80) (added by Section 304 of the JOBS Act) defines the term “funding portal” as any person acting as an intermediary in a transaction involving the offer or sale of securities for the account of others, solely pursuant to Securities Act Section 4(a)(6), that does not: (1) offer investment advice or recommendations; (2) solicit purchases, sales or offers to buy the securities offered or displayed on its platform or portal; (3) compensate employees, agents or other persons for such solicitation or based on the sale of securities displayed or referenced on its platform or portal; (4) hold, manage, possess or otherwise handle investor funds or securities; or (5) engage in such other activities as the Commission, by rule, determines appropriate.

Under the SEC’s proposed rules, the definition of “funding portal” is exactly the same as the statutory definition, except the word “broker” is substituted for the word “person”.  The SEC is making clear that funding portals are brokers (albeit a subset of brokers) under the federal securities laws.

Investors: Limits on Amount Invested

Under Section 4(a)(6)(B), the aggregate amount sold to any investor by an issuer, including any amount sold in reliance on the exemption during the 12-month period preceding the date of such transaction, cannot exceed: “(i) the greater of $2,000 or 5 percent of the annual income or net worth of such investor, as applicable, if either the annual income or the net worth of the investor is less than $100,000; and (ii) 10 percent of the annual income or net worth of such investor, as applicable, not to exceed a maximum aggregate amount sold of $100,000, if either the annual income or net worth of the investor is equal to or more than $100,000.”

Because the statutory definition above creates some potential ambiguity, the SEC’s rule seeks to clarify the relationship between annual income and net worth for purposes of determining the applicable investor limitation.  Essentially, the proposed rules take a “whichever is greater” approach for determining whether limitation (i) or (ii) applies.  As the rule proposes (a short illustrative calculation follows the list below):

  • Where both annual income and net worth are less than $100,000, then the limitation will be set at the greater of (a) $2,000 or (b) the greater of (x) 5% of annual income or (y) 5% of net worth.
  • Where either annual income or net worth is equal to or exceeds $100,000, then the limitation will be set at the greater of (a) 10% of annual income or (b) 10% of net worth; provided, however, that in either case the amount may not exceed $100,000.
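
Expressed as code, the “whichever is greater” approach looks roughly like the following.  This is my own illustrative sketch of the limits as described in the bullets above, not text from the proposed rule:

    def investor_limit(annual_income: float, net_worth: float) -> float:
        # Illustrative 12-month investment limit under the reading of Section 4(a)(6)(B) above.
        greater_measure = max(annual_income, net_worth)
        if annual_income < 100_000 and net_worth < 100_000:
            # Greater of $2,000 or 5% of the greater of annual income or net worth.
            return max(2_000, 0.05 * greater_measure)
        # Otherwise, 10% of the greater of annual income or net worth, capped at $100,000.
        return min(0.10 * greater_measure, 100_000)

    print(investor_limit(30_000, 50_000))     # 2500.0   (5% of $50,000 beats the $2,000 floor)
    print(investor_limit(150_000, 80_000))    # 15000.0  (10% of $150,000)
    print(investor_limit(2_000_000, 90_000))  # 100000   (10% would be $200,000, capped at $100,000)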

Related to investor limits, but more important for startups to understand, the proposed rules alleviate burdens associated with vetting investor suitability.  Specifically, the rule allows startups to reasonably rely on the efforts that the intermediary takes in order to determine that the amount purchased by an investor will not cause the investor to exceed investor limits.

Copyright and Web-DVR Broadcasting: The Latest Aereo and FilmOn X Decisions

Two recent decisions have added more discord among federal courts as to the question of whether technology that allows users to record copies of over-the-air broadcasts on remote servers for later web viewing violates broadcasters’ exclusive public performance rights under the Copyright Act.  On October 8, 2013 the US District Court for the District of Massachusetts aligned itself with the Second Circuit, holding that such services do not violate broadcasters’ performance rights in Hearst Stations v. Aereo (Aereo).  Reaching the exact opposite conclusion back on September 5, 2013, was the US District Court for the District of Columbia in Fox Television v. FilmOn X (FilmOn X).

The split among district courts on this issue will likely lead to further appeals and decisions by the respective Courts of Appeals.  Ultimately, the issue could be heard by the Supreme Court in the coming years, depending on whether the Courts of Appeals remain similarly split.

Before turning to the current state of the law, this post will describe the technology at the heart of these copyright disputes, using Aereo as the example platform.

What is Aereo?

According to Aereo’s website, “Aereo is a technology platform that you can use to watch live broadcast television at home or on the go.” A potential Aereo user purchases a subscription from Aereo, which, in exchange, provides the user with a remote, cloud-based DVR to set and watch recordings.  The benefit to users is that the service only requires a compatible internet-enabled device, without the need to purchase antennas, boxes, or cables.  The concept is extremely simple:


Once a user becomes an Aereo member, the user logs in and is assigned a miniaturized, private, remote antenna and DVR.  Aereo’s technology gives consumers access to their antenna and DVR via a web browser and supported internet-enabled devices.  Once connected to his or her remote Aereo antenna, the user can access the Aereo platform to view all major broadcast networks live in HD.  Alternatively, the user can use the remote DVR to set recordings and watch the broadcasts later, whenever the user wants.

How does Aereo Work?

When Aereo decides to enter a particular geographic region or market, it installs an array of mini antennas.  Each of these mini antennas is no larger than a dime.  A large number of mini antennas are aggregated on a circuit board, which also contains other electronic components essential to Aereo’s Internet broadcast system.

While an antenna may be statically assigned to an individual user, most antennas are available for dynamic allocation by the tuner server.  Essentially, this means that a specific antenna is assigned to one specific user only while that user is watching television via Aereo, and is then assigned to a different user when the first user is done.  No single antenna is used by more than one user at the same time, even though dynamically allocated antennas are shared over time.  The antennas are networked to a tuner router and server, which in turn link to a video encoder.  The encoder converts the signals from the antennas into a digital video format for viewing on computers and mobile devices.

When a user selects a channel to watch through Aereo’s web or mobile app, the user’s request is sent to Aereo’s web server.  The server sends a command to the tuner router, which then identifies an available antenna and encoder slot.  Once the antenna begins receiving the signal, data for the requested channel flows from the antenna to the tuner router and then to the video encoder, where it is stored on an Aereo remote hard drive in a unique directory created for that specific user.  The data then passes through the distribution endpoint and over the Internet to the web or mobile app for the user’s consumption.
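
To make the request flow easier to follow, here is a simplified sketch of the dynamic antenna allocation described above.  It is purely illustrative; the class and method names are my own and do not reflect Aereo’s actual software:

    class TunerRouter:
        # Toy model of the tuner router: each active viewer gets a dedicated antenna.
        def __init__(self, antennas):
            self.available = list(antennas)   # pool of dime-sized antennas on the board
            self.assigned = {}                # user_id -> antenna

        def allocate(self, user_id):
            # One antenna serves exactly one user at any given time.
            if user_id not in self.assigned:
                self.assigned[user_id] = self.available.pop()
            return self.assigned[user_id]

        def release(self, user_id):
            # When the viewer tunes out, the antenna returns to the shared pool.
            self.available.append(self.assigned.pop(user_id))

    def watch_channel(router, user_id, channel):
        antenna = router.allocate(user_id)
        # In the real system, the encoder would convert this antenna's signal to digital
        # video, store it in a directory unique to this user, and stream it over the Internet.
        return f"user {user_id}: channel {channel} via antenna {antenna}"

    router = TunerRouter(antennas=["A1", "A2", "A3"])
    print(watch_channel(router, "alice", 5))
    router.release("alice")                  # alice's antenna goes back to the pool
    print(watch_channel(router, "bob", 7))   # bob may be handed the same antenna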

The Dispute

In the recent Aereo case, Hearst claimed that Aereo’s services violate its exclusive rights under Section 106 of the Copyright Act to: (1) publicly perform, (2) reproduce, (3) distribute, and (4) prepare derivative works based on its copyrighted programming.  The Court’s analysis focused on the first claim relating to public performance, which will be discussed below.

The Court quickly rejected Hearst’s claim that Aereo infringed its exclusive right to reproduce its works, stating that “holding a media company directly liable just because it provides technology that enables users to make copies of programming would be the rough equivalent of holding the owner of a copy machine liable because people use the machine to illegally reproduce copyrighted materials.”  Similarly, with respect to its distribution right, the Court sided with Aereo, relying heavily on the fact that Aereo’s technology does not allow users to download (only stream) the copyrighted content.  Likewise, with respect to Hearst’s exclusive right to make derivative works, the Court quickly disposed of Hearst’s argument that by reformatting intercepted programming Aereo violated the broadcaster’s right to prepare derivative works.  As the Court reasoned, “Hearst has presented no legal authority nor is the Court aware of any for the proposition that Aereo’s technology creates a derivative work merely by converting programs from their original digital format to a different digital format compatible with internet streaming.”

Public Performance Right

Having quickly dismissed the above claims, the Court focused its discussion on Hearst’s first claim regarding its exclusive right to publicly perform its copyrighted works.  The Copyright Act gives copyright owners of audiovisual works the exclusive right, among others, to “perform the copyrighted work publicly.”  Section 101 of the Act provides that “to perform” an audiovisual work means “to show its images in any sequence or make the sounds accompanying it audible.”  To make matters more confusing, the statute distinguishes between public and private performances.  In what has become known as the “Transmit Clause”, Section 101 provides that “to perform a work publicly” means to transmit a performance of the work to the public, “by means of any device or process, whether the members of the public capable of receiving the performance […] receive it in the same place or in separate places and at the same time or at different times.”  For additional insight – or perhaps more confusion – the House Committee on the Judiciary’s Report to the Copyright Act (revised 1976) provides a discussion on the intended meaning of “perform” and “public performance”:

“Concepts of public performance and public display cover not only the initial rendition or showing, but also any further act by which that rendition or showing is transmitted or communicated to the public. Thus, for example: a singer is performing when he or she sings a song; a broadcasting network is performing when it transmits his or her performance (whether simultaneously or from records); a local broadcaster is performing when it transmits the network broadcast; a cable television system is performing when it retransmits the broadcast to its subscribers; and any individual is performing whenever he or she … communicates the performance by turning on a receiving set.”

The Decision

The District Court for the District of Massachusetts in Aereo relied on the Second Circuit’s holding that similar DVR technology does not infringe a copyright holder’s exclusive right to perform its work publicly.  In 2008, the Second Circuit decided the Cablevision case, holding that RS–DVR technology did not infringe the original broadcaster’s public performance right because the technology’s manner of transmitting a recorded program to the viewer who recorded it did not constitute a public performance.  The Cablevision opinion concluded:

“In sum, we find that the transmit clause directs us to identify the potential audience of a given transmission, i.e., the persons “capable of receiving” it, to determine whether that transmission is made “to the public.” Because each RS–DVR playback transmission is made to a single subscriber using a single unique copy produced by that subscriber, we conclude that such transmissions are not performances “to the public,” and therefore do not infringe any exclusive right of public performance.”

Likewise, earlier this year, the Second Circuit applied its reasoning from Cablevision in WNET, Thirteen v. Aereo (WNET) to the very Aereo service that was before the District of Massachusetts court.  In WNET, the Second Circuit affirmed its Cablevision decision and found that Aereo’s transmissions to subscribers also did not infringe.  The court described Cablevision’s holding as resting on two essential facts:

1)      The RS–DVR system created unique copies of each program a customer wished to record; and

2)      A customer could only view the unique copy that was generated on his behalf.

Adopting the Second Circuit’s rationale, the Massachusetts court found that Aereo’s system is consistent with the two key factors above because it (1) employs individually assigned antennas to create copies unique to each user and (2) creates those copies only at the user’s request.

The Split: FilmOn X

Not persuaded by the Second Circuit’s Cablevision or WNET decisions, the District Court for the District of Columbia came out the other way in the FilmOn X case.  Quite simply, its decision rested on a different, and not unreasonable, statutory interpretation of the Transmit Clause.  The FilmOn X court reasoned that what makes a transmission public is not the intended audience of any given copy of the program, but the intended audience of the initial broadcast.

At the end of the day, this battle of statutory interpretations may need to be settled by the Supreme Court, or – don’t hold your breath – an Act of Congress.

Software Patents: The Federal Circuit Goes on a WildTangent

In the next few weeks, the U.S. Supreme Court will decide whether to grant certiorari in the patent dispute between WildTangent and Ultramercial.  The case, if heard by the Court, would be a suitable vehicle to provide guidance on how 35 U.S.C. § 101 applies to computer-implemented inventions.  In essence, the question presented would read something like:  At what point does a method patent on an otherwise abstract concept sufficiently claim reference to a computer (or a computer-implemented service like the Internet) to make such an abstract concept patent eligible under 35 U.S.C. § 101?  To put this question in context, here’s a little background on the parties involved and the lower courts’ decisions.

The Infringer: WildTangent

WildTangent operates a games service that allows consumers around the world to access downloadable, online, and social games via the Internet. WildTangent’s service reaches over 20 million monthly gamers in the United States and Europe with a catalog of more than 1,000 games from nearly 100 developers.  Rather than paying to play, consumers can let an advertiser sponsor free game play sessions. To do so, the consumer must agree to display an advertisement before he is given access to the game.

The Patent Holder: Ultramercial, LLC

According to Ultramercial’s website, Ultramercial is a technology company that offers patented financial engines for monetizing online content along with integrated end-to-end solutions.  It developed an ad model consisting of “interactive, full-page ads that are always served in exchange for premium content or services.”  Ultramercial, more importantly for this post, is the holder of U.S. Patent No. 7,346,545 (the ‘545 patent).  The ‘545 patent claims a method for distributing copyrighted products (e.g., songs, movies, books) over the Internet where the consumer receives a copyrighted product for free in exchange for viewing an advertisement, and the advertiser pays for the copyrighted content.

The Dispute

In 2011, WildTangent challenged a ruling that it had infringed the ‘545 patent owned by Ultramercial, LLC.  Specifically, the ′545 patent claims a particular internet and computer-based method for monetizing copyrighted products, consisting of the following steps:

(1) receiving media products from a copyright holder, (2) selecting an advertisement to be associated with each media product, (3) providing said media products for sale on an Internet website, (4) restricting general public access to the media products, (5) offering free access to said media products on the condition that the consumer view the advertising, (6) receiving a request from a consumer to view the advertising, (7) facilitating the display of advertising and any required interaction with the advertising, (8) allowing the consumer access to the associated media product after such display and interaction, if any, (9) recording this transaction in an activity log, and (10) receiving payment from the advertiser.
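
For readers who think better in code, the following restates the claimed flow as a short runnable sketch.  It is only an illustration of the claim language quoted above; the names and the toy logic are mine and do not reflect Ultramercial’s actual system:

    def serve_ad_supported_content(media_products, ads, activity_log):
        # Illustrative walk-through of the ten claimed steps; not the patented implementation.
        catalog = dict(zip(media_products, ads))          # (1)-(2) receive media, pair each with an ad
        storefront = {p: "restricted" for p in catalog}   # (3)-(4) list on a website, gate public access
        for product, ad in catalog.items():
            print(f"Offer: watch '{ad}' to unlock '{product}' for free")  # (5) conditional free access
            request = {"product": product, "ad": ad}      # (6) consumer requests to view the ad
            print(f"Displaying ad: {ad}")                 # (7) facilitate display and any interaction
            storefront[product] = "unlocked"              # (8) grant access after the ad is shown
            activity_log.append(request)                  # (9) record the transaction in an activity log
            print(f"Billing advertiser for '{ad}'")       # (10) receive payment from the advertiser
        return storefront

    log = []
    print(serve_ad_supported_content(["Movie A"], ["Sponsor ad"], log))

Whether tying these steps to the Internet supplies a “meaningful limitation” is precisely the question taken up below.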

At issue was whether Ultramercial’s process-type patent (also referred to as a method patent) claims an abstract idea.  If so, the patent should be held invalid, as abstract ideas have long been held to be non-patentable subject matter.  WildTangent argued that the idea of using advertising as a form of currency is abstract and vague, like the abstract concept of hedging, which proved patent-ineligible in the Supreme Court’s Bilski decision.  Not persuaded, Judge Rader, writing for the Court of Appeals for the Federal Circuit, upheld Ultramercial’s patent, stating that it “does not simply claim the age-old idea that advertising can serve as currency. Instead [it] discloses a practical application of this idea.”

Then, after being ordered by the Supreme Court to reexamine the case in light of the Court’s Mayo decision, the Federal Circuit once again upheld its decision this past June and ruled that Ultramercial’s patent does not claim ineligible subject matter.

The Law of Patent Eligibility under Section 101

Section 101 of the Patent Act establishes the subject-matter eligibility requirement for all patents. It provides that “whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.” Section 101’s subject-matter eligibility is considered a threshold check, since the actual patentability of a claimed invention ultimately depends on more rigorous conditions, such as novelty, non-obviousness, and adequate disclosure.

Nevertheless, case law has established three categories of subject matter that fall outside the eligibility bounds of § 101: (1) laws of nature, (2) natural phenomena, and (3) abstract ideas.  As the Supreme Court explained in its Bilski decision, these non-patentable categories are considered “the basic tools of scientific and technological work” and accordingly they are “part of the storehouse of knowledge of all men” and women.  Judge Rader, writing for the Federal Circuit, explained that “an abstract idea is one that has no reference to material objects or specific examples—i.e., it is not concrete.”

A claim can embrace an abstract idea and still be patentable when drawn to an application of that idea.  On the other hand, a claim is not patent eligible if the claim is to the abstract idea itself.  The inquiry, then, is to determine on which side of the line the claim falls: does the claim cover only an abstract idea, or does it cover an application of an abstract idea?  In making that determination, the relevant question is whether the claim, as a whole, includes meaningful limitations restricting it to an application, rather than merely the abstract idea.

Okay, but what exactly does “meaningful limitation” mean?  Well, it’s helpful, for starters, to describe what “meaningful limitation” does not mean.

First, if a claim covers all practical applications of an abstract idea, it is not meaningfully limited.  This much is not hotly disputed.  That is, it is well settled law that if a claim preempts all practical uses of an idea, then it is not patent eligible.  For example, one cannot claim the formula for converting binary-coded decimal numerals to pure binary numerals because this mathematical formula has no substantial practical application except in connection with digital computing.
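
As a concrete illustration of that classic example, the conversion itself takes only a few lines; this is simply a standard BCD-to-binary routine of my own, not anything drawn from the opinions:

    def bcd_to_int(nibbles):
        # Convert binary-coded decimal digits (most significant first) to a plain integer.
        value = 0
        for nibble in nibbles:
            if not 0 <= nibble <= 9:
                raise ValueError("each BCD nibble must encode a decimal digit 0-9")
            value = value * 10 + nibble
        return value

    # The decimal digits 1, 2, 5 stored as the BCD nibbles 0001 0010 0101
    # become the pure binary number 0b1111101, i.e. 125.
    print(bin(bcd_to_int([0b0001, 0b0010, 0b0101])))  # 0b1111101

Because a routine like this has essentially no use outside a digital computer, a claim on the formula itself would preempt every practical application of it.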

Second, going a step further, it is also well established that even if a claim does not entirely preempt all application of an abstract idea, courts will still not find meaningful limitation to the extent the claim contains only insignificant constraints on the use or implementation of the invention.  For example, simply tying the claim to a relevant audience, a category of use, field of use, or technological environment will not carry the day; the claim will still be deemed patent-ineligible.

So now that it is clear what “meaningful limitation” is not, here is where courts come out on the much less settled question of what “meaningful limitation” is.  A claim is said to be meaningfully limited if it requires a particular machine implementing a process or a particular transformation of matter.  A claim will also be meaningfully limited when, in addition to the abstract idea, it recites added limitations that are essential to the invention. In those instances, the added limitations do more than recite pre- or post-solution activity; they are central to the solution itself. And, in such circumstances, the abstract idea is not wholly preempted; it is only preempted when practiced in conjunction with the other necessary elements of the claimed invention.

The Federal Circuit’s Questionable Rationale

When assessing computer-implemented claims, the Federal Circuit admits that the mere reference to a computer will not save a method claim from being deemed too abstract to be patent eligible.  However, in the same opinion, the Federal Circuit stated that the fact that a claim is tied to a computer is nevertheless an important indication of patent eligibility.  In Judge Rader’s opinion, “this tie to a machine moves it farther away from a claim to the abstract idea itself […] [and] makes it less likely that the claims will pre-empt all practical applications of the idea.”  The Federal Circuit goes even further and finds “meaningful limitation” when a claim references a computer in such a way that the computer plays a meaningful role in the performance of the claimed invention.  The difficulty with the standard created by the Federal Circuit is that “meaningful limitation” in the context of a computer-based method patent is now defined as one in which the computer plays a “meaningful role”.  This, to legal practitioners and researchers and developers alike, is a difficult concept to implement and strategize around.  One might even go as far as to say that the “meaningful limitation” standard is itself an abstract idea under the Ultramercial decision.

Ultimately, the Federal Circuit’s decision seems to rely heavily on procedural considerations.  As the Federal Circuit stated, “the district court erred in requiring the patentee to come forward with a construction that would show the claims were eligible. The district court held the asserted claim to be ineligible because it is “abstract.” In this procedural posture, the complaint and the patent must by themselves show clear and convincing evidence that the claim is not directed to an application of an abstract idea, but to a disembodied abstract idea itself.”

It would seem that the Supreme Court could – and should – provide useful guidance on this point.  Otherwise, seemingly any claimed invention that simply invokes computers or applications of computer technology would survive summary judgment and be put to the trier of fact, which of course would likely have significant impacts on the costs and burdens associated with patent litigation.