
Privacy and the Internet of Things – FTC Workshop (The Smart Home)

Attribution: Vovastepanov

The Federal Trade Commission (“FTC”) held a public workshop on November 19, 2013, to explore consumer privacy and security issues related to the Internet of Things (“IoT”).  After briefly describing what the IoT is and the intended focus of the FTC workshop, this post will highlight excerpts from the panelists who spoke at the workshop.  I will limit this recap to the first part of the workshop, which dealt with the home-based IoT (i.e., the “Smart Home”).  A full transcript of the entire workshop is available here.

The “Internet of Things” is a term used to describe the system made up of devices, each of which is enabled to communicate with other devices within an integrated information network.  Or, to use the Wikipedia definition, the “Internet of Things refers to uniquely identifiable objects and their virtual representations in an Internet-like structure.”  Often, we hear the individual devices within the IoT ecosystem labeled as “smart” devices.  Smart devices generally have the ability to communicate with consumers, transmit data back to companies, and compile data for third parties.

According to the FTC, “the workshop focused on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”  The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world.

Following the workshop, the FTC published questions and requested public comments by January 10th on issues raised at the workshop, including:

  • How can consumers benefit from the Internet of Things?
  • What are the unique privacy and security concerns and solutions associated with the Internet of Things?
  • What existing security technologies and practices could businesses and consumers use to enhance privacy and security in the Internet of Things?
  • What is the role of the Fair Information Practice Principles in the Internet of Things?
  • What steps can companies take (before putting a product or service on the market) to prevent connected devices from becoming targets of, or vectors for, malware or adware?
  • How can companies provide effective notice and choice?  If there are circumstances where effective notice and choice aren’t possible, what solutions are available to protect consumers?
  • What new challenges does constant, passive data-collection pose?
  • What effect does the Internet of Things have on data de-identification or anonymization?
  • How can privacy and security risks be weighed against potential societal benefits (such as improved health-care decision-making or energy efficiency) for consumers and businesses?
  • How can companies update device software for security purposes or patch security vulnerabilities in connected devices, particularly if they do not have an ongoing relationship with the consumer?  Do companies have adequate incentives to provide updates or patches over products’ lifecycles?
  • How should the FTC encourage innovation in this area while protecting consumers’ privacy and the security of their data?
  • Are new use-restrictions necessary to protect consumers’ privacy?
  • How could shifting social norms be taken into account?
  • How can consumers learn more about the security and privacy of specific products or services?
  • How can consumers or researchers with insight into vulnerabilities best reach companies?

Panelist Excerpts

Here are my favorite excerpts from the “Smart Home” panel, which was composed of Carolyn Nguyen (Microsoft, Director of Technology Policy Group), Eric Lightner (DOE, Director of Smart Grid Task Force), Michael Beyerle (GE Appliances, Manager of Marketing), Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer), Lee Tien (EFF, Senior Staff Attorney), and Craig Heffner (Tactical Network Solutions, Security Researcher).

1)      Excerpts from Carolyn Nguyen (Microsoft, Director of Technology Policy Group)

  • On the individual consumer: “[…] a unique aspect of the IoT, as far as the individual is concerned, is its potential to revolutionize how individuals will interact with the physical world and enable a seamless integration between the digital and the physical world as never before […] The IoT, with its network of sensors and potential to sense the environment, can help assist individuals and people to make optimized and context-appropriate decisions […] As the individual is increasingly objectified by the quantity of data available about them, it’s important that we have a dialogue today and now, as we are just at the dawn of the IoT, to create a viable, sustainable data ecosystem that is centered on the individual.”
  • On the IoT ecosystem: “Taking a look at the evolution and the emerging data-driven economy, this is how we all started, where a person shares data with another person that they have a good relationship with and can trust that the data won’t be misused. The terminology that I use is that the data is being actively provided to the individual. In the evolution going forward, we evolve from this model to where I share data with an entity for which I receive a service: a store, a bank, a post office. Again, this is usually an entity with whom I either have a good relationship with or know I can trust. And this is true, whether this is in the physical world or in the digital world. So if we evolve this a little bit further, where there is now such an entity may be able to share personal data with other entities, with or without my knowledge. We talk about the terminology, as this data that is being generated or inferred as data that is passively generated about me. In other words, I am not actively involved in this transaction.  So as we move further in the evolution, there is more and more data being shared. And furthermore, it is now also possible that other parties that are in my social network can share data about me. So for example, a friend uploading my photo into the service. In this view, it is already very difficult for an individual to control the collection and distribution of information about me. And traditional control mechanisms such as notice and consent begin to lose meaning, as the individual most often automatically gives consent without a true understanding of how the data is distributed or used.  Moving forward, into the Internet of Things with ubiquitous sensors, the situation is clearly further exacerbated. We’ve already heard about Fitbit, sensors in my shirt, sensors in pants that can tweet out information about me, my car giving out information about potholes in the street, average speed, etc. There are devices in my home that are giving information about activities, temperature, whether I am home or not. Devices in my workspace, as well as devices in a public space. So increasingly, the amount of data that will be generated, as was already mentioned this morning, would be primarily passively collected and generated. It is, however, in the data-driven economy, it is this flow of data that has the potential to create new benefits and new innovations and create a foundation for a new economy. Over-restriction of this flow can restrict the potential value, but lax regulation can clearly harm the individual and violate their rights.”

2)      Excerpts from Eric Lightner (DOE, Director of Smart Grid Task Force)

  • On energy usage data privacy: “So we started a number of, I would say, initiatives around this, centered on the consumer. A couple I will just mention quickly. One is called Green Button and that’s really an effort to standardize the information, the customer usage, the energy usage information that you can have access to through your utility in a standardized format and download that information and use that in different applications. We also stimulated the market by funding some developers of technology to look at, okay, if you have this standardized customer energy use and information, what kind of applications and services could we create around that. So we funded some companies to develop some of those technologies. That sort of gave rise to questions of privacy. Hey, I want to use my information, I want to look at it in a more detailed fashion. I probably want to share it with third parties for additional services to me, what are the privacy implications of that? So we started another initiative called the Voluntary Code of Conduct on Data Privacy.  This is something that is actively ongoing. We are working with utilities and a number of stakeholders to really figure out what sort of — just the baseline of protections and processes that we can put in place across utilities in a voluntary way. Many utilities are regulated by their states and they already have policies and laws about how to handle data, but it’s not consistent across the states, so we really wanted to try to develop a voluntary, consistent practice. So you, as a consumer, would then feel more comfortable about how that information is being used within the utility and what the process is for you to give consent to share that information with third parties of your choice for different products and services.”

3)      Excerpts from Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer)

  • On the current state of the IoT: “And what is at the center of that is this interesting development that, each of these manufacturers is pursuing a model where I build my device, I connect my device to my cloud, my manufacturer-specific cloud, and then I give you, as a consumer, an app for your smart phone. And it begs the question, where this goes. Where does all of this end up? […] If I end up with more apps on my phone to control the physical world than I have on my phone to begin with, to control all of the other stuff, it feels like we’ve failed the consumer in a big way. And so at SmartThings, what we are working on is actually bringing a solution into the middle of this. We’ve created a platform that is targeted at the smart home, initially, and to put in the palm of the consumer’s hand not one app per device, but rather one app. But more importantly, to allow these devices to work together.”
  • On data security and data ownership: “Our things and our data have to be secured. And we, as the consumer or the owner of our things, need to own the data that comes from those things. They are our things, it should be our data. Just because I bought it from a particular manufacturer doesn’t mean it’s their data. It’s my data. That sharing of that data then needs to be contextual […] These systems need to be highly reliable and available and they also need to be open.”

4)      Excerpts from Lee Tien (Electronic Frontier Foundation, Senior Staff Attorney)

  • On IoT privacy considerations:  “I’m not really a cheerleader for the Internet of Things. To me, it raises a huge number of privacy and security issues, to the extent that IoT devices entail ubiquitous collection of large amounts of data about what people do. And I mean, I think that’s the main thing, that what we are talking about is collecting data about people’s activities, and therefore that is always going to raise some very serious privacy issues. […] So with respect to the home, my starting point is probably pretty conventional. As Justice Scalia said in the 2001 Kyllo Thermal Imaging case, in the home, our cases show all details are intimate, because the entire area is held safe from prying government eyes. Now we are not discussing government surveillance today, but I think all consumer privacy, anyone who thinks about the privacy issues thoughtfully, is going to have an eye on what data about household activities or personal activities the government could end up obtaining, either directly from the devices or from IoT providers, whether using legal process or other less savory means.”
  • On smart meter technology:  “Smart meters are a good example. And in California we, along with the Center for Democracy and Technology, helped write very strong FIPPS-based approach to energy usage data that is in the hands of utilities, recognizing in California that there were a lot of serious privacy issues around the granular energy usage data.  I like to use this quote from Siemens in Europe a few years ago where they said, you know, we, Siemens, have the technology to record energy use every minute, second, and microsecond, more or less live. From that, we can infer how many people are in the home, what they do, whether they are upstairs, downstairs, do you have a dog, when do you usually get up, when did you get up this morning, when you have a shower. Masses of private data. And obviously, this is a European perspective, which is especially solicitous of privacy, and yet the ability to make those kinds of inferences from energy usage data is clearly there. Now in the California proceeding, one of the things that we do not do is we do not regulate anything about what the consumer, per se, can or can’t do with the data that they have. Indeed, the whole thing is, right now, very consumer empowerment based, because it is consumer consent that provides the main way that utilities can hand the information off or share it with someone else. […] We also use rules that are modeled after HIPAA business associate type rules, so that downstream recipients of data shared from the utilities are bound in a similar way.”
  • On IoT data security considerations: “I think that you have to worry also about the way that the wireless networking exposes data to interception. We are wary that industries who are moving into this space are not necessarily as mature about the security issues as those as, say, at Microsoft. The relatively cheap or lower grade devices may lack the computing resources or, for economic reasons, there will be less incentive to put good security in them. And fourth, that the security perimeter for IoT devices is actually rather different because, depending on where the endpoint devices are, there may be a higher risk of direct tampering. […] I think that one of the things that is going to be important in this area is also the ability of the consumer to exercise what we at the EFF call the right to tinker or right to repair. I think in the comments, there were some rather interesting points about various kinds of consumer rights that could be built into this area. But I think one of the most important is actually being able to know, inspect your device, and understand them, to know what they do, because transparency is going to be a big problem.”

5)      Excerpts from Craig Heffner (Tactical Network Solutions, Security Researcher)

  • On security of firmware and IoT devices: “And consumer devices typically, they don’t have any security, at least by today’s standards. I mean, you have simple things like vendors leaving backdoors in their products, either because it is something that the developer left in and they just forgot about or maybe they left it in so that when they get a customer support call, they can remote into the system and fix it for them and so it lowers, you know, the time they have to spend doing tech support and things like that. And we are not even dealing with sophisticated types of attacks to break a lot of these systems. I actually teach like a five day class on, you know, breaking embedded systems. And people – that’s why I’m trying to condense five days into five minutes here, but people are astounded at, you know, especially people from the security community who are used to breaking things like Windows and PCs and things like that, they don’t really have experience with embedded devices, are astounded at the lack of security that they have typically. […] They had simple vulnerabilities that anyone in the security community who looked at it would be able to break. And it doesn’t take a lot of technical expertise to do that. And I think the real reason why these exist, why we have these problems in embedded devices is there is no financial incentive to companies to make their devices secure. […] And these are simple things that people may not think of, and may not think through, but they can be very difficult to go back and change, especially in embedded products. Because updating the software, updating the firmware, is not necessarily trivial in many cases.”
  • On the everyday IoT consumer:  “Unfortunately, I don’t think that trying to educate users will get us where we need to be. You know, the mantra for years in computer security has been educate the user, educate the user. Well, guess what? We’ve had security problems for decades. That clearly isn’t working. Users don’t understand the technologies they are dealing with. I hear the term, people always say, people are so technologically — you know, they understand all this technology. No, they don’t. They have a phone with pictures on it and they point at the pictures. That is not understanding technology.”

Do Not Track: How Impending California Law Will Affect All Commercial Web Sites

Do Not Track has found a home in California.  As of September 3rd, California Assembly Bill No. 370 (“AB-370”) sits upon Governor Jerry Brown’s desk awaiting his signature.  Once signed, the bill will amend the California Online Privacy Protection Act (“CalOPPA” at Section 22575 of the California Business and Professions Code) and require commercial website operators that collect personally identifiable information (“PII”) through the Internet to disclose how they respond to Do Not Track (“DNT”) signals.  Most mainstream web browsers have functionality that allows the user to signal her desire not to be tracked.  However, under current federal and state law, websites are not legally required to honor that signal.  While AB-370 does not make honoring a DNT signal a legal requirement, it does aim to inform consumers as to which websites have a practice in place to honor DNT signals.

Attribution: Electronic Frontier Foundation

Background on the Existing CalOPPA Statute

In 2003, the California Legislature passed CalOPPA.  The law requires operators of “web sites and online services” that collect users’ PII to conspicuously post their privacy policies on their sites and to comply with those posted policies.  CalOPPA currently requires privacy policies to identify the categories of PII collected, the categories of third parties with whom that PII may be shared, the process for consumers to review and request changes to their PII, the process for notifying users of material changes to the privacy policy, and the effective date of the privacy policy.  An operator has 30 days to comply after receiving notice of noncompliance with the privacy policy posting requirement.  Failure to comply with the CalOPPA requirements may result in penalties of up to $2,500 for each violation.

It is important to note that CalOPPA has broad reach.  Virtually all commercial websites fall within its scope for two reasons.  First, it is hard to imagine any commercial website not collecting PII, which (for the purposes of CalOPPA) is defined under Section 22577 as “individually identifiable information about an individual consumer collected online by the operator from that individual and maintained by the operator in an accessible form, including […] (1) a first and last name, (2) a home or other physical address, including street name and name of a city or town, (3) an e-mail address, (4) a telephone number, (5) a social security number, (6) any other identifier that permits the physical or online contacting of a specific individual, or (7) information concerning a user that the Web site or online service collects online from the user and maintains in personally identifiable form in combination with an identifier described in this subdivision.”  Second, even though this is a California law, it applies to any website that collects PII from consumers residing in California.  As such, CalOPPA (including the AB-370 revisions) has a de facto nationwide reach.

The Need to Amend CalOPPA (via AB-370)

The impetus for introducing AB-370 is the growing concern over online tracking, which is also referred to as online behavioral targeting.  According to the good folks at Wikipedia,

“When a consumer visits a web site, the pages they visit, the amount of time they view each page, the links they click on, the searches they make and the things that they interact with, allow sites to collect that data, and other factors, create a ‘profile’ that links to that visitor’s web browser. As a result, site publishers can use this data to create defined audience segments based upon visitors that have similar profiles. When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow advertisers to position their online ads in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered. On the theory that properly targeted ads will fetch more consumer interest, the publisher (or seller) can charge a premium for these ads over random advertising or ads based on the context of a site.”

And, by many accounts, the practice of online behavioral targeting is on the rise.  Last year, the Wall Street Journal featured an article describing user-tailored advertising and the explosive demand for web-browser-collected consumer data.  One such practice is the online auctioning of consumer web-browsing data.  The article notes that “[d]espite rising privacy concerns, the online industry’s data-collection efforts have expanded in the past few years. One reason is the popularity of online auctions, where advertisers buy data about users’ Web browsing. Krux [which sells a service for website publishers to protect their customer data] estimated that such auctions, known as real-time bidding exchanges, contribute to 40% of online data collection.”  The article also cites one study in which the average visit to a webpage triggered 56 instances of data collection.

And, so, here we have AB-370 to the rescue.  According to the bill’s author, Assemblyman Al Muratsuchi, AB-370 “would increase consumer awareness of the practice of online tracking by websites and online services, […] [which] will allow the consumer to make an informed decision about their use of the website or service.”

CalOPPA After AB-370

In addition to the requirements under the existing law outlined above, the amended CalOPPA will:

1)      Require an operator’s privacy policies to disclose how it responds to web browser DNT signals or “other mechanisms that provide consumers the ability to exercise choice regarding the collection of PII about an individual consumer’s online activities over time and across third-party Web sites or online services”; provided the operator engages in PII data collection;

2)      Require an operator’s privacy policies to disclose whether third parties may collect PII about an individual consumer’s online activities over time and across different Web sites when a consumer uses the operator’s site; and

3)      Permit an operator to satisfy the response disclosure requirement for DNT signals by providing a clear and conspicuous hyperlink in the privacy policy to an online location containing a description, including the effects, of any program or protocol the operator follows that offers the consumer that choice.

For all the non-techies out there, it may be useful to quickly explain how Do Not Track technology works.  It is actually relatively simple.  In practice, a consumer wishing to communicate a DNT signal to the sites she visits would generally do so via her web browser controls.  When she changes the setting in her browser properties, the browser enables the HTTP header field (known as the “DNT Header”) that requests that a web application disable its tracking of an individual user.  The header field name is DNT, and it accepts three values: “1” if the user does not want to be tracked (opt out), “0” if the user consents to being tracked (opt in), or “null” (no header sent) if the user has not expressed a preference.  The default behavior required by the standard is not to send the header (i.e., the null value) until the user chooses to enable the setting via her browser.
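
To make the mechanics concrete, here is a minimal sketch, assuming a Node.js server built with Express (the route and response messages are hypothetical), of how a site operator might inspect the DNT Header on an incoming request:

```typescript
import express, { Request, Response } from "express";

const app = express();

app.get("/", (req: Request, res: Response) => {
  // The DNT Header arrives as the string "1" (opt out), "0" (opt in),
  // or is absent entirely (the "null" case: no preference expressed).
  const dnt = req.header("DNT");

  if (dnt === "1") {
    // Honoring the opt-out would mean skipping tracking cookies,
    // analytics beacons, and similar mechanisms for this visitor.
    res.send("DNT:1 received -- tracking is disabled for this visit.");
  } else {
    // Either explicit consent ("0") or no preference (header not sent).
    res.send("No opt-out signal received.");
  }
});

app.listen(3000);
```

Note that nothing in AB-370 obligates the operator to take the opt-out branch; the bill only requires the operator to disclose which branch it takes.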

Implications of AB-370

Before going into the implications of the bill, it should be made clear what AB-370 is not.  One thing that the text of the bill and supporting commentary make clear is that AB-370 is not a Do Not Track law.  Back in March 2012, the FTC finalized the “Protecting Consumer Privacy in an Era of Rapid Change” report, in which the FTC endorsed the implementation of a Do Not Track system.  The report is not a regulation and, as such, there remains (even after AB-370 is signed into law) no legal requirement for sites to honor the headers.

In contrast, AB-370 is a disclosure law.  Its aim is to promote transparency.  The logic goes something like this:  If a privacy policy discloses how an operator handles a Do Not Track signal from a browser, then individual consumers will make informed decisions about their use of the site or the service.  As the California Attorney General’s Office put it, “AB-370 is a transparency proposal, not a Do Not Track proposal. When a privacy policy discloses whether or not an operator honors a Do Not Track signal from a browser, individuals may make informed decisions about their use of the site or service.”

What Remains to Be Seen Through AB-370 Transparency

On the surface, the bill’s disclosure requirement might seem simple.  However, the next logical question is “but, how exactly?”  Despite the best efforts of industry consortia, such as the World Wide Web Consortium (W3C), there is still no clear consensus on how to handle DNT signals.  Even less clear is how best to handle DNT signals in the face of third-party tracking on the operator’s site.  So, by extension, how best to disclose the operator’s handling of DNT signals is likewise unclear.  Until an industry practice becomes standardized, the best way forward is for the operator of the site to simply (but accurately) state how it responds to the DNT Header.  By way of example, this could perhaps be achieved by adding one of the following sentences to the operator’s privacy policy (a short code sketch of the second scenario follows the list):

  • If Operator Doesn’t Recognize Do Not Track Signals: “This Site does not receive or respond to the DNT Header”
  • If Operator Does Recognize Do Not Track Signals: “This Site receives the DNT Header and responds to a DNT:1 value by … {fill in the blank with how data collection by the operator and/or its third parties is impacted}”
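
For operators in the second camp, here is a browser-side sketch of what “responds to a DNT:1 value by …” might look like in practice, assuming the site’s only tracking comes from a hypothetical third-party analytics script (the URL is made up for illustration).  The navigator.doNotTrack property exposes the user’s preference to page scripts, though browser support has historically varied:

```typescript
// Returns true when the browser is sending DNT:1 on the user's behalf.
// navigator.doNotTrack is "1", "0", or null; because implementations have
// varied across browsers, the property is typed defensively here.
function userOptedOutOfTracking(): boolean {
  const nav = navigator as Navigator & { doNotTrack?: string | null };
  return nav.doNotTrack === "1";
}

if (!userOptedOutOfTracking()) {
  // Inject the (hypothetical) third-party analytics script only for users
  // who have not sent an opt-out signal.
  const script = document.createElement("script");
  script.src = "https://analytics.example.com/tracker.js";
  document.head.appendChild(script);
}
```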

Lastly, even though AB-370 is a disclosure law and not a legal requirement to honor DNT signals, the practical effect could leave little distinction.  Consumer Watchdog predicts, albeit somewhat cautiously, that “requiring transparency could well prompt companies to compete based on their privacy practices [and] will likely prompt more companies to honor Do Not Track requests […]”.  How website operators react to the full transparency impact of AB-370 will be interesting to see! (Pun entirely intended)

comScore: A Lesson in Unauthorized Use of Consumers’ Data

Last week, the Seventh Circuit upheld a lower court’s class certification in the case of Harris v. comScore, Inc.  Although issued without opinion, the Seventh Circuit’s refusal to reverse the District Court’s certification should signal to online marketing and analytics firms that there may be significant exposure related to consumer data collection.

Public Domain (“Big Data”)

The comScore class action suit was based on violations of the Stored Communications Act (“SCA” at 18 U.S.C. § 2701(a)(1), (2)), the Electronic Communications Privacy Act (“ECPA” at 18 U.S.C. § 2511(1)(a), (d)), the Computer Fraud and Abuse Act (“CFAA” at 18 U.S.C. § 1030(a)(2)(C)), and common law unjust enrichment.

The complaint alleged that comScore improperly obtained and used consumers’ personal information after they downloaded and installed comScore’s software.  The software at issue here is called OSSProxy.  Once installed on a computer, OSSProxy constantly collects data about the user’s computer activity and sends that data back to comScore’s servers.  Depending on how cognizant you are of data collection software and current practices, the following may or may not shock you:

“The OSSProxy software collects a variety of information about a consumer’s computer, including the names of every file on the computer, information entered into a web browser, including passwords and other confidential information, and the contents of PDF files.”

OSSProxy was installed on millions of computers between 2008 and 2011.  To accomplish this, comScore distributed its OSSProxy software through cooperation with third-party providers (appropriately referred to as “bundlers”) who distribute free digital products to consumers online.  Upon downloading a bundler’s free software, the consumer was prompted to download OSSProxy.  The prompt included a “Downloading Statement” and, at least in some cases, a link to comScore’s User License Agreement (ULA).  OSSProxy downloaded and installed on a consumer’s computer only after the consumer checked “Accept.”  The bundler’s free digital product downloaded and installed even if the consumer rejected the OSSProxy terms, although that fact was confusingly unapparent to an average consumer.

A critical common question among putative class members was whether comScore exceeded the scope of the consent it received from consumers.  As reproduced in the District Court opinion, the Downloading Statement reads in relevant part as follows:

“In order to provide this free download, RelevantKnowledge software, provided by TMRG, Inc., a comScore, Inc. company, is included in this download. This software allows millions of participants in an online market research community to voice their opinions by allowing their online browsing and purchasing behavior to be monitored, collected, aggregated, and once anonymized, used to generate market reports which our clients use to understand Internet trends and patterns and other market research purposes. The information which is monitored and collected includes internet usage information, basic demographic information, certain hardware, software, computer configuration and application usage information about the computer on which you install RelevantKnowledge. We may use the information that we monitor, such as name and address, to better understand your household demographics; for example, we may combine the information that you provide us with additional information from consumer data brokers and other data sources in accordance with our privacy policy. We make commercially viable efforts to automatically filter confidential personally identifiable information and to purge our databases of such information about our panelists when inadvertently collected. By clicking Accept you acknowledge that you are 18 years of age or older, an authorized user of the computer on which you are installing this application, and that you have read, agreed to, and have obtained the consent of all computer and TV users to the terms and conditions of the Privacy Statement and User License Agreement.”

After quickly dismissing the unjust enrichment claim as inappropriate for class action treatment, the Court allowed the claims based on the three federal statutes that provide protection against unauthorized data collection from the plaintiffs’ computers.  Each of the three statutes provides an exception to liability if the person obtaining the information has the consent of the computer user.

The plaintiffs alleged that comScore exceeded the scope of their consent to monitoring in the ULA (as incorporated via the Downloading Statement) by:

1)      “fuzzifying” or “obscuring” confidential information collected, rather than automatically filtering that information;

2)      failing to “make commercially viable efforts to purge” confidential information that it does collect from its database;

3)      intercepting phone numbers, social security numbers, user names, passwords, bank account numbers, credit card numbers, and other demographic information;

4)      intercepting the previous 25 websites accessed by a consumer before installation of comScore’s software, the names of every file on the consumer’s computer, the contents of iPod playlists on the computer, the web browsing history of smartphones synced with the computer, and portions of every PDF viewed by the user during web browsing sessions; and

5)      selling the data collected from the consumer’s computer.

Specifically, the Stored Communications Act (SCA) provides a private action against any person who intentionally accesses without authorization a facility through which an electronic communication service is provided or intentionally exceeds an authorization to access that facility; and thereby obtains, alters, or prevents authorized access to a wire or electronic communication while it is in electronic storage in such system. The Electronic Communications Privacy Act (ECPA) provides the same with respect to any person who intentionally intercepts, endeavors to intercept, or procures any other person to intercept or endeavor to intercept, any wire, oral, or electronic communication, or intentionally uses, or endeavors to use, the contents of any wire, oral, or electronic communication, knowing or having reason to know that the information was obtained through the interception of a wire, oral, or electronic communication.  Finally, the CFAA creates a private right of action against any person who intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains information from any protected computer.

The Court concluded that the class action requirements under Federal Rule of Civil Procedure 23(a) (i.e., numerosity, commonality, typicality, adequacy of representation, and ascertainability), as well as the requirements under Rule 23(b)(3) of predominance and superiority were all met.  Rule 23(b)(3) provides that a class action may be maintained only if “the court finds that the questions of law or fact common to class members predominate over any questions affecting only individual members, and that a class action is superior to other available methods for fairly and efficiently adjudicating the controversy.”  As to this “predominance and superiority” requirement, the Court was not moved by comScore’s assertion that class certification should be precluded due to the issue of whether each individual plaintiff suffered damage or loss from comScore’s actions.  As the Court stated,

“That argument has no applicability to the ECPA or SCA claims, both of which provide for statutory damages. The CFAA is different, however, in that it grants a civil action only to “[a]ny person who suffers damage or loss.”  [Nevertheless], the Seventh Circuit has recently reiterated that individual factual damages issues do not provide a reason to deny class certification when the harm to each plaintiff is too small to justify resolving the suits individually.”

The lesson here for businesses is to make sure their terms of service, privacy policies, license agreements, website/mobile app terms of use, etc. accurately reflect their actual practices regarding collection and use of customers’ data.  As companies increasingly leverage “big data” (whether as direct marketing firms or indirectly through outsourced analytics providers), the adequacy of the notice and consent obtained from customers might just be the single most important factor in avoiding a costly, high-profile class action lawsuit.