
Privacy and the Internet of Things – FTC Workshop (The Smart Home)

Attribution: Vovastepanov

The Federal Trade Commission (“FTC”) held a public workshop on November 19, 2013 to explore consumer privacy and security issues related to the Internet of Things (“IoT”).  After briefly describing what the IoT is and the intended focus of the workshop, this post will highlight excerpts from the panelists who spoke.  I will limit this recap to the first part of the workshop, which dealt with the home-based IoT (i.e., the “Smart Home”).  A full transcript of the entire workshop is available here.

The “Internet of Things” is a term used to describe a system of devices, each able to communicate with other devices within an integrated information network.  Or, to use the Wikipedia definition, the “Internet of Things refers to uniquely identifiable objects and their virtual representations in an Internet-like structure.”  Often, the individual devices within the IoT ecosystem are labeled “smart” devices.  Smart devices generally have the ability to communicate with consumers, transmit data back to companies, and compile data for third parties.

According to the FTC, “the workshop focused on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”  The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world.

Following the workshop, the FTC published questions and requested public comments by January 10th on issues raised at the workshop, including:

  • How can consumers benefit from the Internet of Things?
  • What are the unique privacy and security concerns and solutions associated with the Internet of Things?
  • What existing security technologies and practices could businesses and consumers use to enhance privacy and security in the Internet of Things?
  • What is the role of the Fair Information Practice Principles in the Internet of Things?
  • What steps can companies take (before putting a product or service on the market) to prevent connected devices from becoming targets of, or vectors for, malware or adware?
  • How can companies provide effective notice and choice?  If there are circumstances where effective notice and choice aren’t possible, what solutions are available to protect consumers?
  • What new challenges does constant, passive data-collection pose?
  • What effect does the Internet of Things have on data de-identification or anonymization?
  • How can privacy and security risks be weighed against potential societal benefits (such as improved health-care decision-making or energy efficiency) for consumers and businesses?
  • How can companies update device software for security purposes or patch security vulnerabilities in connected devices, particularly if they do not have an ongoing relationship with the consumer?  Do companies have adequate incentives to provide updates or patches over products’ lifecycles?
  • How should the FTC encourage innovation in this area while protecting consumers’ privacy and the security of their data?
  • Are new use-restrictions necessary to protect consumers’ privacy?
  • How could shifting social norms be taken into account?
  • How can consumers learn more about the security and privacy of specific products or services?
  • How can consumers or researchers with insight into vulnerabilities best reach companies?

Panelist Excerpts

Here are my favorite excerpts from the “Smart Home” panel, which comprised Carolyn Nguyen (Microsoft, Director of Technology Policy Group), Eric Lightner (DOE, Director of Smart Grid Task Force), Michael Beyerle (GE Appliances, Manager of Marketing), Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer), Lee Tien (EFF, Senior Staff Attorney), and Craig Heffner (Tactical Network Solutions, Security Researcher).

1)      Excerpts from Carolyn Nguyen (Microsoft, Director of Technology Policy Group)

  • On the individual consumer: “[…] a unique aspect of the IoT, as far as the individual is concerned, is its potential to revolutionize how individuals will interact with the physical world and enable a seamless integration between the digital and the physical world as never before […] The IoT, with its network of sensors and potential to sense the environment, can help assist individuals and people to make optimized and context-appropriate decisions […] As the individual is increasingly objectified by the quantity of data available about them, it’s important that we have a dialogue today and now, as we are just at the dawn of the IoT, to create a viable, sustainable data ecosystem that is centered on the individual.”
  • On the IoT ecosystem: “Taking a look at the evolution and the emerging data-driven economy, this is how we all started, where a person shares data with another person that they have a good relationship with and can trust that the data won’t be misused. The terminology that I use is that the data is being actively provided to the individual. In the evolution going forward, we evolve from this model to where I share data with an entity for which I receive a service: a store, a bank, a post office. Again, this is usually an entity with whom I either have a good relationship with or know I can trust. And this is true, whether this is in the physical world or in the digital world. So if we evolve this a little bit further, where there is now such an entity may be able to share personal data with other entities, with or without my knowledge. We talk about the terminology, as this data that is being generated or inferred as data that is passively generated about me. In other words, I am not actively involved in this transaction.  So as we move further in the evolution, there is more and more data being shared. And furthermore, it is now also possible that other parties that are in my social network can share data about me. So for example, a friend uploading my photo into the service. In this view, it is already very difficult for an individual to control the collection and distribution of information about me. And traditional control mechanisms such as notice and consent begin to lose meaning, as the individual most often automatically gives consent without a true understanding of how the data is distributed or used.  Moving forward, into the Internet of Things with ubiquitous sensors, the situation is clearly further exacerbated. We’ve already heard about Fitbit, sensors in my shirt, sensors in pants that can tweet out information about me, my car giving out information about potholes in the street, average speed, etc. There are devices in my home that are giving information about activities, temperature, whether I am home or not. Devices in my workspace, as well as devices in a public space. So increasingly, the amount of data that will be generated, as was already mentioned this morning, would be primarily passively collected and generated. It is, however, in the data-driven economy, it is this flow of data that has the potential to create new benefits and new innovations and create a foundation for a new economy. Over-restriction of this flow can restrict the potential value, but lax regulation can clearly harm the individual and violate their rights.”

2)      Excerpts from Eric Lightner (DOE, Director of Smart Grid Task Force)

  • On energy usage data privacy: “So we started a number of, I would say, initiatives around this, centered on the consumer. A couple I will just mention quickly. One is called Green Button and that’s really an effort to standardize the information, the customer usage, the energy usage information that you can have access to through your utility in a standardized format and download that information and use that in different applications. We also stimulated the market by funding some developers of technology to look at, okay, if you have this standardized customer energy use and information, what kind of applications and services could we create around that. So we funded some companies to develop some of those technologies. That sort of gave rise to questions of privacy. Hey, I want to use my information, I want to look at it in a more detailed fashion. I probably want to share it with third parties for additional services to me, what are the privacy implications of that? So we started another initiative called the Voluntary Code of Conduct on Data Privacy.  This is something that is actively ongoing. We are working with utilities and a number of stakeholders to really figure out what sort of — just the baseline of protections and processes that we can put in place across utilities in a voluntary way. Many utilities are regulated by their states and they already have policies and laws about how to handle data, but it’s not consistent across the states, so we really wanted to try to develop a voluntary, consistent practice. So you, as a consumer, would then feel more comfortable about how that information is being used within the utility and what the process is for you to give consent to share that information with third parties of your choice for different products and services.”

3)      Excerpts from Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer)

  • On the current state of the IoT: “And what is at the center of that is this interesting development that, each of these manufacturers is pursuing a model where I build my device, I connect my device to my cloud, my manufacturer-specific cloud, and then I give you, as a consumer, an app for your smart phone. And it begs the question, where this goes. Where does all of this end up? […] If I end up with more apps on my phone to control the physical world than I have on my phone to begin with, to control all of the other stuff, it feels like we’ve failed the consumer in a big way. And so at SmartThings, what we are working on is actually bringing a solution into the middle of this. We’ve created a platform that is targeted at the smart home, initially, and to put in the palm of the consumer’s hand not one app per device, but rather one app. But more importantly, to allow these devices to work together.”
  • On data security and data ownership: “Our things and our data have to be secured. And we, as the consumer or the owner of our things, need to own the data that comes from those things. They are our things, it should be our data. Just because I bought it from a particular manufacturer doesn’t mean it’s their data. It’s my data. That sharing of that data then needs to be contextual […] These systems need to be highly reliable and available and they also need to be open.”

4)      Excerpts from Lee Tien (Electronic Frontier Foundation, Senior Staff Attorney)

  • On IoT privacy considerations:  “I’m not really a cheerleader for the Internet of Things. To me, it raises a huge number of privacy and security issues, to the extent that IoT devices entail ubiquitous collection of large amounts of data about what people do. And I mean, I think that’s the main thing, that what we are talking about is collecting data about people’s activities, and therefore that is always going to raise some very serious privacy issues. […] So with respect to the home, my starting point is probably pretty conventional. As Justice Scalia said in the 2001 Kyllo Thermal Imaging case, in the home, our cases show all details are intimate, because the entire area is held safe from prying government eyes. Now we are not discussing government surveillance today, but I think all consumer privacy, anyone who thinks about the privacy issues thoughtfully, is going to have an eye on what data about household activities or personal activities the government could end up obtaining, either directly from the devices or from IoT providers, whether using legal process or other less savory means.”
  • On smart meter technology:  “Smart meters are a good example. And in California we, along with the Center for Democracy and Technology, helped write very strong FIPPS-based approach to energy usage data that is in the hands of utilities, recognizing in California that there were a lot of serious privacy issues around the granular energy usage data.  I like to use this quote from Siemens in Europe a few years ago where they said, you know, we, Siemens, have the technology to record energy use every minute, second, and microsecond, more or less live. From that, we can infer how many people are in the home, what they do, whether they are upstairs, downstairs, do you have a dog, when do you usually get up, when did you get up this morning, when you have a shower. Masses of private data. And obviously, this is a European perspective, which is especially solicitous of privacy, and yet the ability to make those kinds of inferences from energy usage data is clearly there. Now in the California proceeding, one of the things that we do not do is we do not regulate anything about what the consumer, per se, can or can’t do with the data that they have. Indeed, the whole thing is, right now, very consumer empowerment based, because it is consumer consent that provides the main way that utilities can hand the information off or share it with someone else. […] We also use rules that are modeled after HIPAA business associate type rules, so that downstream recipients of data shared from the utilities are bound in a similar way.”
  • On IoT data security considerations: “I think that you have to worry also about the way that the wireless networking exposes data to interception. We are wary that industries who are moving into this space are not necessarily as mature about the security issues as those as, say, at Microsoft. The relatively cheap or lower grade devices may lack the computing resources or, for economic reasons, there will be less incentive to put good security in them. And fourth, that the security perimeter for IoT devices is actually rather different because, depending on where the endpoint devices are, there may be a higher risk of direct tampering. […] I think that one of the things that is going to be important in this area is also the ability of the consumer to exercise what we at the EFF call the right to tinker or right to repair. I think in the comments, there were some rather interesting points about various kinds of consumer rights that could be built into this area. But I think one of the most important is actually being able to know, inspect your device, and understand them, to know what they do, because transparency is going to be a big problem.”

5)      Excerpts from Craig Heffner (Tactical Network Solutions, Security Researcher)

  • On security of firmware and IoT devices: “And consumer devices typically, they don’t have any security, at least by today’s standards. I mean, you have simple things like vendors leaving backdoors in their products, either because it is something that the developer left in and they just forgot about or maybe they left it in so that when they get a customer support call, they can remote into the system and fix it for them and so it lowers, you know, the time they have to spend doing tech support and things like that. And we are not even dealing with sophisticated types of attacks to break a lot of these systems. I actually teach like a five day class on, you know, breaking embedded systems. And people – that’s why I’m trying to condense five days into five minutes here, but people are astounded at, you know, especially people from the security community who are used to breaking things like Windows and PCs and things like that, they don’t really have experience with embedded devices, are astounded at the lack of security that they have typically. […] They had simple vulnerabilities that anyone in the security community who looked at it would be able to break. And it doesn’t take a lot of technical expertise to do that. And I think the real reason why these exist, why we have these problems in embedded devices is there is no financial incentive to companies to make their devices secure. […] And these are simple things that people may not think of, and may not think through, but they can be very difficult to go back and change, especially in embedded products. Because updating the software, updating the firmware, is not necessarily trivial in many cases.”
  • On the everyday IoT consumer:  “Unfortunately, I don’t think that trying to educate users will get us where we need to be. You know, the mantra for years in computer security has been educate the user, educate the user. Well, guess what? We’ve had security problems for decades. That clearly isn’t working. Users don’t understand the technologies they are dealing with. I hear the term, people always say, people are so technologically — you know, they understand all this technology. No, they don’t. They have a phone with pictures on it and they point at the pictures. That is not understanding technology.”

Copyright and Web-DVR Broadcasting: The Latest Aereo and FilmOn X Decisions

Two recent decisions have added to the discord among federal courts over whether technology that allows users to record copies of over-the-air broadcasts on remote servers for later web viewing violates broadcasters’ exclusive public performance rights under the Copyright Act.  On October 8, 2013, the US District Court for the District of Massachusetts aligned itself with the Second Circuit, holding in Hearst Stations v. Aereo (“Aereo”) that such services do not violate broadcasters’ performance rights.  Reaching the exact opposite conclusion back on September 5, 2013 was the US District Court for the District of Columbia in Fox Television v. FilmOn X (“FilmOn X”).

The split among District Courts on this issue will likely lead to further appeals and decisions by the respective Courts of Appeals.  Ultimately, the issue could be heard by the Supreme Court in the coming years, depending on whether the Courts of Appeals remain similarly split.

Before turning to the current state of the law, this post will describe the technology at the heart of these copyright disputes, using Aereo as the example platform.

What is Aereo?

According to Aereo’s website, “Aereo is a technology platform that you can use to watch live broadcast television at home or on the go.” A potential Aereo user purchases a subscription from Aereo, which, in exchange, provides the user with a remote, cloud-based DVR to set and watch recordings.  The benefit to users is that the service only requires a compatible internet-enabled device, without the need to purchase antennas, boxes, or cables.  The concept is extremely simple.

Once a potential user becomes an Aereo member, the user logs in and is assigned a miniaturized, private, remote antenna and DVR.  Aereo offers technology to give consumers access to their antenna and DVR via a web browser and supported internet-enabled devices.  Once the user has connected to his or her remote Aereo antenna, the user can access the Aereo platform to view all major broadcast networks live in HD.  Alternatively, the user can use the remote DVR to set recordings and watch the broadcasts later at his or her convenience.

How Does Aereo Work?

When Aereo decides to enter a particular geographic region or market, it installs an array of mini antennas.  Each of these mini antennas is no larger than a dime.  A large number of mini antennas are aggregated on a circuit board, which also contains other electronic components essential to Aereo’s Internet broadcast system.

While an antenna may be assigned to an individual user, most are available for dynamic allocation by the tuner server.  Essentially, this means that a specific antenna is assigned to one specific user only while that user is watching television via Aereo, and is then reassigned to a different user when the first user is done.  Even though dynamically allocated antennas are shared over time, no single antenna is used by more than one user at any given moment.  The antennas are networked to a tuner router and server, which in turn link to a video encoder.  The encoder converts the signals from the antennas into a digital video format for viewing on computers and mobile devices.
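To make the allocation model concrete, here is a minimal sketch of a pool in which each antenna serves at most one user at a time.  The class and method names are hypothetical; this is not Aereo’s actual code, just the invariant described above expressed in Python:

```python
# Illustrative sketch of dynamic antenna allocation (hypothetical names).
class TunerServer:
    def __init__(self, antenna_ids):
        self.free = set(antenna_ids)  # antennas not currently in use
        self.in_use = {}              # user_id -> antenna_id

    def allocate(self, user_id):
        """Assign a free antenna; each antenna serves one user at a time."""
        if user_id in self.in_use:    # user is already watching
            return self.in_use[user_id]
        if not self.free:             # pool exhausted
            raise RuntimeError("no antenna available")
        antenna = self.free.pop()
        self.in_use[user_id] = antenna
        return antenna

    def release(self, user_id):
        """Return the user's antenna to the pool when viewing ends."""
        antenna = self.in_use.pop(user_id, None)
        if antenna is not None:
            self.free.add(antenna)
```

Under this scheme, an antenna is shared across users over time but never concurrently.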

When a user selects a channel to watch through Aereo’s web or mobile app, the user’s request is sent to Aereo’s web server.  The server sends a command to the tuner router, which then identifies an available antenna and encoder slot.  Once the antenna begins receiving the signal, data for the requested channel flows from the antenna to the tuner router and then to the video encoder, where it is stored on an Aereo remote hard drive in a unique directory created for the specific user.  The data then travels through the distribution endpoint and over the Internet to the web or mobile app for the user’s consumption.

The Dispute

In the recent Aereo case, Hearst claimed that Aereo’s services violate its exclusive rights under Section 106 of the Copyright Act to: (1) publicly perform, (2) reproduce, (3) distribute, and (4) prepare derivative works based on its copyrighted programming.  The Court’s analysis focused on the first claim relating to public performance, which will be discussed below.

The Court quickly rejected Hearst’s claim that Aereo infringed its exclusive right to reproduce its works, stating that “holding a media company directly liable just because it provides technology that enables users to make copies of programming would be the rough equivalent of holding the owner of a copy machine liable because people use the machine to illegally reproduce copyrighted materials.”  Similarly, with respect to the distribution right, the Court sided with Aereo, relying heavily on the fact that Aereo’s technology allows users only to stream, not download, the copyrighted content.  Likewise, the Court quickly disposed of Hearst’s argument that Aereo, by reformatting intercepted programming, violated the broadcaster’s right to prepare derivative works.  As the Court reasoned, “Hearst has presented no legal authority nor is the Court aware of any for the proposition that Aereo’s technology creates a derivative work merely by converting programs from their original digital format to a different digital format compatible with internet streaming.”

Public Performance Right

Having quickly dismissed the above claims, the Court focused its discussion on Hearst’s first claim regarding its exclusive right to publicly perform its copyrighted works.  The Copyright Act gives copyright owners of audiovisual works the exclusive right, among others, to “perform the copyrighted work publicly.”  Section 101 of the Act provides that “to perform” an audiovisual work means “to show its images in any sequence or make the sounds accompanying it audible.”  To make matters more confusing, the statute distinguishes between public and private performances.  In what has become known as the “Transmit Clause,” Section 101 provides that “to perform a work publicly” means to transmit a performance of the work to the public, “by means of any device or process, whether the members of the public capable of receiving the performance […] receive it in the same place or in separate places and at the same time or at different times.”  For additional insight – or perhaps more confusion – the House Committee on the Judiciary’s Report accompanying the Copyright Act of 1976 provides a discussion of the intended meaning of “perform” and “public performance”:

“Concepts of public performance and public display cover not only the initial rendition or showing, but also any further act by which that rendition or showing is transmitted or communicated to the public. Thus, for example: a singer is performing when he or she sings a song; a broadcasting network is performing when it transmits his or her performance (whether simultaneously or from records); a local broadcaster is performing when it transmits the network broadcast; a cable television system is performing when it retransmits the broadcast to its subscribers; and any individual is performing whenever he or she … communicates the performance by turning on a receiving set.”

The Decision

The District Court for the District of Massachusetts in Aereo relied on the Second Circuit’s holding that similar DVR technology does not infringe a copyright holder’s exclusive right to perform its work publicly.  In 2008, the Second Circuit decided the Cablevision case, holding that RS–DVR technology did not infringe the original broadcaster’s public performance right because the manner in which the technology transmitted a recorded program to the viewer who recorded it did not constitute a public performance.  The Cablevision opinion concluded:

“In sum, we find that the transmit clause directs us to identify the potential audience of a given transmission, i.e., the persons “capable of receiving” it, to determine whether that transmission is made “to the public.” Because each RS–DVR playback transmission is made to a single subscriber using a single unique copy produced by that subscriber, we conclude that such transmissions are not performances “to the public,” and therefore do not infringe any exclusive right of public performance.”

Likewise, earlier this year, in WNET, Thirteen v. Aereo (“WNET”), the Second Circuit applied its reasoning from Cablevision to the very Aereo service that was before the District of Massachusetts.  In WNET, the Second Circuit reaffirmed its Cablevision decision and found that Aereo’s transmissions to subscribers likewise did not infringe.  The court described Cablevision’s holding as resting on two essential facts:

1)      The RS–DVR system created unique copies of each program a customer wished to record; and

2)      A customer could only view the unique copy that was generated on his behalf.

Adopting the Second Circuit’s rationale, the Massachusetts court found that Aereo’s system is consistent with the two key factors above because it (1) employs individually assigned antennas to create copies unique to each user and (2) creates those copies only at the user’s request.

The Split: FilmOn X

Not persuaded by the Second Circuit’s Cablevision or WNET decisions, the District Court for the District of Columbia came out the other way in the FilmOn X case.  Quite simply, its decision rested on a different, and not unreasonable, statutory interpretation of the Transmit Clause.  The FilmOn X court reasoned that what makes a transmission public is not the intended audience of any given copy of the program, but the intended audience of the initial broadcast.

At the end of the day, this battle of statutory interpretations may need to be settled by the Supreme Court, or – don’t hold your breath – an Act of Congress.

Imminent Expansion of the Security Breach Notification Law

Back in 2003, California became the first state in the U.S. to pass a security breach notification law.  California’s Security Breach Notification Law applies to any business that conducts business in California, which of course means that the law reaches nearly all companies with an e-commerce presence.  In a nutshell, the statute requires businesses to notify California residents when the security of those residents’ personal information has been breached.  The rationale behind the law is that breach notification ensures that residents become aware of a breach, allowing them to take action to mitigate potential financial losses due to fraudulent use of their personal information.

Attribution: Tom Murphy

Fast forward ten years.  The California Attorney General’s specialized eCrime Unit found that, increasingly, “criminals are targeting Internet Web sites with inadequate security, including some social media Internet Web sites, to harvest email addresses, user names, and passwords,” and “[b]ecause most people do not use unique passwords for each of their accounts, acquiring the information on one account can give a thief access to [many different] accounts.”

And so, on September 10, the California legislature passed and sent to the Governor’s desk a bill that would amend California’s security breach notification law in a significant way.  This is the second bill in as many weeks to reach the Governor’s desk addressing consumer privacy.  Last week it was AB-370, which I discussed here.  This week, it is California Senate Bill 46 (“SB-46”), which would expand the definition of “personal information” subject to California’s existing security breach disclosure requirements to include “a user name or email address, in combination with a password or security question and answer that permits access to an online account.”  This could have a considerable impact, given that notification requirements following a security breach depend upon whether the compromised data falls within the definition of “personal information.”

Overview of California’s Security Breach Notification Law

California’s Security Breach Notification Law (Section 1798.82 of the California Civil Code) requires businesses that own or license computerized data containing personal information to disclose any breach of the security of the system, following discovery of the breach, to any California resident whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person.  The triggering event is a “breach of the security of the system,” meaning the unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by the business.  Likewise, Section 1798.82 requires businesses that maintain (but do not own or license) computerized data containing personal information to notify the owner or licensee of the information of any associated security breach immediately following discovery.

Where a data breach occurs and a business is required to issue a notification, the law requires that the notification be written in plain language and include (1) the name and contact information of the business, (2) the types of personal information believed to have been the subject of the breach, (3) the estimated date, or date range, of the breach, (4) the date of the notice, (5) whether the notification was delayed as a result of a law enforcement investigation, (6) a general description of the breach incident, and (7) the toll-free telephone numbers and addresses of the major credit reporting agencies if the breach exposed a social security number or a driver’s license number.  Additionally, at the discretion of the business, the notification may also include information about what the business has done to protect individuals whose information has been breached and advice on steps individuals may take to protect themselves.

Up until the apparently imminent passage of SB-46, “personal information” has meant an individual’s first name or first initial and last name in combination with that individual’s (1) social security number, (2) driver’s license or California ID number, (3) account number, in combination with any required security code, PIN, or password that would permit access to the individual’s financial account, (4) medical information, or (5) health insurance information, when either the name or any of the data elements (1)-(5) is not encrypted.

How SB-46 Amends Section 1798.82

SB-46, if signed by Gov. Jerry Brown, would amend 1798.82 in three notable ways.  First, and probably most significantly, SB-46 would broaden the definition of “personal information” to include “a user name or email address, in combination with a password or security question and answer that would permit access to an online account.”  Unlike the existing data elements (e.g., social security number, medical information, etc.), this new category of personal information does not need to be in combination with the individual’s name to be deemed personal information (see the illustrative sketch after the third point below).

Second, and perhaps in an effort to mitigate the impact that will surely be felt by companies, the bill would provide a streamlined notification process for breaches concerning the new online account information category of personal information.  The streamlined notification process would allow the business to comply with notification requirements by providing the security breach notification in “electronic or other form that directs the person whose personal information has been breached promptly to change his or her password and security question or answer, as applicable, or to take other steps appropriate to protect the online account with the business and all other online accounts for which the person whose personal information has been breached uses the same user name or email address and password or security question or answer.”

Third, the bill would create a variation on the streamlined notification process for breaches concerning login credentials of an email account furnished by the business.  These businesses (i.e., email service providers) must provide notice either by the traditional method required under the current notification requirements (i.e., non-streamlined) or “by clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an Internet Protocol address or online location from which the business knows the resident customarily accesses the account.”
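To illustrate the scope change introduced by the first point, here is a minimal sketch of the “personal information” test before and after SB-46.  The field names are hypothetical and the logic is a deliberate simplification of Section 1798.82, offered only to show how the new category stands on its own without a name:

```python
# Simplified illustration of the Section 1798.82 "personal information"
# test, before and after SB-46. Field names are hypothetical and the
# logic is a simplification of the statute, not legal advice.

ENUMERATED_ELEMENTS = {
    "social_security_number",
    "drivers_license_or_ca_id",
    "financial_account_with_access_code",
    "medical_information",
    "health_insurance_information",
}

def is_personal_information(unencrypted_fields):
    """Return True if the breached, unencrypted fields would qualify."""
    fields = set(unencrypted_fields)

    # Pre-SB-46 rule: name plus at least one enumerated data element.
    classic = "name" in fields and bool(fields & ENUMERATED_ELEMENTS)

    # New SB-46 category: online credentials alone; no name required.
    credentials = (
        ("user_name" in fields or "email_address" in fields)
        and ("password" in fields or "security_question_and_answer" in fields)
    )
    return classic or credentials

# A breach of email addresses and passwords now qualifies even though
# no names were compromised.
assert is_personal_information({"email_address", "password"})
```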

Certainly, with data breaches on the rise and usernames/email addresses and passwords commonly collected by companies with an e-commerce or social network presence, the additional category of personal information introduced by SB-46 will have a compounding effect on companies’ notification obligations.  Going forward, companies would be wise to put together a strategy for treating usernames/emails in combination with passwords (or security questions/answers) just as they would a person’s name in combination with a social security number under their existing information security policies.

Do Not Track: How Impending California Law Will Affect All Commercial Web Sites

Do Not Track has found a home in California.  As of September 3rd, California Assembly Bill No. 370 (“AB-370”) sits on Governor Jerry Brown’s desk awaiting his signature.  Once signed, the bill will amend the California Online Privacy Protection Act (“CalOPPA,” at Section 22575 of the California Business and Professions Code) and will require commercial website operators that collect personally identifiable information (“PII”) through the Internet to disclose how they respond to Do Not Track (“DNT”) signals.  Most mainstream web browsers have functionality that allows the user to signal her desire not to be tracked.  However, under current federal and state law, websites are not legally required to honor that signal.  While AB-370 does not make honoring a DNT signal a legal requirement, it does aim to inform consumers as to which websites have a practice in place to honor DNT signals.

Attribution: Electronic Frontier Foundation

Background on the Existing CalOPPA Statute

In 2003, the California Legislature passed CalOPPA.  The law requires operators of “web sites and online services” that collect users’ PII to conspicuously post a privacy policy on their sites and comply with the posted policy.  CalOPPA currently requires privacy policies to identify the categories of PII collected, the categories of third parties with whom that PII may be shared, the process for consumers to review and request changes to their PII, the process for notifying users of material changes to the privacy policy, and the effective date of the privacy policy.  An operator has 30 days to comply after receiving notice of noncompliance with the privacy policy posting requirement.  Failure to comply with the CalOPPA requirements may result in penalties of up to $2,500 for each violation.

It is important to note that CalOPPA has broad reach.  Virtually all commercial websites fall within its scope, for two reasons.  First, it is hard to imagine any commercial website not collecting PII, which (for the purposes of CalOPPA) is defined under Section 22577 as “individually identifiable information about an individual consumer collected online by the operator from that individual and maintained by the operator in an accessible form, including […] (1) a first and last name, (2) a home or other physical address, including street name and name of a city or town, (3) an e-mail address, (4) a telephone number, (5) a social security number, (6) any other identifier that permits the physical or online contacting of a specific individual, or (7) information concerning a user that the Web site or online service collects online from the user and maintains in personally identifiable form in combination with an identifier described in this subdivision.”  Second, even though this is a California law, it applies to any website that collects PII from consumers residing in California.  As such, CalOPPA (including the AB-370 revisions) has a de facto nationwide reach.

The Need to Amend CalOPPA (via AB-370)

The impetus for introducing AB-370 is the growing concern over online tracking, which is also referred to as online behavioral targeting.  According to the good folks at Wikipedia,

“When a consumer visits a web site, the pages they visit, the amount of time they view each page, the links they click on, the searches they make and the things that they interact with, allow sites to collect that data, and other factors, create a ‘profile’ that links to that visitor’s web browser. As a result, site publishers can use this data to create defined audience segments based upon visitors that have similar profiles. When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow advertisers to position their online ads in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered. On the theory that properly targeted ads will fetch more consumer interest, the publisher (or seller) can charge a premium for these ads over random advertising or ads based on the context of a site.”

And, by many accounts, the practice of online behavioral targeting is on the rise.  Last year, the Wall Street Journal featured an article describing user-tailored advertising and the explosive demand for consumer data collected via web browsers.  One practice is the online auctioning of consumer web-browsing data.  The article notes that “[d]espite rising privacy concerns, the online industry’s data-collection efforts have expanded in the past few years. One reason is the popularity of online auctions, where advertisers buy data about users’ Web browsing. Krux [which sells a service for website publishers to protect their customer data] estimated that such auctions, known as real-time bidding exchanges, contribute to 40% of online data collection.”  The article also cites a study in which the average visit to a webpage triggered 56 instances of data collection.

And, so, here we have AB-370 to the rescue.  According to the bill’s author, Assemblyman Al Muratsuchi, AB-370 “would increase consumer awareness of the practice of online tracking by websites and online services, […] [which] will allow the consumer to make an informed decision about their use of the website or service.”

CalOPPA After AB-370

In addition to the requirements under the existing law outlined above, the amended CalOPPA will:

1)      Require an operator’s privacy policies to disclose how it responds to web browser DNT signals or “other mechanisms that provide consumers the ability to exercise choice regarding the collection of PII about an individual consumer’s online activities over time and across third-party Web sites or online services”; provided the operator engages in PII data collection;

2)      Require an operator’s privacy policies to disclose whether third parties may collect PII about an individual consumer’s online activities over time and across different Web sites when a consumer uses the operator’s site; and

3)      Permit an operator to satisfy the response disclosure requirement for DNT signals by providing a clear and conspicuous hyperlink in the privacy policy to an online location containing a description, including the effects, of any program or protocol the operator follows that offers the consumer that choice.

For all the non-techies out there, it may be useful to quickly explain how Do Not Track technology works.  It is actually relatively simple.  In practice, a consumer wishing to communicate a DNT signal to the sites she visits would generally do so via her web browser controls.  When she changes the setting in her browser preferences, the browser sends an HTTP header field (known as the “DNT Header”) that requests that a web application disable its tracking of the individual user.  The header field name is DNT, and it takes three values: “1” if the user does not want to be tracked (opt out), “0” if the user consents to being tracked (opt in), or “null” (no header sent) if the user has not expressed a preference.  The default behavior required by the standard is not to send the header (i.e., the null value) until the user chooses to enable the setting in her browser.
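For a concrete picture of what receiving the DNT Header looks like on the server side, here is a minimal sketch using the Flask web framework.  The route and the opt-out behavior are hypothetical; an operator’s actual response should match whatever its privacy policy discloses:

```python
# Minimal sketch of reading the DNT header server-side (Flask).
# The route and the response behavior are illustrative only.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def home():
    # Per the standard: "1" = opt out, "0" = opt in, absent = no preference.
    dnt = request.headers.get("DNT")
    if dnt == "1":
        # Honor the opt-out: e.g., skip setting analytics/tracking cookies.
        return "Welcome! (tracking disabled for this visit)"
    # "0" or no header: proceed with the site's default data collection.
    return "Welcome!"
```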

Implications of AB-370

Before going into the implications of the bill, it should be made clear what AB-370 is not.  One thing that the text of the bill and supporting commentary make clear is that AB-370 is not a Do Not Track law.  Back in March 2012, the FTC finalized the “Protecting Consumer Privacy in an Era of Rapid Change” report, in which the FTC endorsed the implementation of a Do Not Track system.  The report is not a regulation and, as such, there remains (even after AB-370 is signed into law) no legal requirement for sites to honor the headers.

In contrast, AB-370 is a disclosure law.  Its aim is to promote transparency.  The logic goes something like this:  If a privacy policy discloses how an operator handles a Do Not Track signal from a browser, then individual consumers will make informed decisions about their use of the site or the service.  As the California Attorney General’s Office put it, “AB-370 is a transparency proposal, not a Do Not Track proposal. When a privacy policy discloses whether or not an operator honors a Do Not Track signal from a browser, individuals may make informed decisions about their use of the site or service.”

What Remains to Be Seen Through AB-370 Transparency

On the surface, the disclosure requirement might seem simple.  The next logical question, however, is “but, how exactly?”  Despite the best efforts of industry consortiums such as the World Wide Web Consortium (W3C), there is still no clear consensus on how to handle DNT signals.  Even less clear is how best to handle DNT signals in the face of third-party tracking on the operator’s site.  So, by extension, how best to disclose the operator’s handling of DNT signals is likewise unclear.  Until an industry practice becomes standardized, the best way forward has to be for the operator to simply (but accurately) state how it responds to the DNT Header.  By way of example, this could perhaps be achieved by adding one of the following sentences to the operator’s privacy policy:

  • If Operator Doesn’t Recognize Do Not Track Signals: “This Site does not receive or respond to the DNT Header.”
  • If Operator Does Recognize Do Not Track Signals: “This Site receives the DNT Header and responds to a DNT:1 value by … {fill in the blank with how data collection by the operator and/or its third parties is impacted}”

Lastly, even though AB-370 is a disclosure law and not a legal requirement to honor DNT signals, the practical effect could leave little distinction.  Consumer Watchdog predicts, albeit somewhat cautiously, that “requiring transparency could well prompt companies to compete based on their privacy practices [and] will likely prompt more companies to honor Do Not Track requests […]”.  How website operators react to the full transparency impact of AB-370 will be interesting to see! (Pun entirely intended)