Category Archives: Data Security

Privacy and the Internet of Things – FTC Workshop (The Smart Home)

Attribution: Vovastepanov

The Federal Trade Commission (“FTC”) held a public workshop on November 19, 2013 to explore consumer privacy and security issues related to the Internet of Things (“IoT”).  After briefly describing what the IoT is and the intended focus of the workshop, this post highlights excerpts from the panelists who spoke.  I will limit this recap to the first part of the workshop, which dealt with the home-based IoT (i.e., the “Smart Home”).  A full transcript of the entire workshop is available here.

The “Internet of Things” is a term used to describe the system made up of devices, each of which is enabled to communicate with other devices within an integrated information network.  Or, to use the Wikipedia definition, the “Internet of Things refers to uniquely identifiable objects and their virtual representations in an Internet-like structure.”  Often, we hear the individual devices within the IoT ecosystem labeled as “smart” devices.  Smart devices generally have the ability to communicate with consumers, transmit data back to companies, and compile data for third parties.

According to the FTC, “the workshop focused on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”  The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world.

Following the workshop, the FTC published questions and requested public comments by January 10th on issues raised at the workshop, including:

  • How can consumers benefit from the Internet of Things?
  • What are the unique privacy and security concerns and solutions associated with the Internet of Things?
  • What existing security technologies and practices could businesses and consumers use to enhance privacy and security in the Internet of Things?
  • What is the role of the Fair Information Practice Principles in the Internet of Things?
  • What steps can companies take (before putting a product or service on the market) to prevent connected devices from becoming targets of, or vectors for, malware or adware?
  • How can companies provide effective notice and choice?  If there are circumstances where effective notice and choice aren’t possible, what solutions are available to protect consumers?
  • What new challenges does constant, passive data-collection pose?
  • What effect does the Internet of Things have on data de-identification or anonymization?
  • How can privacy and security risks be weighed against potential societal benefits (such as improved health-care decision-making or energy efficiency) for consumers and businesses?
  • How can companies update device software for security purposes or patch security vulnerabilities in connected devices, particularly if they do not have an ongoing relationship with the consumer?  Do companies have adequate incentives to provide updates or patches over products’ lifecycles?
  • How should the FTC encourage innovation in this area while protecting consumers’ privacy and the security of their data?
  • Are new use-restrictions necessary to protect consumers’ privacy?
  • How could shifting social norms be taken into account?
  • How can consumers learn more about the security and privacy of specific products or services?
  • How can consumers or researchers with insight into vulnerabilities best reach companies?

Panelist Excerpts

Here are my favorite excerpts from the “Smart Home” panel, which consisted of Carolyn Nguyen (Microsoft, Director of Technology Policy Group), Eric Lightner (DOE, Director of Smart Grid Task Force), Michael Beyerle (GE Appliances, Manager of Marketing), Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer), Lee Tien (EFF, Senior Staff Attorney), and Craig Heffner (Tactical Network Solutions, Security Researcher).

1)      Excerpts from Carolyn Nguyen (Microsoft, Director of Technology Policy Group)

  • On the individual consumer: “[…] a unique aspect of the IoT, as far as the individual is concerned, is its potential to revolutionize how individuals will interact with the physical world and enable a seamless integration between the digital and the physical world as never before […] The IoT, with its network of sensors and potential to sense the environment, can help assist individuals and people to make optimized and context-appropriate decisions […] As the individual is increasingly objectified by the quantity of data available about them, it’s important that we have a dialogue today and now, as we are just at the dawn of the IoT, to create a viable, sustainable data ecosystem that is centered on the individual.”
  • On the IoT ecosystem: “Taking a look at the evolution and the emerging data-driven economy, this is how we all started, where a person shares data with another person that they have a good relationship with and can trust that the data won’t be misused. The terminology that I use is that the data is being actively provided to the individual. In the evolution going forward, we evolve from this model to where I share data with an entity for which I receive a service: a store, a bank, a post office. Again, this is usually an entity with whom I either have a good relationship with or know I can trust. And this is true, whether this is in the physical world or in the digital world. So if we evolve this a little bit further, where there is now such an entity may be able to share personal data with other entities, with or without my knowledge. We talk about the terminology, as this data that is being generated or inferred as data that is passively generated about me. In other words, I am not actively involved in this transaction.  So as we move further in the evolution, there is more and more data being shared. And furthermore, it is now also possible that other parties that are in my social network can share data about me. So for example, a friend uploading my photo into the service. In this view, it is already very difficult for an individual to control the collection and distribution of information about me. And traditional control mechanisms such as notice and consent begin to lose meaning, as the individual most often automatically gives consent without a true understanding of how the data is distributed or used.  Moving forward, into the Internet of Things with ubiquitous sensors, the situation is clearly further exacerbated. We’ve already heard about Fitbit, sensors in my shirt, sensors in pants that can tweet out information about me, my car giving out information about potholes in the street, average speed, etc. 
There are devices in my home that are giving information about activities, temperature, whether I am home or not. Devices in my workspace, as well as devices in a public space. So increasingly, the amount of data that will be generated, as was already mentioned this morning, would be primarily passively collected and generated. It is, however, in the data-driven economy, it is this flow of data that has the potential to create new benefits and new innovations and create a foundation for a new economy. Over-restriction of this flow can restrict the potential value, but lax regulation can clearly harm the individual and violate their rights.”

2)      Excerpts from Eric Lightner (DOE, Director of Smart Grid Task Force)

  • On energy usage data privacy: “So we started a number of, I would say, initiatives around this, centered on the consumer. A couple I will just mention quickly. One is called Green Button and that’s really an effort to standardize the information, the customer usage, the energy usage information that you can have access to through your utility in a standardized format and download that information and use that in different applications. We also stimulated the market by funding some developers of technology to look at, okay, if you have this standardized customer energy use and information, what kind of applications and services could we create around that. So we funded some companies to develop some of those technologies. That sort of gave rise to questions of privacy. Hey, I want to use my information, I want to look at it in a more detailed fashion. I probably want to share it with third parties for additional services to me, what are the privacy implications of that? So we started another initiative called the Voluntary Code of Conduct on Data Privacy.  This is something that is actively ongoing. We are working with utilities and a number of stakeholders to really figure out what sort of — just the baseline of protections and processes that we can put in place across utilities in a voluntary way. Many utilities are regulated by their states and they already have policies and laws about how to handle data, but it’s not consistent across the states, so we really wanted to try to develop a voluntary, consistent practice. So you, as a consumer, would then feel more comfortable about how that information is being used within the utility and what the process is for you to give consent to share that information with third parties of your choice for different products and services.”

3)      Excerpts from Jeff Hagins (SmartThings, Cofounder and Chief Technology Officer)

  • On the current state of the IoT: “And what is at the center of that is this interesting development that, each of these manufacturers is pursuing a model where I build my device, I connect my device to my cloud, my manufacturer-specific cloud, and then I give you, as a consumer, an app for your smart phone. And it begs the question, where this goes. Where does all of this end up? […] If I end up with more apps on my phone to control the physical world than I have on my phone to begin with, to control all of the other stuff, it feels like we’ve failed the consumer in a big way. And so at SmartThings, what we are working on is actually bringing a solution into the middle of this. We’ve created a platform that is targeted at the smart home, initially, and to put in the palm of the consumer’s hand not one app per device, but rather one app. But more importantly, to allow these devices to work together.”
  • On data security and data ownership: “Our things and our data have to be secured. And we, as the consumer or the owner of our things, need to own the data that comes from those things. They are our things, it should be our data. Just because I bought it from a particular manufacturer doesn’t mean it’s their data. It’s my data. That sharing of that data then needs to be contextual […] These systems need to be highly reliable and available and they also need to be open.”

4)      Excerpts from Lee Tien (Electronic Frontier Foundation, Senior Staff Attorney)

  • On IoT privacy considerations:  “I’m not really a cheerleader for the Internet of Things. To me, it raises a huge number of privacy and security issues, to the extent that IoT devices entail ubiquitous collection of large amounts of data about what people do. And I mean, I think that’s the main thing, that what we are talking about is collecting data about people’s activities, and therefore that is always going to raise some very serious privacy issues. […] So with respect to the home, my starting point is probably pretty conventional. As Justice Scalia said in the 2001 Kyllo Thermal Imaging case, in the home, our cases show all details are intimate, because the entire area is held safe from prying government eyes. Now we are not discussing government surveillance today, but I think all consumer privacy, anyone who thinks about the privacy issues thoughtfully, is going to have an eye on what data about household activities or personal activities the government could end up obtaining, either directly from the devices or from IoT providers, whether using legal process or other less savory means.”
  • On smart meter technology:  “Smart meters are a good example. And in California we, along with the Center for Democracy and Technology, helped write very strong FIPPS-based approach to energy usage data that is in the hands of utilities, recognizing in California that there were a lot of serious privacy issues around the granular energy usage data.  I like to use this quote from Siemens in Europe a few years ago where they said, you know, we, Siemens, have the technology to record energy use every minute, second, and microsecond, more or less live. From that, we can infer how many people are in the home, what they do, whether they are upstairs, downstairs, do you have a dog, when do you usually get up, when did you get up this morning, when you have a shower. Masses of private data. And obviously, this is a European perspective, which is especially solicitous of privacy, and yet the ability to make those kinds of inferences from energy usage data is clearly there. Now in the California proceeding, one of the things that we do not do is we do not regulate anything about what the consumer, per se, can or can’t do with the data that they have. Indeed, the whole thing is, right now, very consumer empowerment based, because it is consumer consent that provides the main way that utilities can hand the information off or share it with someone else. […] We also use rules that are modeled after HIPAA business associate type rules, so that downstream recipients of data shared from the utilities are bound in a similar way.”
  • On IoT data security considerations: “I think that you have to worry also about the way that the wireless networking exposes data to interception. We are wary that industries who are moving into this space are not necessarily as mature about the security issues as those as, say, at Microsoft. The relatively cheap or lower grade devices may lack the computing resources or, for economic reasons, there will be less incentive to put good security in them. And fourth, that the security perimeter for IoT devices is actually rather different because, depending on where the endpoint devices are, there may be a higher risk of direct tampering. […] I think that one of the things that is going to be important in this area is also the ability of the consumer to exercise what we at the EFF call the right to tinker or right to repair. I think in the comments, there were some rather interesting points about various kinds of consumer rights that could be built into this area. But I think one of the most important is actually being able to know, inspect your device, and understand them, to know what they do, because transparency is going to be a big problem.”

5)      Excerpts from Craig Heffner (Tactical Network Solutions, Security Researcher)

  • On security of firmware and IoT devices: “And consumer devices typically, they don’t have any security, at least by today’s standards. I mean, you have simple things like vendors leaving backdoors in their products, either because it is something that the developer left in and they just forgot about or maybe they left it in so that when they get a customer support call, they can remote into the system and fix it for them and so it lowers, you know, the time they have to spend doing tech support and things like that. And we are not even dealing with sophisticated types of attacks to break a lot of these systems. I actually teach like a five day class on, you know, breaking embedded systems. And people – that’s why I’m trying to condense five days into five minutes here, but people are astounded at, you know, especially people from the security community who are used to breaking things like Windows and PCs and things like that, they don’t really have experience with embedded devices, are astounded at the lack of security that they have typically. […] They had simple vulnerabilities that anyone in the security community who looked at it would be able to break. And it doesn’t take a lot of technical expertise to do that. And I think the real reason why these exist, why we have these problems in embedded devices is there is no financial incentive to companies to make their devices secure. […] And these are simple things that people may not think of, and may not think through, but they can be very difficult to go back and change, especially in embedded products. Because updating the software, updating the firmware, is not necessarily trivial in many cases.”
  • On the everyday IoT consumer:  “Unfortunately, I don’t think that trying to educate users will get us where we need to be. You know, the mantra for years in computer security has been educate the user, educate the user. Well, guess what? We’ve had security problems for decades. That clearly isn’t working. Users don’t understand the technologies they are dealing with. I hear the term, people always say, people are so technologically — you know, they understand all this technology. No, they don’t. They have a phone with pictures on it and they point at the pictures. That is not understanding technology.”

The AvMed Data Breach Settlement: What’s it going to Cost?

$3,000,000, based on the recently proposed settlement agreement involving the 2009 AvMed data breach incident.


Once finally approved, this settlement would resolve the claims asserted against AvMed and would provide monetary relief to all affected customers, including customers who were not actually victims of an identity theft.  The proposed settlement in this case goes well beyond the credit monitoring offer that typically results from data breach class action settlements.  According to the plaintiffs’ unopposed motion to approve the settlement:

“All told, the Settlement is a tremendous achievement for the Plaintiffs and proposed Settlement Classes, and provides landmark relief that will serve as a model for other companies who face similar lawsuits.”

The Facts

AvMed, Inc. is a Florida-based health insurance provider.  In December 2009, two laptop computers were stolen from AvMed’s conference room.  The laptops contained the unencrypted personally identifiable information (PII) of 1.2 million AvMed customers.  The unencrypted PII consisted of customers’ names, addresses, Social Security numbers, and medical health information.

The Allegations

According to the affected customers, AvMed’s failure to properly secure their PII (in accordance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) standards) resulted in (1) the theft of some affected customers’ identities, and (2) with respect to all affected customers, the overpayment for insurance coverage.

The first claim (i.e., based on customers whose identities were stolen and who suffered economic harm as a result) is fairly straightforward and uncontroversial.

The second claim (related to the overpayment of premiums by all affected customers) is a bit more novel.  This second claim is based on an unjust enrichment theory, which the Eleventh Circuit addressed prior to remanding the case to the district court.  The Eleventh Circuit recognized the customers’ unjust enrichment claim, stating that when AvMed charged customers, as part of premium payments, to fund the administrative costs of data security, AvMed is unjustly enriched to the extent it subsequently fails to implement the data security measures.  This notion is premised on the fact that customers paid monthly premiums to AvMed, a portion of which was presumably allocated to the data security efforts that AvMed promised its customers.  And, of course, AvMed did not implement these promised data security efforts, but nevertheless retained the entirety of the customers’ premiums.  Accordingly, under this theory of unjust enrichment, the customers paid for undelivered services and thus are entitled to partial refunds of their premiums.

The Settlement

Under the terms of the settlement, AvMed agrees to create a $3M settlement fund. Customers who can show that they actually suffered identity theft as a result of the 2009 data breach can make claims to recover monetary losses associated with the identity theft.  Additionally, all affected customers (whether they suffered actual identity theft or not), will be entitled to claim $10 for each year that they paid premiums to AvMed, subject to a cap of $30.  The cash payments available to all affected customers provide reimbursement for the portion of their insurance premiums that AvMed should have allocated to data protection and security.
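The premium-reimbursement formula described above ($10 for each year of premiums paid, capped at $30) reduces to a simple calculation. Here is a minimal sketch; the function name and parameters are my own, not from the settlement documents:

```python
def premium_reimbursement(premium_years: int,
                          per_year: float = 10.0,
                          cap: float = 30.0) -> float:
    """Approximate the AvMed settlement's premium-refund claim:
    $10 per year of premiums paid, capped at $30 total."""
    if premium_years < 0:
        raise ValueError("premium_years must be non-negative")
    return min(premium_years * per_year, cap)
```

So a customer who paid premiums for five years would still claim only $30, the same as a three-year customer.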

Additionally, under the settlement, AvMed is required to implement wide-ranging measures to ensure that its customers’ PII is protected, including:

  1. instituting mandatory security awareness and training programs for all company employees
  2. instituting mandatory training on appropriate laptop use and security for all company employees whose employment responsibilities include accessing information stored on company laptop computers
  3. upgrading all company laptop computers with additional security mechanisms, including GPS tracking technology
  4. adopting new password protocols and full disk encryption technology on all company desktops and laptops
  5. installing physical security upgrades at company facilities and offices to further safeguard workstations from theft
  6. reviewing and revising written policies and procedures to enhance information security

For Comparison’s Sake

So, just for fun, here’s how this settlement stacks up against some other recent, high-profile data breach settlements:

  • Johansson-Dohrmann v. Cbr Sys., Inc., No. 12-CV-1115 (S.D. Cal. July 24, 2013) – established a $2.5 million fund to provide approximately 300,000 class members with two years of credit monitoring and identity theft reimbursement.
  • Beringer v. Certegy Check Services, Inc., No. 07-cv-01657 (M.D. Fla. Sept. 3, 2008) – established a $5 million fund to provide approximately 37 million class members with up to two years of credit monitoring and identity theft reimbursement.
  • In re Heartland Payment Sys. Inc. Customer Data Sec. Breach Litig., MDL No. 09-2046 (S.D. Tex. 2012) – established a $2.4 million fund from which to provide over 100 million class members with identity theft reimbursement.
  • Rowe v. Unicare Life and Health Ins. Co., No. 09-cv-02286 (N.D. Ill. Sept. 14, 2011) – established a $3 million fund to provide approximately 220,000 class members with one year of credit monitoring and identity theft reimbursement.

Imminent Expansion of the Security Breach Notification Law

Back in 2003, California became the first state in the U.S. to pass a security breach notification law. California’s Security Breach Notification Law applies to any business that conducts business in California, which of course means that the law reaches nearly all companies that have an e-commerce presence.  In a nutshell, the statute requires businesses to notify California residents when the security of such residents’ personal information has been breached.  The rationale behind the law is that breach notification ensures that residents become aware of a breach, thereby allowing them to take actions to mitigate potential financial losses due to fraudulent use of their personal information.

Attribution: Tom Murphy

Fast forward ten years.  The California Attorney General’s specialized eCrime Unit found that increasingly “criminals are targeting Internet Web sites with inadequate security, including some social media Internet Web sites, to harvest email addresses, user names, and passwords,” and “[b]ecause most people do not use unique passwords for each of their accounts, acquiring the information on one account can give a thief access to [many different] accounts.”

And so, on September 10, the California legislature passed and sent to the Governor’s desk a bill that would amend California’s security breach notification law in a significant way.  This is the second bill in as many weeks to reach the Governor’s desk addressing consumer privacy.  Last week it was AB-370, which I discussed here.  This week, it is California Senate Bill 46 (SB-46), which would expand the definition of “personal information” subject to California’s existing security breach disclosure requirements to include “a user name or email address, in combination with a password or security question and answer that permits access to an online account.”  This could have a significant impact, given that notification requirements following a security breach incident depend upon whether the compromised data falls within the definition of “personal information”.

Overview of California’s Security Breach Notification Law

California’s Security Breach Notification Law (Section 1798.82 of the California Civil Code) requires businesses that own or license computerized data consisting of personal information to disclose any breach of the security of the system, following discovery of such breach, to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person.  The triggering event is a “breach of the security of the system”, which means the unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by the business.  Likewise, 1798.82 requires businesses that maintain (but do not own or license) computerized data consisting of personal information to notify the owner or licensee of the information of any associated security breach immediately following the discovery of such breach.

Where a data breach occurs and a business is required to issue a notification, the law requires that the notification be written in plain language, and include (1) the name and contact information of the business, (2) the types of personal information that were believed to have been the subject of a breach, (3) the estimated date, or date range, of the breach, (4) the date of the notice, (5) whether the notification was delayed as a result of a law enforcement investigation, (6) a general description of the breach incident, and (7) the toll-free telephone numbers and addresses of the major credit reporting agencies if the breach exposed a social security number or a driver’s license number.  Additionally, at the discretion of the business, the notification may also include information about what the business has done to protect individuals whose information has been breached and advice on steps the individual may take to protect him/herself.
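The seven required notification elements above can be treated as a completeness checklist. This is a rough illustration only; the field names are my own shorthand, not statutory language:

```python
# Hypothetical checklist of the elements Section 1798.82 requires in a
# breach notification; field names are illustrative, not statutory.
REQUIRED_NOTICE_FIELDS = [
    "business_name_and_contact",
    "types_of_personal_information_breached",
    "estimated_breach_date_or_range",
    "date_of_notice",
    "law_enforcement_delay_disclosure",
    "general_description_of_incident",
    "credit_agency_contacts_if_ssn_or_dl_exposed",
]

def notice_is_complete(notice: dict) -> bool:
    """Return True only if every required element is present and non-empty."""
    return all(notice.get(field) for field in REQUIRED_NOTICE_FIELDS)
```

A compliance team could run a draft notice through a check like this before sending, though the actual legal sufficiency of each element is a question for counsel.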

Up until what appears to be the imminent passage of SB-46, the definition of “personal information” meant an individual’s first name or first initial and last name in combination with that individual’s (1) social security number, (2) driver’s license or California ID number, (3) account number, in combination with any required security code, PIN, or password that would permit access to that individual’s financial account, (4) medical information, or (5) health insurance information, when either the name or any of the data elements (1)-(5) are not encrypted.

How SB-46 Amends Section 1798.82

SB-46, if signed by Gov. Jerry Brown, would amend 1798.82 in three notable ways.  First, and probably most significantly, SB-46 would broaden the definition of “personal information” to include “a user name or email address, in combination with a password or security question and answer that would permit access to an online account.”  Unlike the existing data elements (e.g., social security number, medical information, etc.), this new category of personal information does not need to be in combination with the individual’s name to be deemed personal information.
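To make the definitional change concrete, here is a hedged sketch (my own simplification for illustration, not statutory text or legal advice) of how the expanded “personal information” test might be checked against a compromised record. Note how the SB-46 online-account category stands alone, while the classic categories require the individual’s name:

```python
def is_personal_information(record: dict) -> bool:
    """Simplified test for 'personal information' under Section 1798.82
    as amended by SB-46. Field names are hypothetical; a real
    determination requires legal analysis."""
    name = bool(record.get("first_name_or_initial") and record.get("last_name"))
    sensitive = any(record.get(k) for k in (
        "ssn", "drivers_license_or_ca_id",
        "financial_account_with_access_code",
        "medical_information", "health_insurance_information"))
    # Pre-SB-46 categories: the individual's name in combination with an
    # unencrypted data element.
    classic = name and sensitive and not record.get("encrypted")
    # SB-46's new category stands alone: username/email plus password or
    # security question and answer permitting access to an online account.
    online = bool((record.get("username") or record.get("email"))
                  and (record.get("password") or record.get("security_qa")))
    return classic or online
```

Under this sketch, a leaked email address paired with a password triggers the definition even with no name attached, which is precisely the expansion SB-46 introduces.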

Second, and perhaps in an effort to mitigate the impact that will surely be felt by companies, the bill would provide a streamlined notification process for breaches concerning the new online account information category of personal information.  The streamlined notification process would allow the business to comply with notification requirements by providing the security breach notification in “electronic or other form that directs the person whose personal information has been breached promptly to change his or her password and security question or answer, as applicable, or to take other steps appropriate to protect the online account with the business and all other online accounts for which the person whose personal information has been breached uses the same user name or email address and password or security question or answer.”

Third, the bill would create a variation on the streamlined notification process for breaches concerning login credentials of an email account that is furnished by the business.   For these businesses (i.e., email service providers) the business must provide notice by the traditional method required under the current notification requirements (i.e., non-streamlined) or “by clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an Internet Protocol address or online location from which the business knows the resident customarily accesses the account.”

Certainly, with data breaches on the rise, and because usernames/email addresses and passwords are commonly collected by companies with an e-commerce or social network presence, the additional category of personal information introduced by SB-46 will have a compounding effect on companies’ notification obligations.  Going forward, companies would be wise to develop a strategy for treating usernames/emails in combination with passwords (or security questions/answers) just as they would a person’s name in combination with a social security number under their existing information security policies.

The APPS Act: “Mobile Data as the Oil of the 21st Century”?

Last month, Georgia Congressman Hank Johnson introduced the bipartisan Application Privacy, Protection and Security (APPS) Act of 2013. The bill’s objective is “to provide for greater transparency in and user control over the treatment of data collected by mobile applications and to enhance the security of such data.” If enacted, the APPS Act would require mobile app developers to maintain privacy policies, obtain consent from consumers before collecting data, and securely maintain the data they collect. Here’s Rep. Johnson as he introduced the bill, calling for a common sense approach to meeting the challenges associated with data privacy on mobile devices:

Interestingly, the bill itself was a product of a crowdsourcing effort, of sorts. About a year ago, Rep. Johnson launched AppRights, an online forum for interested parties to build mobile privacy legislation at a grassroots level. According to Rep. Johnson, “the overwhelming majority of participants who helped build the legislation – more than 80 percent – confirmed that Congress should protect consumers’ privacy on mobile devices […] [and] wanted simple controls over privacy on devices, security to prevent data breaches, and notice and information about data collection on the device.” Rep. Johnson even has a Facebook page specific to this proposed legislation; however, at the time of this posting there appear to be only 45 “Likes”, which may be indicative of the actual response to his crowdsourcing initiative.

Putting the efficacy of crowdsourcing for federal law aside, the proposed APPS Act is largely consistent with the Federal Trade Commission’s (FTC) recent Staff Report and initiatives by States, such as the California Attorney General’s Recommendation Memo. If passed, here are the major impacts to mobile app developers and users:

1. Creates two important definitions:

  • “De-identified Data” is a term used in the Act and defined to mean data that cannot be used to identify or infer information about a particular person or their mobile device. This definition is important because the Act largely governs mobile apps that collect Personal Data, which by definition does not include De-identified Data.
  • “Personal Data” (hereinafter “PD”) is used throughout the Act and the bulk of the Act’s provisions rely heavily on this term. So, wouldn’t it be nice if there were a clear and understandable definition of PD in the Act? Yes, but no such luck. Other than to say that PD does not include De-identified Data, the Act punts to the FTC to define this term by regulation.

2. Notice – Where a mobile app collects PD of a user, the app developer would first have to provide its users with a notice containing the terms by which the developer collects, uses, stores, and shares such PD of the user. While the Act would look to the FTC to prescribe the form and manner of such notices, the content of the notice would have to include:

  • The categories of PD that will be collected;
  • The categories of purposes for which the PD will be used;
  • The categories of third parties with which the PD will be shared; and,
  • A data retention policy that governs the length for which the PD will be stored and the terms and conditions applicable to storage, including a description of the rights of the user to withdraw consent and the process by which the user may exercise such rights.
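To make the four notice requirements concrete, here is a minimal sketch of how a developer might model them programmatically. The class and field names are my own shorthand for illustration, not terms defined in the bill:

```python
from dataclasses import dataclass


@dataclass
class PrivacyNotice:
    """Illustrative model of the APPS Act's four required notice elements.

    Field names are hypothetical shorthand, not language from the bill.
    """
    data_categories: list         # categories of PD to be collected
    purpose_categories: list      # categories of purposes for using the PD
    third_party_categories: list  # categories of third parties receiving PD
    retention_policy: str         # storage duration plus withdrawal rights/process

    def is_complete(self) -> bool:
        # A notice missing any of the four elements would fall short of the bill.
        return all([self.data_categories, self.purpose_categories,
                    self.third_party_categories, self.retention_policy])


notice = PrivacyNotice(
    data_categories=["location", "contacts"],
    purpose_categories=["ad targeting"],
    third_party_categories=["analytics providers"],
    retention_policy="Retained 12 months; consent may be withdrawn in Settings.",
)
print(notice.is_complete())  # True
```

Remember that the FTC would ultimately prescribe the form and manner of these notices, so any real implementation would need to track the eventual regulations.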

3. Consent – Where a mobile app collects PD of a user, the app developer would first have to obtain the user’s consent to the notice. Interestingly, the bill uses the term “consent” as opposed to “express consent”, which may mean users could be found to have impliedly consented to the app developer’s notice.

4. Post-consent Opt-out – This provision would require the mobile app developer to honor a user’s request to prohibit any further collection of PD by the developer. The Act would go an additional step and require the developer, after receiving a user’s opt-out request, to either delete that user’s PD or refrain from using or sharing that user’s PD. Interestingly, the decision to delete the PD or refrain from using/sharing the PD would be vested in the user, and not the developer.
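The opt-out flow described above could be sketched as follows. Note how the delete-versus-refrain choice is an input supplied by the user, mirroring the bill’s allocation of that decision; the data structures here are entirely hypothetical:

```python
def handle_opt_out(user_store: dict, user_id: str, choice: str) -> None:
    """Sketch of the APPS Act's post-consent opt-out (hypothetical data model).

    The bill vests the delete-vs-refrain decision in the user, so `choice`
    comes from the user rather than from developer policy.
    """
    record = user_store.get(user_id)
    if record is None:
        return
    # In either case, no further collection of PD is permitted.
    record["collection_enabled"] = False
    if choice == "delete":
        # User elected deletion: remove their personal data entirely.
        record["personal_data"] = None
    elif choice == "refrain":
        # User elected retention without use: keep data, bar use and sharing.
        record["use_or_share_permitted"] = False
    else:
        raise ValueError("choice must be 'delete' or 'refrain'")


store = {"u1": {"personal_data": {"email": "a@example.com"},
                "collection_enabled": True,
                "use_or_share_permitted": True}}
handle_opt_out(store, "u1", "refrain")
print(store["u1"]["use_or_share_permitted"])  # False
```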

5. Data Security – The app developer would have to implement “reasonable and appropriate measures” to protect against “unauthorized access” to any PD or De-identified Data that the app collects from the user. This provision appears to have very little bite to it, unless the FTC were to expand on it through regulation. Clearly, the APPS Act wants no part of the unwelcoming waters of cybersecurity law.

6. Enforcement – The Act would charge the FTC as the primary regulatory agency for purposes of enforcement and promulgation of regulations under section 18(a)(1)(B) of the FTC Act, which prohibits unfair or deceptive acts or practices. However, the Act provides States’ attorneys general or agencies with the right to bring a federal civil action when they believe the residents of their state are adversely affected by an app developer in violation of the Act.

7. Safe Harbor – App developers would be able to escape liability under the Act if the developer had implemented a “code of conduct for consumer data privacy.” To fit within the safe harbor provision, the code of conduct would have to (i) be developed through the NTIA’s multi-stakeholder process, and (ii) comply with the yet-to-be-promulgated FTC regulations.

That’s it in a nutshell. It should be interesting to see how this bill progresses through Committee, if at all. The good folks at Govtrack are giving it a snowball’s chance in hell of getting passed; but, hey, you never know. So, stay tuned! Better yet, “Like it” on Facebook! Rep. Johnson needs all the support he can get.

BYOD & Corporate Data: It’s Time to Formalize this Party

Employees across the US are increasingly using their own cell phones and other mobile devices for work purposes, in addition to personal or non-work purposes.  This trend has been dubbed Bring Your Own Device (“BYOD”) and according to a recent Cisco survey, 90% of Americans use their smartphones for work.  And, depending on the level of the employee, or the “distro” lists to which that employee is a member, there may be a significant risk that such a device contains vulnerable, confidential business information.

The BYOD trend is probably here to stay. Employees prefer their personal devices over unfamiliar employer-sourced devices.  According to an InfoWorld article, internal helpdesk support calls drop from an average of 4.5 per user per year to 2.5 when employees use their own devices.  Employers, at the same time, save money by not having to provide the devices or procure the underlying data and service plans.  Clearly, companies and their employees alike are capitalizing on the benefits of BYOD.  Here’s a great picture, courtesy of Logicalis, that depicts some interesting trends within the BYOD landscape:

Logicalis graphic.

But, at the same time, the risks associated with a BYOD program cannot be ignored.  One such risk is to corporate information security.  If a company does not have a strategy for managing a BYOD rollout, then corporate email, calendars, financial data, proprietary data, trade secrets, third-party data subject to non-disclosure agreements, and on and on, can all be vulnerable to loss or misappropriation.  And, as some reports would seem to indicate, this risk has largely gone unmitigated: some 40% of employees who use their personal smartphone for work purposes don’t even have a password to lock/unlock their device.

Dismal as that statistic is, US IT departments are leading the way in managing BYOD.  According to Ovum Research, of the 20 countries surveyed, US employees are the most likely to have signed a BYOD policy at work.  And while that is certainly an accomplishment, the fact remains that 70% of US employees using their own devices at work have not signed any such corporate policy governing BYOD.  The time has come to develop industry standards and best practices for BYOD programs.  Luckily, there are many in the IT and IS fields who are far ahead of their counterparts in legal and HR.  I’ll lean on one such expert at IT Manager Daily.  As described in a recent post, BYOD programs call for three critical components:

  1. A software application for managing the devices connecting to the network;
  2. A written policy outlining the responsibilities of both the employer and employees; and,
  3. An agreement that employees sign, acknowledging that they have read and understand the policy.

While the enterprise mobility management software (#1 above) is absolutely indispensable to a successful BYOD implementation, I’d like to focus on a few elements that a BYOD policy should address.

First, the BYOD policy should address three related areas of federal employment law; namely, discrimination, labor standards, and labor relations.  As Kenneth Vanko recently posted,

“First, the BYOD policy should ensure that the device is not used in a manner that could lead to discrimination or harassment suits. Second, the employer can’t inadvertently run afoul of the Fair Labor Standards Act. Specifically, non-exempt employees should not be permitted to use the device during non-working hours for work purposes. Third, with the National Labor Relations Board cracking down on the use of social media policies, a comprehensive BYOD should specifically provide that the policy does not preclude employees from discussing the terms of their employment, or anything else that can be described as concerted activity under the NLRA.”

Second, the BYOD policy should address security directly and specifically as it relates to the unique ways in which we all use mobile devices.  For example, a comprehensive policy should address at a minimum:

  • Company’s unilateral right to wipe a lost/stolen device of all company confidential information
  • Company’s unilateral right to wipe a device of all company confidential information upon employee’s termination or resignation
  • Company’s control of certain platform-specific mechanisms, such as:
    • Password for logging in
    • Minimum standards for password strength
    • Disablement after repeated failed logins
    • Self-locking after idle

Third, the BYOD policy should address how provisions of the policy will be enforced.  Beyond the conventional managerial reprimands and HR-type repercussions, the policy should describe IT-type terms or limitations of use.  For example, consider including a provision that states if certain unauthorized use is made of the device or certain prohibited content is accessed, then access to corporate data (such as email or calendars) will be blocked until such time as the device is returned to a conforming state.  Taking the concept one step further, consider wiping all corporate data from the device for repeated failure to comply with the terms of use contained within the policy.

Lastly, consider whether the policy addresses prohibited uses of the device independent of whether such use is related to business or personal activities or whether such use is made “on” or “off the clock”.  For example, it may be prudent to create a bright-line rule relative to certain uses such as storing or transmitting: illicit materials, proprietary information belonging to another company, material that harasses other employees, or materials related to an employee’s outside business activities.  Ultimately, such a mechanism (as many of the mechanisms described above) must be considered in light of a company’s IT constraints.  Depending on the capabilities of the chosen enterprise mobility management software, enforcing a BYOD policy may be a challenge; however, as more enterprise software companies push the BYOD envelope even further, I imagine that BYOD security and mobility management will simply be another COTS module that corporate IT teams integrate with their existing systems.

No matter how a company decides to articulate its specific BYOD policy, it must, at the end of the day, be well communicated and easy for employees to follow.  A concerted effort from multiple stakeholders representing IT, Legal, HR, Finance, Communications, and Procurement, should lead to a BYOD implementation that keeps employees happy and corporate data safe.