Category Archives: Legislation

Introducing Startups to the Crowd: The SEC’s “Regulation Crowdfunding”

As I discussed in a previous post last spring, startups and investors alike have been eagerly awaiting action by the U.S. Securities and Exchange Commission (“SEC”) to promulgate rules to facilitate equity-based crowdfunding.  Well, at last, the SEC has proposed crowdfunding rules as mandated by the Jumpstart Our Business Startups Act (the “JOBS Act”).  The JOBS Act, enacted back in April of 2012, is intended to enable startups and small businesses to raise capital through crowdfunding.  The public comment period is set for 90 days, which means that equity-based crowdfunding could become a reality in early 2014.


The text of the SEC’s notice of proposed rulemaking is an ambitious 585 pages.  Unlike the SEC, I value brevity.  So, after providing a quick refresher on crowdfunding and the Securities Act, the remainder of this post will discuss the key provisions that potential crowdfunding issuers (i.e., the startups) and investors (i.e., the crowd) may find interesting.  Specifically, I will point out a few key aspects of the long-awaited crowdfunding rule’s requirements for exemption from the registration requirements of the Securities Act.

Background: Crowdfunding and the Securities Act

Currently, the only types of crowdfunding authorized in the U.S. are those that do not involve offering a share of any financial returns or profits that the fundraiser may expect to generate from business activities financed through crowdfunding.  Examples of crowdfunding websites that have become mainstream include the likes of Indiegogo and Kickstarter.  These platforms prohibit founders (the project starters) from offering to share profits with contributors (i.e., equity or security transactions) because such models would trigger the application of federal securities laws.  And, under the Securities Act, an offer and sale of securities must be registered unless an exemption is available.

However, newly created Section 4(a)(6) of the Securities Act, as promulgated under the JOBS Act, provides an exemption (the “crowdfunding exemption”) from the registration requirements of Securities Act Section 5 for certain crowdfunding transactions.  With the introduction of this exemption, startups and small businesses will be able to raise capital by making relatively low dollar offerings of securities to “the crowd” without invoking the full regulatory burden that comes with issuing registered securities.  Additionally, the crowdfunding provisions create a new entity, referred to as a “funding portal”, to allow Internet-based platforms to facilitate the offer and sale of securities without having to register with the SEC as brokers.  Together these measures are intended to help small businesses raise capital while protecting investors from potential fraud.

Startups:  Limits on Amount Raised

The exemption from registration provided by Section 4(a)(6) is available to a U.S. startup (the issuer) provided that “the aggregate amount sold to all investors by the issuer, including any amount sold in reliance on the exemption provided under Section 4(a)(6) during the 12-month period preceding the date of such transaction, is not more than $1,000,000.”

In the proposed rule, the SEC clarifies that only the capital raised in reliance on the crowdfunding exemption should be counted toward the limitation.  In other words, all capital raised through other means will not be counted against the $1M sold in reliance on the crowdfunding exemption.  As the SEC stated in its notice of proposed rule:

“If an issuer sold $800,000 pursuant to the exemption provided in Regulation D during the preceding 12 months, this amount would not be aggregated in an issuer’s calculation to determine whether it had reached the maximum amount for purposes of Section 4(a)(6).”
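To make the aggregation rule concrete, here is a minimal sketch (in Python) of how an issuer might tally only Section 4(a)(6) sales against the $1,000,000 rolling 12-month cap.  The dates, amounts, field names, and exemption labels are all hypothetical, chosen purely for illustration.

```python
from datetime import date, timedelta

# Hypothetical prior sales; only the exemption label matters for the cap.
prior_sales = [
    {"date": date(2013, 3, 1), "amount": 800_000, "exemption": "Reg D"},
    {"date": date(2013, 6, 15), "amount": 300_000, "exemption": "4(a)(6)"},
]

def remaining_crowdfunding_capacity(sales, as_of, cap=1_000_000):
    """Amount still available under the crowdfunding cap as of a given date.

    Only sales made in reliance on Section 4(a)(6) during the trailing
    12 months count toward the $1,000,000 limit.
    """
    window_start = as_of - timedelta(days=365)
    counted = sum(
        s["amount"]
        for s in sales
        if s["exemption"] == "4(a)(6)" and window_start <= s["date"] <= as_of
    )
    return max(0, cap - counted)

# The $800,000 Regulation D sale is not aggregated; only $300,000 counts,
# leaving $700,000 of capacity under the crowdfunding exemption.
print(remaining_crowdfunding_capacity(prior_sales, date(2013, 10, 30)))  # 700000
```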

Startups: Limits on the Method of Crowdfunding

Under Section 4(a)(6)(C), an offering seeking the crowdfunding exemption must be “conducted through a broker or funding portal that complies with the requirements of Section 4A(a).”  This means that crowdfunding can only occur through an intermediary, and that intermediary must meet the requirements of either (1) a broker, or (2) a funding portal. The SEC proposed two related limitations here:

1) Single intermediary – Prohibits an issuer from using more than one intermediary to conduct an offering or concurrent offerings made in reliance on the crowdfunding exemption.  For example, you couldn’t use both FundMyStartUp.com and CrowdfundMyDreams.com for the same offering or even for different offerings when conducted concurrently.

2) Online-only requirement – Requires that an intermediary (i.e., the broker or funding portal) effect crowdfunding transactions exclusively through the intermediary’s platform.  The term “platform” means “an Internet website or other similar electronic medium through which a registered broker or a registered funding portal acts as an intermediary in a transaction involving the offer or sale of securities in reliance on Section 4(a)(6).”

According to the SEC’s notice, with respect to the online-only requirement:

“We believe that an online-only requirement enables the public to access offering information and share information publicly in a way that will allow members of the crowd to decide whether or not to participate in the offering and fund the business or idea.  The proposed rules would accommodate other electronic media that currently exist or may develop in the future. For instance, applications for mobile communication devices, such as cell phones or smart phones, could be used to display offerings and to permit investors to make investment commitments.”

A Quick Note about Funding Portals

As mentioned above, to fit within the crowdfunding exemption, the offering must be conducted through a broker or funding portal that complies with the requirements of Securities Act Section 4A(a).

Exchange Act Section 3(a)(80) (added by Section 304 of the JOBS Act) defines the term “funding portal” as any person acting as an intermediary in a transaction involving the offer or sale of securities for the account of others, solely pursuant to Securities Act Section 4(a)(6), that does not: (1) offer investment advice or recommendations; (2) solicit purchases, sales or offers to buy the securities offered or displayed on its platform or portal; (3) compensate employees, agents or other persons for such solicitation or based on the sale of securities displayed or referenced on its platform or portal; (4) hold, manage, possess or otherwise handle investor funds or securities; or (5) engage in such other activities as the Commission, by rule, determines appropriate.

Under the SEC’s proposed rules, the definition of “funding portal” is exactly the same as the statutory definition, except the word “broker” is substituted for the word “person”.  The SEC is making clear that funding portals are brokers (albeit a subset of brokers) under the federal securities laws.

Investors: Limits on Amount Invested

Under Section 4(a)(6)(B), the aggregate amount sold to any investor by an issuer, including any amount sold in reliance on the exemption during the 12-month period preceding the date of such transaction, cannot exceed: “(i) the greater of $2,000 or 5 percent of the annual income or net worth of such investor, as applicable, if either the annual income or the net worth of the investor is less than $100,000; and (ii) 10 percent of the annual income or net worth of such investor, as applicable, not to exceed a maximum aggregate amount sold of $100,000, if either the annual income or net worth of the investor is equal to or more than $100,000.”

Because the statutory definition above creates some potential ambiguity, the SEC’s rule seeks to clarify the relationship between annual income and net worth for purposes of determining the applicable investor limitation.  Essentially, the proposed rules take a “whichever is greater” approach for determining whether limitation (i) or (ii) applies (a rough calculation sketch follows the list below).  As the rule proposes:

  • Where both annual income and net worth are less than $100,000, then the limitation will be set at the greater of (a) $2,000 or (b) the greater of (x) 5% of annual income or (y) 5% of net worth.
  • Where either annual income or net worth is equal to or more than $100,000, then the limitation will be set at the greater of (a) 10% of annual income or (b) 10% of net worth; provided, however, that in either case the amount may not exceed $100,000.
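Here is a minimal sketch, assuming the “whichever is greater” reading described above, of how the per-investor limit might be computed.  The function name and the example figures are hypothetical; this illustrates the proposed thresholds and is not a compliance tool.

```python
def investor_limit(annual_income, net_worth):
    """Return the proposed 12-month crowdfunding investment limit for one investor."""
    if annual_income < 100_000 and net_worth < 100_000:
        # Greater of $2,000 or 5% of the greater of annual income / net worth
        return max(2_000, 0.05 * max(annual_income, net_worth))
    # Otherwise: 10% of the greater of annual income / net worth, capped at $100,000
    return min(100_000, 0.10 * max(annual_income, net_worth))

print(investor_limit(30_000, 25_000))      # 2000    ($2,000 floor applies)
print(investor_limit(40_000, 90_000))      # 4500.0  (5% of $90,000 net worth)
print(investor_limit(150_000, 80_000))     # 15000.0 (10% of $150,000 income)
print(investor_limit(900_000, 2_000_000))  # 100000  (capped at $100,000)
```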

Related to investor limits, but more important for startups to understand, the proposed rules alleviate burdens associated with vetting investor suitability.  Specifically, the rule allows startups to reasonably rely on the efforts that the intermediary takes in order to determine that the amount purchased by an investor will not cause the investor to exceed investor limits.

Imminent Expansion of the Security Breach Notification Law

Back in 2003, California became the first state in the U.S. to pass a security breach notification law.  California’s Security Breach Notification Law applies to any business that conducts business in California, which of course means that the law reaches nearly all companies that have an e-commerce presence.  In a nutshell, the statute requires businesses to notify California residents when the security of such residents’ personal information has been breached.  The rationale behind the law is that breach notification ensures that residents become aware of a breach, thereby allowing them to take actions to mitigate potential financial losses due to fraudulent use of their personal information.

Attribution: Tom Murphy

Fast forward ten years.  The California Attorney General’s specialized eCrime Unit found that increasingly “criminals are targeting Internet Web sites with inadequate security, including some social media Internet Web sites, to harvest email addresses, user names, and passwords,” and “[b]ecause most people do not use unique passwords for each of their accounts, acquiring the information on one account can give a thief access to [many different] accounts.”

And so, on September 10, the California legislature passed and sent to the Governor’s desk a bill that would amend California’s security breach notification law in a significant way.  This is the second bill in as many weeks to reach the Governor’s desk addressing consumer privacy.  Last week it was AB-370, which I discussed here.  This week, it is California Senate Bill 46 (SB-46), which would expand the definition of “personal information” subject to California’s existing security breach disclosure requirements to include “a user name or email address, in combination with a password or security question and answer that permits access to an online account.”  This could have a significant impact, given that notification requirements following a security breach incident depend upon whether the compromised data falls within the definition of “personal information”.

Overview of California’s Security Breach Notification Law

California’s Security Breach Notification Law (Section 1798.82 of the California Civil Code) requires businesses that own or license computerized data consisting of personal information to disclose any breach of the security of the system following discovery of such breach to any resident of California whose unencrypted personal information was believed to be acquired by an unauthorized person.  The triggering event is a “breach of the security of the system”, which means the unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by the business.  Likewise, 1798.82 requires businesses that maintain (but do not own or license) computerized data consisting of personal information to notify the owner or licensee of the information of any associated security breach immediately following the discovery of such breach.

Where a data breach occurs and a business is required to issue a notification, the law requires that the notification be written in plain language, and include (1) the name and contact information of the business, (2) the types of personal information that were believed to have been the subject of a breach, (3) the estimated date, or date range, of the breach, (4) the date of the notice, (5) whether the notification was delayed as a result of a law enforcement investigation, (6) a general description of the breach incident, and (7) the toll-free telephone numbers and addresses of the major credit reporting agencies if the breach exposed a social security number or a driver’s license number.  Additionally, at the discretion of the business, the notification may also include information about what the business has done to protect individuals whose information has been breached and advice on steps the individual may take to protect him/herself.

Up until what appears to be the imminent passage of SB-46, the definition of “personal information” meant an individual’s first name or first initial and last name in combination with that individual’s (1) social security number, (2) driver’s license or California ID number, (3) account number, in combination with any required security code, PIN, or password that would permit access to that individual’s financial account, (4) medical information, or (5) health insurance information, when either the name or any of the data elements (1)-(5) are not encrypted.

How SB-46 Amends Section 1798.82

SB-46, if signed by Gov. Jerry Brown, would amend 1798.82 in three notable ways.  First, and probably most significantly, SB-46 would broaden the definition of “personal information” to include “a user name or email address, in combination with a password or security question and answer that would permit access to an online account.”  Unlike the existing data elements (e.g., social security number, medical information, etc.), this new category of personal information does not need to be in combination with the individual’s name to be deemed personal information.

Second, and perhaps in an effort to mitigate the impact that will surely be felt by companies, the bill would provide a streamlined notification process for breaches concerning the new online account information category of personal information.  The streamlined notification process would allow the business to comply with notification requirements by providing the security breach notification in “electronic or other form that directs the person whose personal information has been breached promptly to change his or her password and security question or answer, as applicable, or to take other steps appropriate to protect the online account with the business and all other online accounts for which the person whose personal information has been breached uses the same user name or email address and password or security question or answer.”

Third, the bill would create a variation on the streamlined notification process for breaches concerning login credentials of an email account that is furnished by the business.   For these businesses (i.e., email service providers) the business must provide notice by the traditional method required under the current notification requirements (i.e., non-streamlined) or “by clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an Internet Protocol address or online location from which the business knows the resident customarily accesses the account.”
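Pulling the first of these changes together with the existing definition, here is a simplified sketch of the “personal information” test under Section 1798.82 as it would read after SB-46.  The field names are hypothetical, encryption status is reduced to a single flag, and the enumerated data elements are abbreviated; the statute’s actual text controls, of course.

```python
# Simplified stand-ins for the statute's enumerated data elements.
DATA_ELEMENTS = {
    "ssn", "drivers_license", "financial_account_with_access_code",
    "medical_information", "health_insurance_information",
}

def is_personal_information(record):
    """Return True if a compromised record would trigger breach notification."""
    # Traditional category: a name plus an unencrypted enumerated data element.
    if record.get("name") and not record.get("encrypted", False):
        if DATA_ELEMENTS & set(record.get("elements", [])):
            return True
    # New SB-46 category: a user name or email address, in combination with a
    # password or security question/answer; no name required.
    has_login_id = record.get("username") or record.get("email")
    has_credential = record.get("password") or record.get("security_qa")
    return bool(has_login_id and has_credential)

# A breached table of email addresses and passwords, with no names attached,
# now falls within the definition.
print(is_personal_information({"email": "user@example.com", "password": "hunter2"}))  # True
print(is_personal_information({"name": "Jane Doe", "elements": ["ssn"]}))             # True
```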

Certainly, with data breaches on the rise, and with usernames/email addresses and passwords commonly collected by companies that have an e-commerce or social network presence, the additional category of personal information introduced by SB-46 will have a compounding effect on companies’ notification obligations.  Going forward, companies would be wise to put together a strategy for treating usernames/emails in combination with passwords (or security questions/answers) just as they would a person’s name in combination with a social security number under their existing information security policies.

Do Not Track: How Impending California Law Will Affect All Commercial Web Sites

Do Not Track has found a home in California.  As of September 3rd, California Assembly Bill No. 370 (“AB-370”) sits upon Governor Jerry Brown’s desk awaiting his signature.  Once signed, the bill will amend the California Online Privacy Protection Act (“CalOPPA”, at Section 22575 of the California Business and Professions Code) and will require commercial website operators that collect personally identifiable information (“PII”) through the Internet to disclose how they respond to Do Not Track (“DNT”) signals.  Most mainstream web browsers have functionality that allows the user to signal her desire not to be tracked.  However, under current federal and state law, websites are not legally required to honor that signal.  While AB-370 does not make honoring a DNT signal a legal requirement, it does aim to inform consumers as to which websites have a practice in place to honor DNT signals.

Attribution: Electronic Frontier Foundation

Background on the Existing CalOPPA Statute

In 2003, the California Legislature passed CalOPPA.  The law requires operators of “web sites and online services” that collect users’ PII to conspicuously post their privacy policies on their sites and to comply with the posted policies.  CalOPPA currently requires privacy policies to identify the categories of PII collected, the categories of third parties with whom that PII may be shared, the process for consumers to review and request changes to their PII, the process for notifying users of material changes to the privacy policy, and the effective date of the privacy policy.  An operator has 30 days to comply after receiving notice of noncompliance with the privacy policy posting requirement.  Failure to comply with the CalOPPA requirements may result in penalties of up to $2,500 for each violation.

It is important to note that CalOPPA has broad reach.  Virtually all commercial websites fall within its scope for two reasons.  First, it is hard to imagine any commercial website not collecting PII, which (for the purposes of CalOPPA) is defined under Section 22577 as “individually identifiable information about an individual consumer collected online by the operator from that individual and maintained by the operator in an accessible form, including […] (1) a first and last name, (2) a home or other physical address, including street name and name of a city or town, (3) an e-mail address, (4) a telephone number, (5) a social security number, (6) any other identifier that permits the physical or online contacting of a specific individual, or (7) information concerning a user that the Web site or online service collects online from the user and maintains in personally identifiable form in combination with an identifier described in this subdivision.”  Second, even though this is a California law, it applies to any website that collects PII from consumers residing in California.  As such, CalOPPA (including the AB-370 revisions) has a de facto nationwide reach.

The Need to Amend CalOPPA (via AB-370)

The impetus for introducing AB-370 is the growing concern over online tracking, which is also referred to as online behavioral targeting.  According to the good folks at Wikipedia,

“When a consumer visits a web site, the pages they visit, the amount of time they view each page, the links they click on, the searches they make and the things that they interact with, allow sites to collect that data, and other factors, create a ‘profile’ that links to that visitor’s web browser. As a result, site publishers can use this data to create defined audience segments based upon visitors that have similar profiles. When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow advertisers to position their online ads in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered. On the theory that properly targeted ads will fetch more consumer interest, the publisher (or seller) can charge a premium for these ads over random advertising or ads based on the context of a site.”

And, by many accounts, the practice of online behavioral targeting is on the rise.  Last year, the Wall Street Journal featured an article describing user-tailored advertising and the explosive demand for consumer data collected through web browsers.  One such practice is the online auctioning of consumer web-browsing data.  The article notes that “[d]espite rising privacy concerns, the online industry’s data-collection efforts have expanded in the past few years. One reason is the popularity of online auctions, where advertisers buy data about users’ Web browsing. Krux [which sells a service for website publishers to protect their customer data] estimated that such auctions, known as real-time bidding exchanges, contribute to 40% of online data collection.”  The article cites one study in which the average visit to a webpage triggered 56 instances of data collection.

And, so, here we have AB-370 to the rescue.  According to the bill’s author, Assemblyman Al Muratsuchi, AB-370 “would increase consumer awareness of the practice of online tracking by websites and online services, […] [which] will allow the consumer to make an informed decision about their use of the website or service.”

CalOPPA After AB-370

In addition to the requirements under the existing law outlined above, the amended CalOPPA will:

1) Require an operator’s privacy policy to disclose how the operator responds to web browser DNT signals or “other mechanisms that provide consumers the ability to exercise choice regarding the collection of PII about an individual consumer’s online activities over time and across third-party Web sites or online services,” provided the operator engages in such PII collection;

2) Require an operator’s privacy policy to disclose whether third parties may collect PII about an individual consumer’s online activities over time and across different Web sites when a consumer uses the operator’s site; and

3) Permit an operator to satisfy the response disclosure requirement for DNT signals by providing a clear and conspicuous hyperlink in the privacy policy to an online location containing a description, including the effects, of any program or protocol the operator follows that offers the consumer that choice.

For all the non-techies out there, it may be useful to quickly explain how Do Not Track technology works.  It is actually relatively simple.  In practice, a consumer wishing to communicate a DNT signal to the sites she visits would generally do so via her web browser controls.  When she changes the setting in her browser preferences, the browser enables the HTTP header field (known as the “DNT Header”) that requests that a web application disable its tracking of an individual user.  The header field name is DNT and it accepts three values: “1” if the user does not want to be tracked (opt out), “0” if the user consents to being tracked (opt in), or “null” (no header sent) if the user has not expressed a preference.  The default behavior required by the standard is not to send the header (i.e., the null value) until the user chooses to enable the setting in her browser.
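For illustration, here is a minimal sketch of how a server-side application might read and interpret the DNT Header.  It is written as a bare WSGI application in Python; the function names and response text are hypothetical, and a real site would act on the preference (for example, by skipping tracking cookies) rather than merely echoing it.

```python
from wsgiref.simple_server import make_server

def interpret_dnt(environ):
    """Read the user's tracking preference from a WSGI request environment.

    HTTP request headers appear in the WSGI environ with an HTTP_ prefix,
    so the DNT Header arrives as environ["HTTP_DNT"] (absent if not sent).
    """
    value = environ.get("HTTP_DNT")
    if value == "1":
        return "opt-out"      # user does not want to be tracked
    if value == "0":
        return "opt-in"       # user consents to being tracked
    return "unspecified"      # no header sent; no preference expressed

def app(environ, start_response):
    preference = interpret_dnt(environ)
    # A site honoring DNT:1 would suppress tracking here; this demo simply
    # echoes the detected preference back to the visitor.
    body = ("Your Do Not Track preference: " + preference).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()
```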

Implications of AB-370

Before going into the implications of the bill, it should be made clear what AB-370 is not.  One thing that the text of the bill and supporting commentary make clear is that AB-370 is not a Do Not Track law.  Back in March 2012, the FTC finalized the “Protecting Consumer Privacy in an Era of Rapid Change” report, in which the FTC endorsed the implementation of a Do Not Track system.  The report is not a regulation and, as such, there remains (even after AB-370 is signed into law) no legal requirement for sites to honor the headers.

In contrast, AB-370 is a disclosure law.  Its aim is to promote transparency.  The logic goes something like this:  If a privacy policy discloses how an operator handles a Do Not Track signal from a browser, then individual consumers will make informed decisions about their use of the site or the service.  As the California Attorney General’s Office put it, “AB-370 is a transparency proposal, not a Do Not Track proposal. When a privacy policy discloses whether or not an operator honors a Do Not Track signal from a browser, individuals may make informed decisions about their use of the site or service.”

What Remains to Be Seen Through AB-370 Transparency

On the surface, the bill’s disclosure requirement might seem simple.  The next logical question, however, is “but how, exactly?”  Despite the best efforts of industry consortiums such as the World Wide Web Consortium (W3C), there is still no clear consensus on how to handle DNT signals.  Even less clear is how best to handle DNT signals in the face of third-party tracking on the operator’s site.  So, by extension, how best to disclose the operator’s handling of DNT signals is likewise unclear.  Until an industry practice becomes standardized, the best way forward has to be for the operator of the site to simply (but accurately) state how it responds to the DNT Header.  By way of example, this could perhaps be achieved by adding one of the following sentences to the operator’s privacy policy:

  • If the operator does not recognize Do Not Track signals: “This Site does not receive or respond to the DNT Header.”
  • If the operator does recognize Do Not Track signals: “This Site receives the DNT Header and responds to a DNT:1 value by … {fill in the blank with how data collection by the operator and/or its third parties is impacted}.”

Lastly, even though AB-370 is a disclosure law and not a legal requirement to honor DNT signals, the practical effect could leave little distinction.  Consumer Watchdog predicts, albeit somewhat cautiously, that “requiring transparency could well prompt companies to compete based on their privacy practices [and] will likely prompt more companies to honor Do Not Track requests […]”.  How website operators react to the full transparency impact of AB-370 will be interesting to see! (Pun entirely intended)

The APPS Act: “Mobile Data as the Oil of the 21st Century”?

Last month, Georgia Congressman Hank Johnson introduced the bipartisan Application Privacy, Protection and Security (APPS) Act of 2013.  The bill’s objective is “to provide for greater transparency in and user control over the treatment of data collected by mobile applications and to enhance the security of such data.”  If enacted, the APPS Act would require mobile app developers to maintain privacy policies, obtain consent from consumers before collecting data, and securely maintain the data they collect.  In introducing the bill, Rep. Johnson called for a common sense approach to meeting the challenges associated with data privacy on mobile devices.

Interestingly, the bill itself was a product of a crowdsourcing effort, of sorts.  About a year ago, Rep. Johnson launched AppRights, an online forum for interested parties to build mobile privacy legislation at a grassroots level.  According to Rep. Johnson, “the overwhelming majority of participants who helped build the legislation – more than 80 percent – confirmed that Congress should protect consumers’ privacy on mobile devices […] [and] wanted simple controls over privacy on devices, security to prevent data breaches, and notice and information about data collection on the device.”  Rep. Johnson even has a Facebook page specific to this proposed legislation; however, at the time of this posting there appear to be only 45 “Likes”, which may be indicative of the actual response to his crowdsourcing initiative.

Putting the efficacy of crowdsourcing for federal law aside, the proposed APPS Act is largely consistent with the Federal Trade Commission’s (FTC) recent Staff Report and initiatives by States, such as the California Attorney General’s Recommendation Memo. If passed, here are the major impacts to mobile app developers and users:

1. Creates two important definitions:

  • “De-identified Data” is a term used in the Act and defined to mean data that cannot be used to identify or infer information about a particular person or that person’s mobile device. This definition is important because the Act largely governs mobile apps that collect Personal Data, which by definition does not include De-identified Data.
  • “Personal Data” (hereinafter “PD”) is used throughout the Act and the bulk of the Act’s provisions rely heavily on this term. So, wouldn’t it be nice if there were a clear and understandable definition of PD in the Act? Yes, but no such luck. Other than to say that PD does not include De-identified Data, the Act punts to the FTC to define this term by regulation.

2. Notice – Where a mobile app collects PD of a user, the app developer would first have to provide its users with a notice containing the terms by which the developer collects, uses, stores, and shares such PD of the user. While the Act would look to the FTC to prescribe the form and manner of such notices, the content of the notice would have to include the following (a rough sketch follows this list):

  • The categories of PD that will be collected;
  • The categories of purposes for which the PD will be used;
  • The categories of third parties with which the PD will be shared; and,
  • A data retention policy that governs the length for which the PD will be stored and the terms and conditions applicable to storage, including a description of the rights of the user to withdraw consent and the process by which the user may exercise such rights.
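As a rough sketch only, the required notice contents might be modeled with a structure like the one below.  The class and field names are my own invention (the Act leaves form and manner to future FTC rulemaking), and the sample values are purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataRetentionPolicy:
    storage_period: str        # how long the PD will be stored
    storage_terms: str         # terms and conditions applicable to storage
    withdrawal_rights: str     # the user's right to withdraw consent
    withdrawal_process: str    # how the user exercises that right

@dataclass
class AppPrivacyNotice:
    pd_categories: List[str] = field(default_factory=list)           # what is collected
    purpose_categories: List[str] = field(default_factory=list)      # why it is collected
    third_party_categories: List[str] = field(default_factory=list)  # with whom it is shared
    retention_policy: Optional[DataRetentionPolicy] = None

# Purely illustrative values for a hypothetical app.
notice = AppPrivacyNotice(
    pd_categories=["contact information", "device identifiers"],
    purpose_categories=["analytics", "advertising"],
    third_party_categories=["ad networks"],
    retention_policy=DataRetentionPolicy(
        storage_period="24 months",
        storage_terms="encrypted at rest; deleted on account closure",
        withdrawal_rights="the user may withdraw consent at any time",
        withdrawal_process="in-app settings, or by emailing the developer",
    ),
)
print(notice.pd_categories)
```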

3. Consent – Where a mobile app collects PD of a user, the app developer would first have to obtain the user’s consent to the notice. Interestingly, the statute uses the term “consent” as opposed to “express consent”, which may mean users could be found to impliedly consent to the app developer’s notice.

4. Post-consent Opt-out – This provision would require the mobile app developer to honor a user’s request to prohibit any further collection of PD by the developer. The Act would go an additional step and require the developer, after receiving a user’s opt-out request, to either delete that user’s PD or refrain from using or sharing that user’s PD. Interestingly, the decision to delete the PD or refrain from using/sharing the PD would be vested in the user, and not the developer.

5. Data Security – The app developer would have to implement “reasonable and appropriate measures” to protect against “unauthorized access” to any PD or De-identified Data that the app collects from the user. This provision appears to have very little bite to it, unless the FTC were to expand on it through regulation. Clearly, the APPS Act wants no part of the unwelcoming waters of cybersecurity law.

6. Enforcement – The Act would charge the FTC as the primary regulatory agency for purposes of enforcement and promulgation of regulations under section 18(a)(1)(B) of the FTC Act, which prohibits unfair or deceptive acts or practices. However, the Act provides States’ attorneys general or agencies with the right to bring a federal civil action when they believe the residents of their state are adversely affected by an app developer in violation of the Act.

7. Safe Harbor – App developers would be able to escape liability under the Act if the developer had implemented a “code of conduct for consumer data privacy.” To fit within the safe harbor provision, the code of conduct would have to (i) be developed through the NTIA’s multi-stakeholder process and (ii) comply with the yet-to-be-promulgated FTC regulations.

That’s it in a nutshell. It should be interesting to see how this bill progresses through Committee, if at all. The good folks at Govtrack are giving it a snowball’s chance in hell of getting passed; but, hey, you never know. So, stay tuned! Better yet, “Like it” on Facebook! Rep. Johnson needs all the support he can get.