
Your duties to your AI customers and their private data

Any discussion of “responsible AI” and your use of AI customer data simply cannot start with the pretense: “It’s currently not illegal, so we’re good!” As noted in our discussion of deepfake pornography last week, the law hasn’t caught up with the technology. The same can be said for data privacy. But it’s even harder for the law to catch up here in the U.S. when we seem to be moving sideways at best.

There is no comprehensive federal data privacy law in the U.S.

In the U.S., there is no constitutional right to data privacy. Nor is there a comprehensive federal law on data privacy.

In Europe, the rights to privacy and data protection are both considered fundamental rights.1 Under Europe’s comprehensive privacy law, General Data Protection Regulation (GDPR), companies have to ask individuals for permission to share data and individuals have the right to access, delete, or control the use of that data.2

In the U.S., the courts determine an individual’s data privacy rights and an organization’s ability to use her personal information on a case-by-case basis. The courts factor in a hodge-podge of considerations, including the individual’s state of residence3 and whatever federal or state statutes happen to apply.


As such, U.S. companies can by default sell their customers’ private information unless a law or court specifies otherwise. This is common knowledge. But most consumers simply do not have a full understanding of the implications and how this can harm them. “What’s currently unclear for many consumers is the complex and indirect ways that companies go about monetizing through tracking, bundling, and profiling their personal information and behavior in order to further influence parents’, kids’, and consumers’ behavior.”4

It would be both irresponsible and, in most cases, bad business for U.S. companies to sell their customers’ private information. Certainly not without consent.

Your key responsibilities as custodian of your AI customer data are to be transparent and take “reasonable measures” to secure it.

Note to AI implementers: this is not like internet paywall bypassing and deepfake pornography (discussed in Parts 1 and 2 of this series). The duty to your AI customers and their data is primarily, if not entirely, yours (not your AI provider’s).


I. Principle No. 1: Don’t be scum, cont’d

A. Rule 1(c): Be transparent about what you will do with your AI customer data

If your AI implementer or end-user customers are good with you selling or sharing the data they provide to you with a third party, then there is no problem. The catch is that they have to actually know what you are doing.

1. Facebook: Do as I say, don’t do as I do

Not surprisingly, you shouldn’t say you will keep your customers’ data secure and then renege and sell it anyway. Nor should you give assurances regarding your security measures and then not live up to them.

Facebook did all of the above in multiple ways, in specific violation of an earlier 2012 settlement it had made with the Federal Trade Commission.5 Facebook “repeatedly misrepresented the extent to which users could control the privacy of their data.”

To settle, Facebook agreed in 2019 to implement a new privacy structure and to submit to additional FTC monitoring. Facebook further agreed to a record $5 billion penalty.

As an AI business, you should determine whether you plan to use any of your AI implementer or end-user customers’ data to train your AI models, just as you should determine whether you plan to sell or share any of your customers’ data. Both purposes are unrelated to the delivery of your products or services.

It’s fine if you do. Just be transparent about it, most obviously by complying with any disclosure obligations.

2. It’s not “transparent” if you need a lawyer to understand the basics

a. How not in control of my data am I, exactly…?

There are three key questions that consumers should care about regarding the data they provide to an AI provider or implementer: 1.) Will you sell or share my data or use it to train your AI models? 2.) Will you de-identify my data first? and 3.) Do I have to opt out, or do you have to get my opt-in?

For my part, I’d prefer to know the answers (AI providers are simply not forthcoming regarding no. 1 in particular) and, better yet, have the ability to opt out of nos. 1 and 2.

But I would generally consent regardless, so long as the data is actually de-identified before it is sold, shared, or used.

b. When “legal disclosure” and practical reality collide…

I don’t, however, have time to pore over a company’s privacy policy to figure this all out before every purchase. Nor do I have the time to figure out all the ways an app will unnecessarily access the other information in my phone. And if an IP lawyer like myself can’t manage this, it’s safe to say that 99.9% of other people cannot either. For most, even if they had the time, they wouldn’t have the ability to sufficiently comprehend the ever finer print.

Some companies do better than others here, in particular with the “acknowledgment” boxes that must be checked before proceeding. Most, however, fail miserably.


There is no reason that companies couldn’t provide this basic information in plain English if they were either required or inclined to do so. All companies based or doing business in Europe are doing it, as required by the GDPR. All larger companies based or doing business in California are already doing it, as required by the CCPA.6


B. Rule 1(d): Take “reasonable measures” to secure your AI customer data

What would you do if a regulator like the Federal Trade Commission started an enforcement action against you for a data breach of your AI inputs or outputs? Or if your customer filed a private claim against you for the same?

Your primary defense will be to establish that your security efforts were “reasonable.” What counts as reasonable depends on several factors.

Guidance on this issue provided by various regulators is general, not specific. And it is not legally binding in any event.

Courts look to industry customs to inform a reasonable security measures analysis. And “in some instances, legislatures and regulatory agencies have already identified particular security measures or ‘controls’ to be worth the cost of implementation and have required them.”7

Some of the privacy measures the FTC imposed on Facebook should apply to AI companies of all sizes.8


C. Rule 1(e): Fess up if you have to.

There will be federal and state laws and regulations to come mandating what you should do in response to a data breach impacting your AI customers’ data.

Until then, just be aware that each state has its own data breach response requirements. Notification requirements vary state to state with respect to: 1.) regulators; 2.) credit/consumer reporting agencies; and 3.) impacted individuals.9

One universal takeaway is that you should keep your customers’ sensitive information encrypted. Virtually all states exempt you from any notification requirements in the event of a data breach when you do.10
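As a purely illustrative sketch (not legal or security advice; it assumes a Python stack and the third-party cryptography package, neither of which any statute prescribes), encrypting a sensitive field before it ever touches your datastore can be as simple as this:

```python
# Minimal illustrative sketch: encrypt a sensitive customer field at rest.
# Assumes Python and the third-party "cryptography" package (pip install cryptography).
# Key management (secrets manager, rotation, access controls) is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secrets manager, never hard-code it
cipher = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a sensitive value (e.g., an SSN) before it is persisted."""
    return cipher.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt only when an authorized process actually needs the plaintext."""
    return cipher.decrypt(token).decode("utf-8")

stored = encrypt_field("123-45-6789")
assert decrypt_field(stored) == "123-45-6789"
```

The hard part in practice is key management: an encrypted database with the key sitting right next to it buys you little protection, legally or otherwise.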

D. How current AI provider customer policies address use of customer data

1. For businesses

The market standard for the privacy of business proprietary data is being set as we speak.

a. AI provider enterprise service offerings from the big guns

AI providers, in particular the larger ones, will often agree not to share, sell, or use the data their enterprise customers provide for unrelated purposes.

The reason is simple. All businesses need to protect their confidential, proprietary information. This force is significant enough to even trump AI providers’ insatiable need for data to train their models. AI providers simply would not get business from the corporate world without such protections.

Both Google Workspace and Microsoft Copilot have clear policy statements or plans to isolate and protect their customers’ data from use in the training of their AI.11 Similarly, OpenAI “do[es] not use content submitted by customers to [its] business offerings such as [its] API and ChatGPT Enterprise to improve model performance.”12

Some AI providers, however, exclude commercial customer data from model training purposes only if the customer opts out.13

b. AI provider or implementer service offerings for businesses

The non-behemoth AI providers and implementers, however, take a wide variety of approaches on this front. Their AI service terms and conditions and privacy policies tend to be the same for their business customers as those for their consumer customers, as presented below.

2. For consumers

a. Will you sell or share my data with third parties or use it to train your AI models?

A minority of AI providers affirmatively state they will not sell or share their customer data without consent.

Many do sell or share their customers’ data, but few are particularly transparent about this. E.g., “This Policy places no limitations on our use or sharing of Aggregate/De-Identified Information.”14 Translation: “Yeah, we’re totally selling your data, but by disclosing this, you can’t sue us for lying about it, even though we’re kinda hiding it.”

b. Will you de-identify my data?

The best practice for AI providers is to “further take steps to reduce the amount of personal information in [] training datasets before they are used to improve [the AI provider’s] models.”15 This is in my estimation where the rubber will hit the road in terms of any government regulatory efforts to address the problem of AI and personal data privacy.
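To make the concept concrete, here is a deliberately simplistic sketch of that kind of pre-training scrubbing pass. The patterns and function names are my own illustration, not any AI provider’s actual pipeline, and real privacy-enhancing technologies go well beyond regexes:

```python
# Deliberately simplistic sketch: scrub obvious identifiers from text before it
# joins a training dataset. Real privacy-enhancing technologies go far beyond
# regexes (named-entity recognition, aggregation, differential privacy), but
# the principle is the same: remove the identifying data before the model sees it.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def de_identify(text: str) -> str:
    """Replace common personal identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

print(de_identify("Reach Jane Doe at jane.doe@example.com or 602-555-1234."))
# -> "Reach Jane Doe at [EMAIL REMOVED] or [PHONE REMOVED]."
# Note that the name "Jane Doe" survives; that is exactly the gap fuller PETs must close.
```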

The Biden executive order on AI references the need to develop and implement “privacy-enhancing technologies” (PETs), but only references imposing them on data collected and stored by federal agencies. If mitigating against the inexorable spread of personal data is the goal, however, it is the AI providers and implementers that must implement such PETs into their own generative AI processes.

In the absence of laws, regulations, and standards-setting for PETs, there will be no accountability on this issue. Just more aspirational statements, along the lines of: “We’re going to try our best to de-identify your information when we sell it to a third party for reasons unrelated to our services to you. But if we fail to do so and you become a victim of identity theft because of it, then oh well.”

c. Opt out or opt in?

AI providers generally give their users the ability to opt out of having their prompts and output used to train the AI. This aligns with the broader practice of requiring users to opt out of having their data sold to third parties.

Some advocate for these defaults to be flipped.
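Reduced to code, the entire debate is about a single default value. Here is a hypothetical sketch; the field and function names are mine, not drawn from any provider’s actual settings schema:

```python
# Hypothetical sketch of the opt-out vs. opt-in distinction. The field and
# function names are illustrative only, not any provider's actual schema.
from dataclasses import dataclass

@dataclass
class ConsentSettings:
    # Opt-out regime: the provider may train on your prompts unless you say no.
    use_prompts_for_training: bool = True
    # Opt-in regime: identifiable data is never sold or shared unless you say yes.
    share_identifiable_data: bool = False

def may_train_on(settings: ConsentSettings) -> bool:
    return settings.use_prompts_for_training

def may_share_identifiable(settings: ConsentSettings) -> bool:
    return settings.share_identifiable_data

# A user who never opens the privacy settings gets whatever the defaults allow.
default_user = ConsentSettings()
print(may_train_on(default_user))            # True  (opt-out: silence means yes)
print(may_share_identifiable(default_user))  # False (opt-in: silence means no)
```

Flipping the default on the first field is all that “opt-in for training” would require; the user who never touches their settings is the one the default actually governs.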

The key distinction for me remains whether or not the data is “de-identified” from my name. But so-called “reasonable efforts” to de-identify are not sufficient here.

Actually de-identified, not merely attempted, should be the standard on this specific issue. The AI provider is, in effect, running a side hustle off of my data. The AI provider has control over how it collects and processes its data. And the AI provider has the (AI) technological means to screen it. The standard should be higher.

If my personal data is actually de-identified at a minimum as of the time that it is sold, shared, or used, then I would generally have no issue with an opt-out regime. Examples of AI providers that specify that they will use or share only aggregate/de-identified information include Scribe16 and Anthropic.17 But most build in considerable wiggle room in their verbiage.

But if not, then responsible AI requires that the default be opt-in. People should have to affirmatively provide clear, informed consent for their data to be sold or shared with third parties for unrelated purposes, especially if there is any risk that sensitive information will be connected back to them.

*Note: Nothing in this blog constitutes legal advice or the formation of any attorney-client relationship. See Disclaimers.


II. Conclusion

It will be interesting to see how the politics on these data privacy issues develop in our new AI age.

AI is the poison. It greatly heightens the data privacy risks involved. Any private information that is posted publicly even for an instant may never become private again. AI will find it.

AI is also the closest approximation to a cure that we have. In principle, AI can be used to find all personal identifying information and remove it (at least in a given data repository at a given moment in time). The devil as always will be in the details.

Figuring out what other AI providers are doing and what federal and state laws currently apply should be your starting point for mitigating against AI data security risks. Figuring out where the law is going should be your target.

Find experienced counsel who actively tracks all of this. Someone who can help you develop and implement comprehensive AI and cybersecurity policies now. This will save you a lot of heartache down the road.

III. Where we’ve come from and where we’re going

We’ve completed our journey through the responsible AI no-brainers over the past three weeks. We’ve covered internet paywalls, deepfake pornography, and customer data privacy policies.

In the coming weeks, we’ll be tackling tougher-to-call topics.



  1. European Convention of Human Rights, art. 8 (Right to respect for private and family life); European Charter of Fundamental Rights, arts. 7 & 8 (Protection of Personal Data. “1. Everyone has the right to the protection of personal data concerning him or her. 2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. 3. Compliance with these rules shall be subject to control by an independent authority.”). ↩︎
  2. General Data Protection Regulation (GDPR), art. 5 (Principles relating to processing of personal data), available here. ↩︎
  3. The state of residence is particularly important if it is California, which has developed its own comprehensive data privacy law similar to Europe’s GDPR. The California Consumer Privacy Act (CCPA) provides strong consumer-privacy-friendly mechanisms such as: 1.) a “global opt out,” by which residents can set their internet browsers to automatically notify every website that the user wishes to opt out of the sale of their personal data or its use for targeted advertising (six other states’ privacy laws require this too, but only California put some teeth behind it–see Samuel Adams and Stacey Gray, Survey of Current Universal Opt-Out Mechanisms, Future of Privacy Forum (Oct. 12, 2023), available here); and 2.) a private right of action by which individuals can sue companies directly for violations (California stands alone here–see Comparing US state-level data privacy laws, usercentrics, available here). Most states do not have a comprehensive data privacy law in place or in the works, and the majority of those that do are not so consumer-privacy friendly, including Virginia. For a website tracking U.S. state privacy legislation, see The International Association of Privacy Professionals (IAPP) here. For some critical commentary, see Todd Feathers, Big Tech is Pushing States to Pass Privacy Laws, and Yes, You Should Be Suspicious, The Markup (April 15, 2021), available here.

    The private right of action is a particularly hot-button issue for the development of any federal or state comprehensive data privacy statute. The consumer-privacy side of the argument is that, in the absence of such a right to sue, individuals have to rely entirely on federal law enforcement or state attorneys general (if there is an applicable federal or state law) or federal or state regulators (again, if applicable) to protect their privacy interests, and that is more miss than hit. The business side of the argument is that private rights of action will be abused by individuals and their attorneys, imposing disproportionate and potentially debilitating litigation costs on businesses.

    Some of the existing applicable federal laws specifically provide for private rights of action, most notably The Fair Credit Reporting Act, which permits consumers to recover actual damages from “any person who is negligent in failing to comply with a [credit reporting] requirement” and punitive damages for willful violations. 15 U.S.C. §§ 1681n–1681o (1996), as amended by the Fair and Accurate Credit Transactions Act in 2003. See also Telephone Consumer Protection Act, 47 U.S.C. § 227 (1991) (providing a private right of action for “actual monetary loss or $500 per telemarketing violation, whichever is greater,” and up to treble damages for willful violations). ↩︎
  4. Jeff G., A Majority of Apps Are About to Come Clean and Say They’ve Been Selling Your Data All Along, common sense education (Mar. 29, 2022), available here. For a comprehensive study, see 2021 State of Kids’ Privacy, common sense education (2021), available here. ↩︎
  5. To be precise, Facebook made customer data available to other parties either in exchange for more data or as payment to, e.g., Facebook app developers. See, e.g., Alexis C. Madrigal, Facebook Didn’t Sell Your Data; It Gave It Away, The Atlantic (Dec. 19, 2018), available here. You can judge for yourself whether Zuckerberg had his fingers crossed when he claimed in his infamous Wall Street Journal op-ed in response to the Cambridge Analytica scandal, “We don’t sell people’s data, even though it’s often reported that we do.”

    In the Cambridge Analytica scandal itself, the British consulting firm paid for data harvested by a third party that developed an app using an Application Programming Interface that Facebook made available. Through this Facebook API, the data of 87 million Facebook users was accessed, including public and private information. For a discussion of the inadequate security measures Facebook put in place for this API, see Ronnie Mitra, How the facebook API led to the Cambridge Analytica Fiasco, APIacademy (June 15, 2018), available here. ↩︎
  6. The CCPA applies to for-profit businesses that do business in California and meet any of the following:
    — Have a gross annual revenue of over $25 million;
    — Buy, sell, or share the personal information of 100,000 or more California residents, households, or devices; or
    — Derive 50% or more of their annual revenue from selling California residents’ personal information. ↩︎
  7. The Sedona Conference, Commentary on a Reasonable Security Test, 22 Sedona Conf. J. 345, 358 (2021). ↩︎
  8. See Andrew Morse and Queenie Wong, Facebook-FTC settlement: What you need to know about the $5 billion deal, CNET, available here. ↩︎
  9. For a summary of this information and how to develop a data breach incident response plan, see The Sedona Conference, Incident Response Guide, 21 Sedona Conf. J. 125 (2020). ↩︎
  10. Id. at 182-83. ↩︎
  11. See How we’re protecting your Google Workspace data in the era of generative AI, Google (“Your data is your data,” “Your data stays in Workspace,” “Your content is not used for ads targeting,” “Your interactions with Duet AI stay within your organization”, “Your content is not used for any other customers,” etc.), available here. See Our vision to bring Microsoft Copilot to everyone, and more, Microsoft Bing Blogs (Nov. 15, 2023) (“With Copilot’s commercial data protection, prompts and responses are not saved, Microsoft has no eyes-on access to it, and it’s not used to train the underlying models.”), available here; see also Data, Privacy, and Security for Microsoft Copilot for Microsoft 365, Microsoft (Dec. 5, 2023), available here. Naturally this only applies to your data that you keep within the Google or Microsoft platforms and to which you apply the required security settings. ↩︎
  12. See Michael Schade, Data usage for consumer services FAQ, OpenAI, available here. ↩︎
  13. See, e.g., Cohere Data Usage Policy, Cohere (last update: Oct. 30, 2023), available here. ↩︎
  14. See Privacy Policy, Colony Labs (d/b/a Scribe), available here. ↩︎
  15. See Michael Schade, How your data is used to improve model performance, OpenAI, available here. See also Cohere Data Usage Policy, supra note 13 (“API data undergoes a sanitization process before storage. Before being fed into any training models, our team removes common sources of personal information.”). ↩︎
  16. See Privacy Policy, Colony Labs (d/b/a Scribe) (“This Policy places no limitations on our use or sharing of Aggregate/De-Identified Information.”), available here. ↩︎
  17. See Privacy Policy, Anthropic (version 3.0, effective July 8, 2023) (“We use your personal data for the following purposes … To de-identify it and train our AI models”), available here. ↩︎