April 15, 2020
These days, most companies are focused on the myriad legal, health and safety, and financial issues caused by COVID-19. While the firm is actively monitoring these issues,1 we also want to keep you abreast of other developments that may be relevant to your business. Here, we provide an overview of guidance recently offered by Andrew Smith, the director of the Federal Trade Commission’s (FTC) Bureau of Consumer Protection, on how to manage consumer protection risks associated with artificial intelligence (AI) and algorithms.2 While recognizing AI’s significant potential, Smith explained that, to avoid consumer protection problems, companies should ensure that their use of AI tools is transparent, explainable, fair, and empirically sound, while fostering accountability.
More detail on Smith’s guidance on each of these points is included below.
• Smith noted that companies should be careful not to mislead when using AI tools, such as chatbots, to interact with consumers. He explained that this is an area where the FTC has been active, taking action against companies that used fake dating profiles to convince consumers to sign up for a dating service,3 and that sold fake followers, subscribers, and “likes” to enhance other companies’ and individuals’ social media influence.4
• According to Smith, companies should also be transparent about their collection of sensitive data and consumers’ choices with respect to the collection of that data. He noted that failing to do so could give rise to an FTC action, using the FTC’s recent action against Facebook as an example.5
• Finally, to be more transparent, Smith said that companies should give consumers adverse action notices where legally required to do so. He explained that companies that make automated decisions based on information received from a “consumer reporting agency” (CRA) may be required to give consumers adverse action notices under the Fair Credit Reporting Act (FCRA).6 For example, charging a consumer higher rent based on a risk score received from a background check company triggers a requirement under the FCRA to inform the consumer of their right to access the information received about them and to correct inaccurate information.
• According to Smith, companies that deny consumers something of value based on algorithmic decision-making should be able to explain why. He noted that companies that use AI to make decisions about consumers in any context should be able to explain to consumers what data is used in their models and how that data is used to arrive at a decision.
• When using algorithms to assign risk scores to consumers, Smith explained that companies should also disclose the key factors that affected the score in order of importance.
• Smith also noted that companies should notify consumers if they change the terms of a deal based on automated tools. As an example, Smith noted that, over a decade ago, the FTC took action against a subprime credit marketer that failed to disclose that it used a behavioral scoring model to reduce consumers’ credit limits.7
• To ensure fairness, Smith encouraged companies to periodically test their algorithms. He explained that testing should take place before algorithms are used and periodically afterwards to ensure they do not discriminate against or disparately impact a protected class.
• Similarly, Smith suggested that companies evaluate inputs and outputs for potential discrimination issues. According to Smith, companies should review the types of information that go into their models to determine whether they include ethnically based factors or proxies for such factors, such as census tract. He also said that companies should consider testing their outputs to ensure they are not discriminating against or disparately impacting a protected class.
• Smith also asked companies to consider giving consumers access to the information they use to make decisions about them and allowing consumers to dispute the accuracy of that information even if they are not legally required to do so under the FCRA.
• Smith flagged the importance of ensuring that data used in models is accurate and up to date. He explained that companies that provide consumer data to others who use the data to make eligibility decisions about consumers may be required under the FCRA to implement procedures to ensure the data is accurate and up to date. He also noted that companies that furnish data about their customers to others for use in automated eligibility decisions may have similar obligations under the FCRA.
• Smith encouraged companies to take steps to ensure their AI tools are “empirically derived, demonstrably and statistically sound.” For example, he suggested that companies assess whether their tools are: based on data derived from an empirical comparison of sample groups; developed, validated, and periodically reevaluated using accepted statistical principles and methodology; and adjusted as necessary to maintain predictive ability.
• To avoid risks of bias or other harm to consumers, Smith identified four key questions that companies should ask before using an algorithm: 1) how representative is your data set? 2) does your model account for biases? 3) how accurate are your predictions based on big data? and 4) does your reliance on big data raise ethical or fairness concerns?
• Smith noted that companies that sell AI tools to other businesses should consider whether access controls and other technologies can be used to prevent unauthorized use.
• Finally, Smith discussed the importance of evaluating accountability mechanisms, suggesting that companies consider using independent standards or experts to test their AI tools for risks of bias or other harm to consumers.
Compliments of Wilson Sonsini Goodrich & Rosati – a member of the EACCNY.