Racial Bias in Algorithmic Patenting

Dan L. Burk

University of California, Irvine

Machine learning systems, a form of artificial intelligence, are increasingly being deployed across a range of social practices, including both the development of innovative technologies and the administration of rights associated with those technologies.  Among many other applications, AI systems promise better designs for mechanical and electrical inventions, targeted development of chemical structures, and optimized outcomes for methods of pharmaceutical treatment.  Current proposals also contemplate the use of AI systems in the examination of patent applications, in determining metrics for statutory obviousness, and in other procedural applications.

At the same time, evidence of racial bias in the patent system is manifest and growing.  As with many other social practices, patent standards and procedures appear to systematically exclude disfavored minorities.  Only a small fraction of U.S. patent applications name African-American inventors, and other minority populations appear to be similarly underrepresented.  Legal historians have begun to trace the discriminatory legal and social factors that have culminated in this outcome.  Nor is such discrimination a historical relic; current data indicates that patent applications naming minority inventors are more likely to be denied or narrowed.  Even if explicit exclusionary policies have been remedied, stark evidence of implicit bias remains.

Some legal scholars have already noted that as AI becomes part of the patent landscape, the biases present in existing patent norms and practices will inevitably infect algorithmic processes trained on data from past practices.  This can be expected both in the development of new inventions and in the legal administration of rights through AI.  A variety of solutions to this problem, such as greater procedural transparency, have been proposed.  But since racial bias is already endemic throughout the patent system, one might argue that, in one sense, racial bias in AI patenting poses no new problems.  AI systems may perpetuate current biases, but this is not a problem unique to, or arising from, the use of AI systems.

This suggests that systemic bias in algorithmic or algorithm-assisted decisions may be no worse, even if no better, than patenting decisions currently made by humans.  The solution to racial bias in the patent system may be, as Anupam Chander has suggested, to strive to root out discriminatory outcomes from whatever source, focusing on corrective measures that ameliorate the discriminatory result rather than worrying about its origins.  Racial bias in the current patent system is undesirable, but the addition of AI systems requires special consideration only to the extent that such systems present special problems.  Solutions oriented toward AI are needed only if we expect the results of algorithmic bias to be in some dimension worse than current discriminatory outcomes.

In this essay I begin to identify social bias problems that are particular to algorithmic determinations made through AI processing.  Data on the sociology of “algorithmic living” indicates several such differences.  One set of problems relates to the illusion of numerical objectivity: humans tend to lend too much credence to algorithmic outcomes, improperly viewing them as unbiased or neutral.  Consequently, AI outputs tend to be assigned undue weight that would not be accorded to more familiar institutional processes.  A second set of problems relates to the performative nature of algorithmic processes: they tend to produce the effects that they assume, creating their own social facts.  Humans interacting with algorithmic outcomes tend to alter their behavior to conform to the inputs that led to the algorithmic result, creating behavioral feedback loops that become self-fulfilling prophecies.  This is particularly problematic where racial biases are embedded in such processes.  Identifying these problems indicates that currently proposed solutions will be inadequate and points toward a different approach to dealing with racial bias in algorithmic patenting.