New York University Law Review

Statutory Interpretation

Cracking the Whole Code Rule

Anita S. Krishnakumar

Over the past three decades, since the late Justice Scalia joined the Court and ushered in a new era of text-focused statutory analysis, there has been a marked move towards the holistic interpretation of statutes and “making sense of the corpus juris.” In particular, Justices on the modern Supreme Court now regularly compare or analogize between statutes that contain similar words or phrases—what some have called the “whole code rule.” Despite the prevalence of this interpretive practice, however, scholars have paid little attention to how the Court actually engages in whole code comparisons on the ground.

This Article provides the first empirical and doctrinal analysis of how the modern Supreme Court uses whole code comparisons, based on a study of 532 statutory cases decided during the Roberts Court’s first twelve-and-a-half Terms. The Article first catalogues five different forms of whole code comparisons employed by the modern Court and notes that the different forms rest on different justifications, although the Court’s rhetoric has tended to ignore these distinctions. The Article then notes several problems, beyond the unrealistic one-Congress assumption identified by other scholars, that plague the Court’s current approach to most forms of whole code comparisons. For example, most of the Court’s statutory comparisons involve statutes that have no explicit connection to each other, and nearly one-third compare statutes that regulate entirely unrelated subject areas. Moreover, more than a few of the Court’s analogies involve generic statutory phrases—such as “because of” or “any”—whose meaning is likely to depend on context rather than some universal rule of logic or linguistics.

This Article argues that, in the end, the Court’s whole code comparisons amount to judicial drafting presumptions that assign fixed meanings to specific words, phrases, and structural choices. The Article critiques this judicial imposition of drafting conventions on Congress—noting that it is unpredictable, leads to enormous judicial discretion, reflects an unrealistic view of how Congress drafts, and falls far outside the judiciary’s institutional expertise. It concludes by recommending that the Court limit its use of whole code comparisons to situations in which congressional drafting practices, rule of law concerns, or judicial expertise justify the practice—e.g., where Congress itself has made clear that one statute borrowed from or incorporated the provisions of another, or where judicial action is necessary to harmonize two related statutes with each other.

Restoring the Historical Rule of Lenity as a Canon

Shon Hopwood

In criminal law, the venerated rule of lenity has been frequently, if not consistently, invoked as a canon of interpretation. Where criminal statutes are ambiguous, the rule of lenity generally posits that courts should interpret them narrowly, in favor of the defendant. But courts do not apply the rule reliably, and questions remain about its proper scope. In this Article, I examine how the rule of lenity should apply and whether it should be given the status of a canon.

First, I argue that federal courts should apply the historical rule of lenity (also known as the rule of strict construction of penal statutes) that applied prior to the 1970s, when the Supreme Court significantly weakened the rule. The historical rule requires a judge to consult the text, linguistic canons, and the structure of the statute and then, if reasonable doubts remain, interpret the statute in the defendant’s favor. Conceived this way, the historical rule cuts off statutory purpose and legislative history from the analysis, and places a thumb on the scale in favor of interpreting statutory ambiguities narrowly in relation to the severity of the punishment that a statute imposes. As compared to the modern version of the rule of lenity, the historical rule of strict construction better advances democratic accountability, protects individual liberty, furthers the due process principle of fair warning, and aligns with the modified version of textualism practiced by much of the federal judiciary today.

Second, I argue that the historical rule of lenity should be deemed an interpretive canon and given stare decisis effect by all federal courts. If courts consistently applied historical lenity, it would require more clarity from Congress and less guessing from courts, and it would ameliorate some of the worst excesses of the federal criminal justice system, such as overcriminalization and overincarceration.

An Empirical Study of Statutory Interpretation in Tax Law

Jonathan H. Choi

A substantial academic literature considers how agencies should interpret statutes. But few studies have considered how agencies actually do interpret statutes, and none has empirically compared the methodologies of agencies and courts in practice. This Article conducts such a comparison, using a newly created dataset of all Internal Revenue Service (IRS) publications ever released, along with an existing dataset of court decisions. It applies natural language processing, machine learning, and regression analysis to map methodological trends and to test whether particular authorities have developed unique cultures of statutory interpretation. 

It finds that, over time, the IRS has increasingly made rules on normative policy grounds (like fairness and efficiency) rather than merely producing rules based on the “best reading” of the relevant statute (under any interpretive theory, like purposivism or textualism). Moreover, when the IRS does focus on the statute, it has grown much more purposivist over time. In contrast, the Tax Court has not grown more normative and has followed the same trend toward textualism as most other courts. But although the Tax Court has become more broadly textualist, it prioritizes different interpretive tools than other courts, like Chevron deference and holistic-textual canons of interpretation. This suggests that each authority adopts its own flavor of textualism or purposivism. 

These findings complicate the literature on tax exceptionalism and the judicial nature of the Tax Court. They also inform ongoing debates about judicial deference and the future of doctrines like Chevron and Skidmore deference. Most broadly, they provide an empirical counterpoint to the existing theoretical literature on statutory interpretation by agencies.