Bending the Law

Gregory Leibon, Michael A. Livermore, Reed H. Harder, Allen B. Riddell, and Daniel N. Rockmore
Legal reasoning requires identification, through search, of authoritative legal texts (such as statutes, constitutions, or prior judicial decisions) that apply to a given legal question. In this paper we model law search as an organizing principle in the evolution of the corpus of legal texts, and apply that model to U.S. Supreme Court opinions. We examine the underlying navigable geometric and topological structure of the Supreme Court opinion corpus (the "opinion landscape…
Computationally Assisted Regulatory Participation
With the increased politicization of agency rulemaking and the reduced cost of participating in the notice-and-comment rulemaking process, administrative agencies have, in recent years, found
A Collective Failure of Nerve
William Simon's searching analysis of the Kaye Scholer affair, and especially of the legal profession's response to the Office of Thrift Supervision's (OTS's) proceedings against Kaye Scholer, throws
Learning Policy Levers: Toward Automated Policy Classification Using Judicial Corpora
A semi-supervised multi-class learning model is proposed and implemented, trained on a hand-coded dataset of thousands of cases spanning more than 20 politically salient policy topics; it classifies labeled cases by topic correctly 91% of the time.


The authority of Supreme Court precedent
It is shown that reversed cases tend to be much more important than other decisions, and that the cases overruling them quickly become, and remain, even more important as the reversed decisions decline in importance.
Law’s Algorithm
An historical, theoretical and practical perspective on law as an information technology is offered and it is shown that legal search translates the uncompressed form of legal information into an algorithm for predicting what the law will be in a particular situation.
Since the early 1960s, computerized legal research technology has enabled judges and their law clerks to access legal information quickly and comprehensively. Particularly for appellate judges, who
A topic model approach to studying agenda formation for the U.S. Supreme Court
The study of agenda formation in the U.S. Supreme Court is one of the longest-standing topics in empirical legal studies. This paper exploits a relatively new approach to quantitative text analysis - topic
Law as a seamless web?: comparison of various network representations of the United States Supreme Court corpus (1791-2005)
In this paper, we compare several network representations of the corpus of United States Supreme Court decisions (1791--2005). This corpus is not only of seminal importance, but also represents a
Network Analysis and the Law: Measuring the Legal Importance of Supreme Court Precedents
We construct the complete network of 28,951 majority opinions written by the U.S. Supreme Court and the cases they cite from 1792 to 2005. We illustrate some basic properties of this network and then
Legal Research and Legal Concepts: Where Form Molds Substance
When Christopher Columbus Langdell stated that the library was the laboratory of the law and that law books were the "stuff" of legal research he was stating a proposition that was not only
The Fable of the Codes: The Efficiency of the Common Law, Legal Origins & Codification Movements
The superior efficiency of the common law has long been a staple of the law and economics literature. Generalizing from this claim, the legal origins literature uses cross-country empirical research
SMART Electronic Legal Discovery Via Topic Modeling
This paper considers representing documents in a topic space using well-known topic models such as latent Dirichlet allocation and latent semantic indexing, and solving the information retrieval problem by finding document similarities in the topic space rather than in the corpus vocabulary space.
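The retrieval idea in this abstract, comparing documents by their inferred topic proportions instead of raw word counts, can be sketched as follows. The topic vectors here are made-up illustrations, not output of any model from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical per-document proportions over three latent topics.
doc_a = [0.8, 0.1, 0.1]   # mostly topic 0
doc_b = [0.7, 0.2, 0.1]   # also mostly topic 0
doc_c = [0.1, 0.1, 0.8]   # mostly topic 2

# In topic space, doc_a is closer to doc_b than to doc_c, even if the
# two share few surface words.
sim_ab = cosine(doc_a, doc_b)
sim_ac = cosine(doc_a, doc_c)
```

In practice the proportions would come from a fitted topic model (e.g. LDA) rather than being set by hand; the similarity computation is unchanged.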
The Anatomy of a Large-Scale Hypertextual Web Search Engine
This paper provides an in-depth description of Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext and looks at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
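The link-structure idea described here underlies PageRank, the ranking method associated with this paper. A minimal power-iteration sketch on a toy link graph (the graph and damping factor are illustrative assumptions):

```python
def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Teleportation term shared by every node.
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # Dangling node: spread its rank uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Toy graph: A cites B and C, B cites C, C cites A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Here node C, which is pointed to by both other nodes, ends up with the highest rank; the same computation applied to a citation network yields the precedent-importance scores discussed in the network-analysis entries above.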