Since April 2016 I’ve been teaching myself machine learning and artificial intelligence and working on my own research full time. I’m currently living in Boston and recently finished my master’s in computer science through Georgia Tech’s OMSCS program. In Fall 2018, I will be starting my PhD with the University of Toronto ML Group and the Vector Institute for Artificial Intelligence under the supervision of Professor Jimmy Ba.
Before becoming a lawyer I was a fairly successful online poker player.
I’m interested in understanding and creating intelligence. For a focused selection of my short-term research questions, see here (1 page) or the August 2017 copy here (2 pages). Or see my recent papers. I also have an academic blog, r2rt.com, where I post small or incomplete ideas and tutorials.
Most recently, I’ve been working on enhancing reinforcement learning agents with source traces, a time-scale-invariant model of probabilistic causation. Separately, I’ve been working toward developing an interpretable mechanism for reasoning over multiple representations. I see both avenues as critical for the development of artificial general intelligence, as any agent of general intelligence will need to answer the question “Why?”. This demands that the agent reason about causation; for humans, this typically involves generating multiple competing explanations, comparing their relative merits, and producing a “best” explanation along with a statement about its plausibility.
This paper develops source traces for reinforcement learning agents. Source traces provide agents with a causal model, and are related to both eligibility traces and the successor representation. They allow agents to propagate surprises (temporal differences) to known potential causes, which speeds up learning. Notably, source traces are time-scale invariant, and could potentially be used to provide interpretable answers to questions of causality, such as “What is likely to cause X?”
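To give a feel for the mechanism, here is a minimal, hypothetical sketch (not the paper’s actual algorithm) of a tabular value learner that propagates TD errors to potential causes. It uses the successor representation M, whose column M[:, s] acts as a source trace for state s: a measure of how strongly each state tends to precede s. All states, rewards, and step sizes are made up for illustration.

```python
import numpy as np

n_states = 5
gamma, alpha, beta = 0.9, 0.1, 0.1  # discount; value and trace step sizes

V = np.zeros(n_states)
# M[x, y] ~ expected discounted future visits to y starting from x
# (the successor representation). Column M[:, s] serves as a source
# trace for s: how strongly each state x tends to precede s.
M = np.eye(n_states)

def step(s, r, s_next, done=False):
    """Learn from one transition (s, r, s_next)."""
    target = r if done else r + gamma * V[s_next]
    delta = target - V[s]                 # TD error ("surprise") at s
    V[:] += alpha * M[:, s] * delta       # credit all potential causes of s
    # TD-style update of the successor representation itself:
    # states that reach s also (discountedly) reach s_next.
    onehot = np.zeros(n_states)
    onehot[s] = 1.0
    M[s] += beta * (onehot + (0.0 if done else gamma * M[s_next]) - M[s])
```

On a simple chain 0 → 1 → 2 with a terminal reward at state 2, a surprise at the goal immediately nudges the values of earlier states in proportion to their trace weights, rather than waiting for the error to trickle back one step per episode as in plain TD(0).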
This is a short abstract about some ideas I’m currently working on that connect implicit understanding (value functions) and explicit reasoning in the context of reinforcement learning. The idea is to create an architecture that is capable of integrating and simultaneously reasoning over multiple representations at different levels of abstraction.
This paper presents a search engine that finds similar language to a given query (the prototype) in a database of contracts. Results are clustered so as to maximize both coverage and diversity. This is useful for contract drafting and negotiation, administrative tasks, and legal research.
This paper looks at word vector arithmetic of the type “king - man + woman = queen” and investigates treating the relationships between word vectors as rotations of the embedding space instead of as vector differences. This was a one week project of little practical significance, but with the advent of latent vector arithmetic (e.g., for GANs), it may be worth revisiting.
What should the structure of rights, remedies, and enforcement look like in an efficient international trade agreement? In particular, do punitive damages have a place?
This paper analyzes the economic value of corporate takeover defenses, and argues for designing intermediate takeover defenses that balance (1) the interest of shareholders in management’s exploitation of insider information and (2) the entrenchment interest of management.
This paper examines the history and validity of Expected Utility theory, with a focus on its failures as a descriptive model of human decision making. It argues that these descriptive failures lead to Expected Utility’s incorrect usage as a prescriptive model, and provides a few brief examples of how one might properly construct a utility function.
I’ve been interested in legal technology since my first law firm experience in the summer of 2013: the tech was underwhelming, and I spent hours on tasks that would take the right program seconds. This prompted me to write this paper (~6000 words) on potential tools for corporate lawyers and take my first formal programming class as an elective in my final year of law school.
During my time at Kirkland, I wrote a number of useful programs, which are summarized here (1 page) along with some other ideas I think would be useful. Here is some unsolicited praise for my legal software:
If this is something you’re interested in, there is a decent opportunity for commercialization in this sector, and I may be open to discussing it. I haven’t pursued it due to my other interests (ironically, it was my desire to build smarter legal programs that led me to teach myself about machine learning and pursue my current non-law research).
I keep an academic ML/AI blog at r2rt.com.
I used to keep an economics blog.
If you’re a hedge fund manager you may be interested in this triple tax arbitrage scheme I came up with.
Georgia Tech’s ML class has four fairly involved projects. If you’re interested in mine for whatever reason (all scored 100/100): Supervised Learning, Randomized Optimization, Unsupervised Learning, Markov Decision Processes.