PhD Student, Machine Learning
University of Toronto
Vector Institute
I am a PhD student in the University of Toronto Machine Learning Group and the Vector Institute, working with Jimmy Ba. My research focuses on the design of goals, rewards, and abstractions for intelligent agents.
I completed my master’s in computer science (ML specialization) as part of Georgia Tech’s OMSCS program. Before this, I was a lawyer at Kirkland & Ellis in New York, where I worked on big corporate transactions (e.g., this and this). I also developed some technology for corporate lawyers.
Before becoming a lawyer I was a fairly successful online poker player.
I received my J.D. in 2014 from Harvard Law School, where I was a fellow at the Olin Center for Law, Economics, and Business. My undergrad was in finance and economics at the Schulich School of Business in Toronto.
I am teaching Introduction to Machine Learning (CSC 311) this semester (Fall 2020). If you are emailing me about the course, please use the Instructor email csc311-2020-09@cs.toronto.edu.
I advise a small number of students on research in an informal capacity, sometimes jointly with my advisor Jimmy Ba or labmate Harris Chan. If you are an aspiring researcher and find our work interesting, please fill out our application form or email me directly.
My goal is to understand and create intelligence. I’m currently working toward (1) a normatively justified framework for goal representation, and (2) reinforcement learning agents that can pursue compositional goals and reason about things from multiple competing perspectives. For summaries of my current and historical research interests, you’re welcome to browse the agendas below:
Or check out my papers below. If we share research interests or you have an idea you’d like to collaborate on, I’d be excited to talk to you!
What properties should the reward function of an autonomous agent satisfy? My paper “Rethinking the Discount Factor in Reinforcement Learning” suggests there exist reasonable preference structures that the commonly used fixed discount reinforcement learning (RL) objective cannot represent. We build on this work and consider how such preference structures might arise.
In this paper we proposed a local causal model (LCM) framework that captures the benefits of decomposition in settings where the global causal model is densely connected. We used our framework to design a local Counterfactual Data Augmentation (CoDA) algorithm that expands available training data with counterfactual samples by stitching together locally independent subsamples from the environment. Empirically, we showed that CoDA can more than double the sample efficiency and final performance of reinforcement learning agents in locally factored environments.
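As a toy illustration of the stitching step (not the full algorithm, which infers local independence from a learned model), here is a hypothetical two-component state where the mask specifying the factorization is given:

```python
import numpy as np

# A minimal sketch of CoDA-style counterfactual stitching, assuming the
# local factorization is already known; the two-component split is made up.

def stitch(trans_a, trans_b, mask):
    """Combine component `mask` of trans_a with the complementary
    component of trans_b to form a counterfactual (state, next_state)."""
    (s_a, ns_a), (s_b, ns_b) = trans_a, trans_b
    s = np.where(mask, s_a, s_b)
    ns = np.where(mask, ns_a, ns_b)
    return s, ns

# Two real transitions in a toy environment where dims 0-1 and dims 2-3
# evolve independently of one another (a locally factored setting).
t1 = (np.array([0., 0., 5., 5.]), np.array([1., 1., 5., 5.]))
t2 = (np.array([9., 9., 2., 2.]), np.array([9., 9., 3., 3.]))
mask = np.array([True, True, False, False])  # dims owned by component 1

print(stitch(t1, t2, mask))
# (array([0., 0., 2., 2.]), array([1., 1., 3., 3.])) -- never observed
```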
What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks? Our MEGA and OMEGA agents set achievable goals in sparsely explored areas of the goal space to maximize the entropy of the historical achieved goal distribution. This lets them learn to navigate mazes and manipulate blocks with a fraction of the samples used by prior approaches.
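In simplified form, the goal-selection principle looks like this (a kernel density estimate stands in for the agent’s learned density model, and the achievability filter is omitted):

```python
import numpy as np
from scipy.stats import gaussian_kde

# A simplified sketch of entropy-maximizing goal selection: fit a
# density to previously achieved goals, then propose the achieved goal
# of lowest density, i.e. one in a sparsely explored region of goal
# space. The real agents also filter goals for achievability.

rng = np.random.default_rng(0)
achieved_goals = rng.normal(size=(500, 2))      # historical achieved goals

kde = gaussian_kde(achieved_goals.T)            # density over goal space
density = kde(achieved_goals.T)                 # density at each past goal
next_goal = achieved_goals[np.argmin(density)]  # rarest achieved goal
print("next training goal:", next_goal)
```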
We propose novel neural network architectures, guaranteed to satisfy the triangle inequality, for purposes of (asymmetric) metric learning and modeling graph distances.
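The core guarantee is short enough to sketch: composing any embedding with a (possibly asymmetric) sublinear function yields a distance that satisfies the triangle inequality by subadditivity alone. The shapes and parameterization below are illustrative stand-ins, not the exact architectures from the paper:

```python
import numpy as np

# A sketch of the key idea: for ANY embedding f and any sublinear N,
#   d(x, y) = N(f(x) - f(y))
# satisfies the triangle inequality by subadditivity of N. The specific
# f and N below are illustrative stand-ins.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))   # stand-in "learned" embedding weights
A = rng.normal(size=(4, 8))   # defines an asymmetric sublinear N

def f(x):
    return np.tanh(W @ x)     # any function of x preserves the guarantee

def N(z):
    # Max of linear maps clipped at 0: positively homogeneous and
    # subadditive, hence a valid asymmetric "norm-like" function.
    return np.max(np.maximum(A @ z, 0.0))

def d(x, y):
    return N(f(x) - f(y))

x, y, z = rng.normal(size=(3, 3))
assert d(x, z) <= d(x, y) + d(y, z) + 1e-9  # triangle inequality holds
print(d(x, y), d(y, x))                     # generally unequal (asymmetric)
```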
How should one combine noisy information from diverse sources to make an inference about an objective ground truth? Past studies typically assume that noisy votes are identically and independently distributed (i.i.d.), but this assumption is often unrealistic. Instead, we assume that votes are independent but not necessarily identically distributed and that our ensembling algorithm has access to certain auxiliary information related to the underlying model governing the noise in each vote. In this paper we propose a multi-arm bandit noise model and count-based auxiliary information and derive maximum likelihood aggregation rules for ranked and cardinal votes under our noise model. We find that our rules successfully use auxiliary information to outperform the naive baselines.
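To illustrate the underlying principle with a noise model simpler than the paper’s: if binary votes are independent and voter i is correct with known probability p_i (which auxiliary information might estimate), the maximum likelihood aggregate is a log-odds weighted majority vote:

```python
import numpy as np

# An illustration with a simpler noise model than the paper's: binary
# votes, independent but non-identically distributed, with voter i
# correct with probability p_i. The MLE aggregate is a weighted
# majority vote with log-odds weights; auxiliary information enters
# through the (here made-up) estimates p_i.

def mle_aggregate(votes, p):
    """votes: array of +/-1; p: per-voter correctness probabilities."""
    weights = np.log(p / (1 - p))         # log-odds weight per voter
    return int(np.sign(weights @ votes))  # MLE of the binary ground truth

votes = np.array([+1, -1, +1, -1, -1])
p = np.array([0.9, 0.55, 0.6, 0.52, 0.51])
print(mle_aggregate(votes, p))  # +1: the single accurate voter dominates
```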
We explore fixed-horizon temporal difference (TD) methods, reinforcement learning algorithms for a new kind of value function that predicts the sum of rewards over a fixed number of future time steps. To learn the value function for horizon h, these algorithms bootstrap from the value function for horizon h−1, or some shorter horizon. Because no value function bootstraps from itself, fixed-horizon methods are immune to the stability problems that plague other off-policy TD methods using function approximation.
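A minimal tabular sketch of the update, with placeholder environment sizes:

```python
import numpy as np

# A minimal tabular sketch of fixed-horizon TD: each horizon-h value
# function bootstraps only from the horizon-(h-1) one, so no value
# function ever bootstraps from itself. Sizes below are placeholders.

n_states, H, alpha, gamma = 10, 5, 0.1, 0.99
V = np.zeros((H + 1, n_states))  # V[0] is identically zero

def fhtd_update(s, r, s_next):
    for h in range(1, H + 1):
        target = r + gamma * V[h - 1, s_next]  # bootstrap one horizon down
        V[h, s] += alpha * (target - V[h, s])

fhtd_update(s=3, r=1.0, s_next=4)  # after observing one transition
print(V[:, 3])
```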
Humans can often accomplish specific goals more readily than general ones, even though more specific goals are, by definition, more difficult to satisfy. Evidence from the management and educational sciences supports the idea that “specific, challenging goals lead to higher performance than easy goals”. We find evidence of this same effect for reinforcement learning (RL) agents in multi-goal environments. Our work establishes a new state-of-the-art in standard multi-goal MuJoCo environments and suggests several novel research directions.
Can all “rational” preference structures be represented using the standard RL model (the MDP)? This paper presents a minimal axiomatic framework for rationality in sequential decision making and shows that the implied cardinal utility function is of a more general form than the discounted additive utility function of an MDP. In particular, the developed framework allows for a state-action dependent “discount” factor that is not constrained to be less than 1 (so long as there is eventual long-run discounting).
This is the workshop version of the above full paper.
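To make the implied utility form concrete: rather than weighting the reward at time t by a fixed gamma^t, the framework weights it by a running product of state-action dependent factors, which may transiently exceed 1. A toy computation with made-up numbers:

```python
import numpy as np

# A toy computation of the more general utility form: the weight on the
# reward at step t is the product of per-step, state-action dependent
# factors gamma(s_k, a_k) for k < t. All numbers here are made up; an
# individual factor may exceed 1 provided the products eventually shrink.

rewards = np.array([1.0, 1.0, 1.0, 1.0])
gammas = np.array([1.05, 0.9, 0.8, 0.7])  # gamma(s_t, a_t) at each step

weights = np.concatenate(([1.0], np.cumprod(gammas[:-1])))
print(float(weights @ rewards))  # 1 + 1.05 + 0.945 + 0.756 = 3.751
```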
This paper develops source traces for reinforcement learning agents. Source traces provide agents with a causal model, and are related to both eligibility traces and the successor representation. They allow agents to propagate surprises (temporal differences) to known potential causes, which speeds up learning. One of the interesting things about source traces is that they are time-scale invariant, and could potentially be used to provide interpretable answers to questions of causality, such as “What is likely to cause X?”
I am currently working on extending this idea to continuous control with deep neural networks.
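In tabular form, the propagation step might look like the following (the source matrix here is computed from a known toy transition model, whereas in practice it would be learned incrementally):

```python
import numpy as np

# An illustrative tabular sketch: a source matrix akin to the successor
# representation, M[x, s] ~ expected discounted visits to s from x,
# computed here from a known toy transition model (learned in practice).

n, alpha, gamma = 5, 0.1, 0.9
P = np.full((n, n), 1.0 / n)              # toy transition probabilities
M = np.linalg.inv(np.eye(n) - gamma * P)  # discounted visitation matrix
V = np.zeros(n)

def source_trace_update(s, delta):
    """Propagate a surprise (TD error) at s to all potential causes x,
    in proportion to how often x leads to s."""
    V[:] += alpha * M[:, s] * delta

source_trace_update(s=2, delta=1.0)
print(V)
```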
This is a short abstract about some ideas I’m currently working on that connect implicit understanding (value functions) and explicit reasoning in the context of reinforcement learning. The idea is to create an architecture that is capable of integrating and simultaneously reasoning over multiple representations at different levels of abstraction.
This paper presents a search engine that finds similar language to a given query (the prototype) in a database of contracts. Results are clustered so as to maximize both coverage and diversity. This is useful for contract drafting and negotiation, administrative tasks and legal research.
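As a conceptual sketch (not the actual system), the retrieve-then-diversify idea can be expressed as an MMR-style greedy selection over TF-IDF similarities, with invented example clauses:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A conceptual sketch, not the actual system: rank contract clauses by
# similarity to a prototype query, then greedily select results that
# are relevant yet dissimilar to those already chosen (an MMR-style
# coverage/diversity tradeoff). Clauses below are invented examples.

clauses = [
    "Seller shall indemnify Buyer against all losses.",
    "Seller will indemnify and hold harmless Buyer for any losses.",
    "This Agreement is governed by the laws of New York.",
    "Buyer shall indemnify Seller against third-party claims.",
]
prototype = ["Indemnification of Buyer by Seller for losses."]

vec = TfidfVectorizer().fit(clauses + prototype)
X, q = vec.transform(clauses), vec.transform(prototype)
rel = cosine_similarity(X, q).ravel()  # relevance to the prototype
sim = cosine_similarity(X)             # clause-to-clause similarity

lam, selected = 0.7, []
for _ in range(3):
    scores = [lam * rel[i]
              - (1 - lam) * max((sim[i][j] for j in selected), default=0.0)
              for i in range(len(clauses))]
    for j in selected:
        scores[j] = float("-inf")      # never repeat a result
    selected.append(max(range(len(clauses)), key=scores.__getitem__))

print([clauses[i] for i in selected])
```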
This paper looks at word vector arithmetic of the type “king - man + woman = queen” and investigates treating the relationships between word vectors as rotations of the embedding space instead of as vector differences. This was a one week project of little practical significance, but with the advent of latent vector arithmetic (e.g., for GANs), it may be worth revisiting.
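For concreteness, fitting a relation as a rotation of the embedding space can be done with orthogonal Procrustes; the low-dimensional “word vectors” below are synthetic:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# A sketch of the rotation view: instead of modeling a relation as a
# vector offset (queen ~ king - man + woman), fit an orthogonal map R
# that rotates source words onto target words. Vectors are synthetic.

rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))          # e.g. [man, king, uncle, actor]
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random "true" rotation
tgt = src @ Q                          # e.g. [woman, queen, aunt, actress]

R, _ = orthogonal_procrustes(src, tgt)  # best orthogonal map src -> tgt
print(np.allclose(src @ R, tgt))        # True: the relation is recovered
```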
What should the structure of rights, remedies, and enforcement look like in an efficient international trade agreement? In particular, do punitive damages have a place?
This paper analyzes the economic value of corporate takeover defenses, and argues for designing intermediate takeover defenses that balance (1) the interest of shareholders in management’s exploitation of insider information and (2) the entrenchment interest of management.
This paper examines the history and validity of Expected Utility theory, with a focus on its failures as a descriptive model of human decisions. It argues that the descriptive failures of Expected Utility have led to its incorrect use as a prescriptive model, and provides a few brief examples of how one might properly construct a utility function.
I’ve been interested in legal technology since my first law firm experience in the summer of 2013: the tech was underwhelming, and I spent hours on tasks that would take the right program seconds. This prompted me to write this paper (~6000 words) on potential tools for corporate lawyers and take my first formal programming class as an elective in my final year of law school.
During my time at Kirkland, I wrote a number of useful programs, which are summarized here (1 page) along with some other ideas I think would be useful. Here is some unsolicited praise for my legal software:
If this is something you’re interested in, there is a decent opportunity for commercialization in this sector, and I may be open to discussing it. I haven’t pursued it due to my other interests (ironically, it was my desire to build smarter legal programs that led me to teach myself machine learning and pursue my current non-law research).
I keep an academic ML/AI blog at r2rt.com.
I used to keep an economics blog.
If you’re a hedge fund manager you may be interested in this triple tax arbitrage scheme I came up with.