Silviu Pitis

PhD Student, Artificial Intelligence
University of Toronto
Vector Institute

silviu.pitis@gmail.com
@SilviuPitis

Bio

I am a PhD student with the University of Toronto Machine Learning Group and Vector Institute. My research focuses on the design of goals, rewards and abstractions for intelligent agents.

I completed my master’s in computer science (ML specialization) as part of Georgia Tech’s OMSCS program. Before this, I was a lawyer at Kirkland & Ellis in New York, where I worked on big corporate transactions (e.g., this and this). I also developed some technology for corporate lawyers.

Before becoming a lawyer I was a fairly successful online poker player.

I received my J.D. in 2014 from Harvard Law School, where I was a fellow at the Olin Center for Law, Economics, and Business. My undergrad was in finance and economics at the Schulich School of Business in Toronto.

Teaching

I am co-teaching Intro to Machine Learning (CSC 311) this semester (Fall 2020). The course website and schedule are now posted. If you are in my section, please use the Instructor email csc311-2020-09@cs.toronto.edu or the TA+Instructor email csc311-2020-09-tas@cs.toronto.edu.

I advise a small number of students on research in an informal capacity, sometimes jointly with my advisor Jimmy Ba or labmate Harris Chan. If you are an aspiring researcher and find our work interesting, please fill out our application form or email me directly.

Research

My goal is to understand and create intelligence. In the near term, I'm working toward building reinforcement learning agents that can pursue compositional goals and reason about things from multiple competing perspectives. For a slightly dated but focused selection of my short-term research questions, see here (1 p) or the Aug 2017 copy here (2 pp), or see my recent papers below. I also have a blog, r2rt.com, where I post small or incomplete ideas and tutorials.

If you’re an aspiring researcher and find my work interesting, or are interested in collaborating, please reach out!

Papers

Normative Reward Design

Coming soon!

What properties should the reward function of an autonomous agent satisfy? My paper "Rethinking the Discount Factor in Reinforcement Learning" suggests that there exist reasonable preference structures that the commonly used fixed-discount reinforcement learning (RL) objective cannot represent. This work builds on that result and considers how such preference structures might arise.

Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning

Silviu Pitis*, Harris Chan*, Stephen Zhao, Bradly Stadie, Jimmy Ba. In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML 2020). Vienna, Austria, 2020. Presented at the Adaptive and Learning Agents (ALA) Workshop at AAMAS 2020 (Best Paper). (Arxiv, Talk, Code)

What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks? Our MEGA and OMEGA agents set achievable goals in sparsely explored areas of the goal space to maximize the entropy of the historical achieved goal distribution. This lets them learn to navigate mazes and manipulate blocks with a fraction of the samples used by prior approaches.
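
As a rough sketch of the goal-selection step (the kernel density estimator and all names here are illustrative assumptions, not the released code), the agent can fit a density model to its historical achieved goals and set a low-density achieved goal as its next behavioral goal:

    # Minimal sketch of MEGA-style goal selection: pick a previously achieved
    # goal lying in a low-density region of the achieved-goal distribution.
    # The KDE and all names are illustrative, not the paper's released code.
    import numpy as np
    from sklearn.neighbors import KernelDensity

    def select_low_density_goal(achieved_goals, n_candidates=100, rng=None):
        rng = rng or np.random.default_rng()
        # Density model of the historical achieved-goal distribution.
        kde = KernelDensity(bandwidth=0.1).fit(achieved_goals)
        # Candidates are drawn from goals already achieved, so the selected
        # goal is likely to be achievable again.
        idx = rng.choice(len(achieved_goals),
                         size=min(n_candidates, len(achieved_goals)),
                         replace=False)
        candidates = achieved_goals[idx]
        # Choosing the lowest-density candidate pushes the achieved-goal
        # distribution toward higher entropy over time.
        return candidates[np.argmin(kde.score_samples(candidates))]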

Counterfactual Data Augmentation using Locally Factored Dynamics

Silviu Pitis, Elliot Creager, Animesh Garg. Preprint. Presented at the Object-Oriented Learning (OOL) Workshop at ICML 2020 (Outstanding Paper). (Arxiv, Talk, Code, OOL Workshop)

In this paper, we propose a local causal model (LCM) framework that captures the benefits of decomposition in settings where the global causal model is densely connected. We use the framework to design a local Counterfactual Data Augmentation (CoDA) algorithm that expands available training data with counterfactual samples by stitching together locally independent subsamples from the environment. Empirically, we show that CoDA can more than double the sample efficiency and final performance of reinforcement learning agents in locally factored environments.
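
A toy sketch of the stitching operation, assuming the partition of state dimensions into locally independent components is given (in general it must be inferred; all names here are illustrative):

    # Toy CoDA-style swap: if a subset of state dimensions evolves
    # independently of the rest (and of the action) in both observed
    # transitions, those dimensions can be swapped across transitions to
    # form a new, counterfactual transition.
    import numpy as np

    def coda_swap(t1, t2, independent_dims):
        """t1, t2: transitions as (state, action, next_state) arrays.
        independent_dims: indices of locally independent state dims."""
        (s1, a1, ns1), (s2, a2, ns2) = t1, t2
        s_new, ns_new = s1.copy(), ns1.copy()
        # Stitch the independent subprocess of transition 2 into transition 1.
        s_new[independent_dims] = s2[independent_dims]
        ns_new[independent_dims] = ns2[independent_dims]
        return s_new, a1, ns_new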

An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality

Silviu Pitis*, Harris Chan*, Kiarash Jamali, Jimmy Ba. In Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020). Addis Ababa, Ethiopia, 2020. (Arxiv, OpenReview, Talk, Code)

We propose novel neural network architectures, guaranteed to satisfy the triangle inequality, for purposes of (asymmetric) metric learning and modeling graph distances.
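
A minimal illustration of the underlying principle (not the paper's Deep Norm or Wide Norm architectures): any distance of the form d(x, y) = N(f(x) − f(y)), with f an arbitrary learned embedding and N a (semi)norm, inherits the triangle inequality from N. The module below is an illustrative sketch using a learned sum of Mahalanobis-style seminorms:

    # Illustrative sketch only: a learned embedding followed by a sum of
    # Mahalanobis-style seminorms. A sum of seminorms is itself a seminorm,
    # so d(x, z) <= d(x, y) + d(y, z) holds by construction.
    import torch
    import torch.nn as nn

    class TriangleDistance(nn.Module):
        def __init__(self, in_dim, emb_dim, n_components=4):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                   nn.Linear(emb_dim, emb_dim))
            self.W = nn.Parameter(torch.randn(n_components, emb_dim, emb_dim))

        def forward(self, x, y):
            diff = self.f(x) - self.f(y)                      # (batch, emb)
            proj = torch.einsum('kij,bj->bki', self.W, diff)  # (batch, k, emb)
            return proj.norm(dim=-1).sum(dim=-1)              # (batch,)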

Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes

Silviu Pitis, Michael R. Zhang. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems 2020. Auckland, New Zealand, 2020. (Arxiv, Talk, Code)

How should one combine noisy information from diverse sources to make an inference about an objective ground truth? Past studies typically assume that noisy votes are independent and identically distributed (i.i.d.), but this assumption is often unrealistic. Instead, we assume that votes are independent but not necessarily identically distributed, and that our ensembling algorithm has access to certain auxiliary information related to the underlying model governing the noise in each vote. In this paper, we propose a multi-armed bandit noise model with count-based auxiliary information, and derive maximum likelihood aggregation rules for ranked and cardinal votes under this noise model. We find that our rules successfully use the auxiliary information to outperform naive baselines.
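
As a toy illustration of how count-based auxiliary information can enter an aggregation rule (a simplified Gaussian stand-in, not the paper's bandit noise model; all names are assumptions):

    # Toy count-weighted aggregation of cardinal votes. If voter i's scores
    # are means of n_i i.i.d. noisy samples of the true utilities, the
    # per-voter variance scales as 1/n_i and the maximum likelihood
    # aggregate is the count-weighted mean. This Gaussian setup is a
    # simplification, not the paper's multi-armed bandit noise model.
    import numpy as np

    def aggregate_cardinal_votes(scores, counts):
        """scores: (n_voters, n_alternatives); counts: (n_voters,)."""
        weights = counts / counts.sum()
        return weights @ scores    # (n_alternatives,) aggregated scores

    scores = np.array([[0.9, 0.2], [0.4, 0.6]])
    counts = np.array([100, 5])    # first voter is far better informed
    print(aggregate_cardinal_votes(scores, counts))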

Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning

Kristopher De Asis, Alan Chan, Silviu Pitis, Richard S. Sutton, Daniel Graves. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). New York, USA, 2020. (Arxiv, Talk)

We explore fixed-horizon temporal difference (TD) methods, reinforcement learning algorithms for a new kind of value function that predicts the sum of rewards over a fixed number of future time steps. To learn the value function for horizon h, these algorithms bootstrap from the value function for horizon h−1, or some shorter horizon. Because no value function bootstraps from itself, fixed-horizon methods are immune to the stability problems that plague other off-policy TD methods using function approximation.
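
A tabular sketch of the resulting update (names are illustrative):

    # Fixed-horizon TD(0): the horizon-h value bootstraps from the
    # horizon-(h-1) value at the next state; no value bootstraps from itself.
    import numpy as np

    def fixed_horizon_td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
        """V: (H+1, n_states) array; V[h, s] predicts the discounted sum of
        rewards over the next h steps, with V[0] identically zero."""
        H = V.shape[0] - 1
        for h in range(1, H + 1):
            target = r + gamma * V[h - 1, s_next]   # bootstrap one horizon down
            V[h, s] += alpha * (target - V[h, s])
        return V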

ProtoGE: Prototype Goal Encodings for Multi-goal Reinforcement Learning

Silviu Pitis*, Harris Chan*, Jimmy Ba. In Proceedings of the 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2019). Montreal, Canada, 2019. (Paper)

Humans can often accomplish specific goals more readily than general ones. Although more specific goals are, by definition, more challenging to accomplish than more general goals, evidence from the management and educational sciences supports the idea that “specific, challenging goals lead to higher performance than easy goals”. We find evidence of this same effect for reinforcement learning (RL) agents in multi-goal environments. Our work establishes a new state-of-the-art in standard multi-goal MuJoCo environments and suggests several novel research directions.

Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach

Silviu Pitis. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). Honolulu, USA, 2019. (Paper, Slides, Poster)

Can all “rational” preference structures be represented using the standard RL model (the MDP)? This paper presents a minimal axiomatic framework for rationality in sequential decision making and shows that the implied cardinal utility function is of a more general form than the discounted additive utility function of an MDP. In particular, the developed framework allows for a state-action dependent “discount” factor that is not constrained to be less than 1 (so long as there is eventual long run discounting).
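
In symbols, the generalized recursion looks roughly as follows (a sketch of the idea; the notation is not taken from the paper):

    % Standard MDP recursion with a fixed discount factor:
    %   V^\pi(s) = \mathbb{E}[\, r(s,a) + \gamma V^\pi(s') \,]
    % Generalized recursion with a state-action dependent "discount":
    V^{\pi}(s) = \mathbb{E}_{a \sim \pi,\; s' \sim P(\cdot \mid s,a)}
        \left[ r(s,a) + \gamma(s,a)\, V^{\pi}(s') \right],
    \qquad \gamma(s,a) \ge 0 \ \text{and not necessarily} < 1.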

Challenging the MDP Status Quo: An Axiomatic Approach to Rationality for Reinforcement Learning Agents

Silviu Pitis. Workshop Paper. The 1st Workshop on Goal Specifications for Reinforcement Learning, FAIM 2018. Stockholm, Sweden, 2018. (Paper, Poster)

This is the workshop version of the above full paper.

Source Traces for Temporal Difference Learning

Silviu Pitis. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). New Orleans, USA, 2018. (Paper, Slides)

This paper develops source traces for reinforcement learning agents. Source traces provide agents with a causal model, and are related to both eligibility traces and the successor representation. They allow agents to propagate surprises (temporal differences) to known potential causes, which speeds up learning. One of the interesting things about source traces is that they are time-scale invariant, and could potentially be used to provide interpretable answers to questions of causality, such as “What is likely to cause X?”

I am currently working on extending this idea to continuous control with deep neural networks.
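
A tabular sketch of a source-trace style update (illustrative only, not the paper's exact algorithm or notation):

    # S[x, s] estimates the (discounted) expected number of visits to state s
    # starting from state x; a TD error observed at s is then propagated to
    # every potential cause x in proportion to S[x, s].
    import numpy as np

    def source_trace_update(V, S, s, r, s_next, alpha=0.1, gamma=0.99):
        delta = r + gamma * V[s_next] - V[s]   # TD error ("surprise") at s
        V += alpha * S[:, s] * delta           # credit known potential causes
        return V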

Reasoning for Reinforcement Learning

Silviu Pitis. Workshop Paper. NIPS Hierarchical Reinforcement Learning Workshop. Long Beach, USA, 2017. (Paper, Poster)

This is a short abstract about some ideas I’m currently working on that connect implicit understanding (value functions) and explicit reasoning in the context of reinforcement learning. The idea is to create an architecture that is capable of integrating and simultaneously reasoning over multiple representations at different levels of abstraction.

Methods for Retrieving Alternative Contract Language Using a Prototype

Silviu Pitis. In Proceedings of ICAIL ‘17: Sixteenth International Conference on Artificial Intelligence and Law. London, UK, 2017 (Best Student Paper). (Paper, Slides)

This paper presents a search engine that finds similar language to a given query (the prototype) in a database of contracts. Results are clustered so as to maximize both coverage and diversity. This is useful for contract drafting and negotiation, administrative tasks and legal research.

An Alternative Arithmetic for Word Vector Analogies

Silviu Pitis. June 7, 2016. (Paper)

This paper looks at word vector arithmetic of the type “king - man + woman = queen” and investigates treating the relationships between word vectors as rotations of the embedding space instead of as vector differences. This was a one week project of little practical significance, but with the advent of latent vector arithmetic (e.g., for GANs), it may be worth revisiting.
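
The two views can be contrasted with a small sketch (purely illustrative; the rotation here is recovered by orthogonal Procrustes, which is one way, not necessarily the paper's, to fit a rotation from example pairs):

    # "king - man + woman ≈ queen" two ways: the standard vector-offset
    # arithmetic vs. treating the man->woman relationship as a rotation of
    # the embedding space, fit from a few example pairs.
    import numpy as np

    def offset_analogy(vecs, a, b, c):
        """Standard arithmetic: answer ≈ vecs[b] - vecs[a] + vecs[c]."""
        return vecs[b] - vecs[a] + vecs[c]

    def rotation_analogy(vecs, pairs, c):
        """Fit an orthogonal map R sending each x to its paired y
        (orthogonal Procrustes), then apply it to c."""
        X = np.stack([vecs[x] for x, _ in pairs])
        Y = np.stack([vecs[y] for _, y in pairs])
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return vecs[c] @ (U @ Vt)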

Punitive Damages in International Trade

Silviu Pitis. April 22, 2014. (Paper)

What should the structure of rights, remedies, and enforcement look like in an efficient international trade agreement? In particular, do punitive damages have a place?

Designing Optimal Takeover Defenses

Silviu Pitis. May 22, 2013. (Paper)

This paper analyzes the economic value of corporate takeover defenses, and argues for designing intermediate takeover defenses that balance (1) the interest of shareholders in management’s exploitation of insider information and (2) the entrenchment interest of management.

Examining Expected Utility Theory from Descriptive and Prescriptive Perspectives

Silviu Pitis. January 2, 2010. (Paper)

This paper examines the history and validity of Expected Utility theory, with a focus on its failures as a descriptive model of human decisions. It argues that the descriptive failures of Expected Utility lead to its incorrect use as a prescriptive model, and provides a few brief examples of how one might properly construct a utility function.

Legal Technology

I’ve been interested in legal technology since my first law firm experience in the summer of 2013: the tech was underwhelming, and I spent hours on tasks that the right program could have done in seconds. This prompted me to write this paper (~6000 words) on potential tools for corporate lawyers and to take my first formal programming class as an elective in my final year of law school.

During my time at Kirkland, I wrote a number of useful programs, which are summarized here (1 page) along with some other ideas I think would be useful. Here is some unsolicited praise for my legal software.

If it’s something you’re interested in, there is a decent opportunity for commercialization in this sector, and I may be open to discussing it. I haven’t pursued it due to my other interests (ironically, it was my desire to build smarter legal programs that led me to teach myself about machine learning and pursue my current, non-law-related research).

Random

Connect

silviu.pitis@gmail.com

spitis