Sean Sinclair



I am a fourth year PhD student in Operations Research and Information Engineering at Cornell University, co-advised by Christina Lee Yu and Siddhartha Banerjee. I completed a BSc in Mathematics and Computer Science at McGill University, where I worked on a project with Tony Humphries. Before returning to graduate school I spent two and a half years teaching mathematics, science, and English in a small community in rural Ghana with the Peace Corps, and afterwards worked at National Life as a financial analyst.

In general, I am interested in machine learning, statistics, and differential equations. My current work integrates techniques from reinforcement learning and model-predictive control with models arising in operations research. Two issues arise when attempting to apply RL to OR problems. First, many RL algorithms ignore prior knowledge and learn from scratch (as is in vogue in the research community), and so suffer from a prohibitive sample complexity. Second, designing a high-fidelity simulator for any given problem is not realistic due to the complexity of modeling feedback effects. As such, I am interested in understanding common structure arising in operational problems and designing algorithms tailored to exploit that structure.

During my undergraduate studies at McGill I studied numerical methods for distributed delay differential equations. In these equations, the right-hand side of the differential equation involves an integral of the solution over its past. This integration can be over a finite interval, unbounded, or state dependent. MATLAB's built-in delay differential equation solvers handle only discrete delays. I worked on developing simple first- and second-order MATLAB solvers for a class of distributed delay differential equations using Runge-Kutta methods.
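For illustration, here is a minimal first-order (forward Euler) sketch of the idea, written in Python rather than MATLAB. The solver interface, the rectangle-rule quadrature for the delay integral, and the example equation are assumptions made for this sketch; they are not the solvers described above.

import numpy as np

def solve_distributed_dde(f, kernel, history, tau, t0, t1, h):
    # Explicit first-order step for a distributed DDE of the form
    #   x'(t) = f(t, x(t), I(t)),  I(t) = integral over [t - tau, t] of kernel(t - s) * x(s) ds.
    # `history` supplies x(t) for t <= t0; this interface is hypothetical.
    n_hist = int(np.ceil(tau / h))                    # past points needed to cover the delay window
    ts = [t0 - k * h for k in range(n_hist, 0, -1)] + [t0]
    xs = [history(t) for t in ts]                     # seed the solution with the history function

    t = t0
    while t < t1:
        # Rectangle-rule approximation of the distributed-delay integral over [t - tau, t].
        past_t = np.array(ts[-n_hist:])
        past_x = np.array(xs[-n_hist:])
        integral = h * np.sum(kernel(t - past_t) * past_x)

        x_new = xs[-1] + h * f(t, xs[-1], integral)   # explicit Euler step
        t += h
        ts.append(t)
        xs.append(x_new)

    return np.array(ts), np.array(xs)

# Example: x'(t) = -x(t) + 0.5 * integral_{t-1}^{t} exp(-(t - s)) x(s) ds, with x(t) = 1 for t <= 0.
ts, xs = solve_distributed_dde(
    f=lambda t, x, I: -x + 0.5 * I,
    kernel=lambda u: np.exp(-u),
    history=lambda t: 1.0,
    tau=1.0, t0=0.0, t1=5.0, h=0.01,
)

The key difference from an ordinary ODE stepper is that the solver must retain enough of the computed past to evaluate the delay integral at every step, rather than only the current state.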

I was a visiting graduate student at the Simons Institute for the Theory of Computing during its Theory of Reinforcement Learning program in Fall 2020. During the Summer of 2021 I was a research intern at Microsoft Research, working with Adith Swaminathan.

Office: 294 Rhodes Hall
Email: srs429 at cornell.edu
GitHub
Google Scholar

Publications and Working Papers

Hindsight Learning in MDPs with Exogenous Inputs Under Review
Sean R. Sinclair, Felipe Frujeri, Ching-An Cheng, and Adith Swaminathan.

Adaptive Discretization in Online Reinforcement Learning [arXiv] Submitted
Sean R. Sinclair, Siddhartha Banerjee, and Christina Lee Yu.

Sequential Fair Allocation: Achieving the Optimal Envy-Efficiency Tradeoff Curve [arXiv] [video] Submitted
Sean R. Sinclair, Siddhartha Banerjee, and Christina Lee Yu.

Sequential Fair Allocation of Limited Resources under Stochastic Demands [arXiv] Submitted
Sean R. Sinclair, Gauri Jain, Siddhartha Banerjee, and Christina Lee Yu.

Adaptive Discretization for Model-Based Reinforcement Learning [arXiv] [video] Accepted to NeurIPS 2020 (Poster)
Sean R. Sinclair, Tianyu Wang, Gauri Jain, Siddhartha Banerjee, and Christina Lee Yu.

Adaptive Discretization for Episodic Reinforcement Learning in Metric Spaces [arXiv] [ACM] [video] ACM POMACS
Sean R. Sinclair, Siddhartha Banerjee, and Christina Lee Yu.

Normal and pathological dynamics of platelets in humans [arXiv] [Springer] Journal of Mathematical Biology
GP Langlois, M Craig, AR Humphries, MC Mackey, et al.