Cyrus Cousins

The Life and Times of Cyrus Cousins

Finitely Wise, Infinitely Curious

About Me

Abridged Biography

I am Cyrus Cousins, a postdoctoral scholar at Duke University and Carnegie Mellon University, working with professors Walter Sinnott-Armstrong, Jana Schaich Borg, Vincent Conitzer, and Hoda Heidari, and formerly a visiting assistant professor at Brown University, where I also completed my doctoral studies under the tutelage of Eli Upfal. Before arriving at Brown, I earned my undergraduate degree in computer science, mathematics, and biology from Tufts University. Known to some as The Count of Monte Carlo, I study all manner of problems involving sampling, randomization, and learning in data science and beyond, with a particular interest in uniform convergence theory and the rigorous treatment of fair machine learning. My work establishes theoretical bounds on generalization error in exotic settings, and applies such bounds to tasks of real-world interest, most notably in data science, empirical game theory, and fair machine learning.

My dissertation, Bounds and Applications of Concentration of Measure in Fair Machine Learning and Data Science, received the Joukowsky Outstanding Dissertation Prize, and I have been awarded the Dean's Faculty Fellowship (a visiting assistant professorship) at Brown University and the CDS Postdoctoral Fellowship (a postdoctoral scholarship) at the University of Massachusetts Amherst.

My favorite theorem is the Dvoretzky-Kiefer-Wolfowitz Inequality, and my favorite algorithm is simulated annealing.
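
For the uninitiated: the Dvoretzky-Kiefer-Wolfowitz inequality (with Massart's tight constant) states that the empirical CDF of n i.i.d. samples uniformly approximates the true CDF F, in the sense that

    \Pr[ \sup_x | \hat{F}_n(x) - F(x) | > \varepsilon ] \leq 2 e^{-2 n \varepsilon^2} .

As for simulated annealing, the following is a minimal illustrative sketch in Python; the initial state, neighbor function, energy function, and geometric cooling schedule are all generic placeholders, not code from any particular project.

    import math
    import random

    def simulated_annealing(initial, neighbor, energy,
                            t0=1.0, cooling=0.995, steps=10_000):
        """Minimize energy() by randomized local search with a decaying temperature."""
        state, e = initial, energy(initial)
        best, best_e = state, e
        t = t0
        for _ in range(steps):
            candidate = neighbor(state)
            ce = energy(candidate)
            # Always accept improvements; accept worse moves with probability
            # exp(-(ce - e) / t), which shrinks as the temperature decays.
            if ce <= e or random.random() < math.exp(-(ce - e) / t):
                state, e = candidate, ce
                if e < best_e:
                    best, best_e = state, e
            t *= cooling  # geometric cooling
        return best, best_e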

Research Statements and Application Materials

  1. Research Statement

  2. Curriculum Vitae

For posterity and the benefit of future applicants, my older research statements (graduate school, etc.) are also available.

Research Overview

In my research, I strive for a delicate balance between theory and practice. My theory work primarily lies in sample complexity analysis for machine learning, as well as time complexity analysis and probabilistic guarantees for efficient sampling-based approximation algorithms and data science methods [1, 2, 3, 4]. In addition to statistical analysis, much of my work addresses subtle computational questions, such as how to optimally characterize and bound the sample complexity of estimation tasks from data (with applications to oblivious algorithms, which achieve near-optimal performance while requiring limited a priori knowledge), as well as the development of fair-PAC learning, with the accompanying computational and statistical reductions between classes of learnable models.

On the practical side, much of my early work was motivated by the observation that modern methods in statistical learning theory (e.g., Rademacher averages) often yield vacuous or unsatisfying guarantees, so I strove to understand why, and to show sharper bounds, with particular emphasis on constant factors and performance in the small-sample setting. From there, I have worked to apply the statistical methods developed for these approaches to myriad practical settings, including statistical data science tasks, the analysis of machine learning methods, and fairness-sensitive learning algorithms.
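
For context, the empirical Rademacher average of a function family \mathcal{F} over a sample x_1, ..., x_n is

    \hat{R}_n(\mathcal{F}) = \mathbb{E}_\sigma [ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sigma_i f(x_i) ] ,

where the \sigma_i are independent uniform \pm 1 signs. In one standard formulation, for f valued in [0, 1], with probability at least 1 - \delta, every f \in \mathcal{F} satisfies | \mathbb{E}[f] - \frac{1}{n} \sum_i f(x_i) | \leq 2 \hat{R}_n(\mathcal{F}) + O( \sqrt{ \log(1/\delta) / n } ); the constant factors hidden in such bounds are precisely what the small-sample work above seeks to sharpen.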

By blurring the line betwixt theory and practice, I have been able to adapt rigorous theoretical guarantees to novel settings. For example, my work on adversarial learning from weak supervision stemmed from a desire to apply statistical learning theory techniques in the absence of sufficient labeled data. Conversely, I have also motivated novel theoretical problems via practical considerations and interdisciplinary analysis; my work in fair machine learning led to the fair-PAC learning formalism, wherein power-means over per-group losses (rather than overall averages) are minimized. The motivation to optimize power-means derives purely from the economic theory of cardinal welfare, but the value of this learning concept only becomes apparent when one observes that many of the desirable (computational and statistical) properties of risk minimization translate to power-mean minimization.
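
To make the objective concrete: given per-group losses \ell_1, ..., \ell_g with group weights w_1, ..., w_g (the notation here is illustrative), the weighted p-power mean is

    M_p(\ell; w) = ( \sum_{i=1}^g w_i \ell_i^p )^{1/p} , \quad p \neq 0 ,

so p = 1 recovers the standard (utilitarian) weighted-average risk, while p \to \infty recovers the worst-case (egalitarian) group loss; varying p \geq 1 thus interpolates between average-case and worst-case fairness objectives.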



Lightning Talk

When Statistics Eclipse Fairness

This brief talk outlines my approach to fair machine learning and highlights some of my most important results.

Slide Deck



News


2019

  • I will be returning to the Labs group at Two Sigma Investments to work with Larry Rudolph.

2018

  • I have accepted a summer internship offer with Two Sigma Investments, and will be working with Matteo Riondato in the Labs group.

Photos: RLC 2024 Awards Ceremony; Random Radio Streaming; and A Thesis, Defended (celebrating a successful defence).

Major Projects



A Complete List of Publications

[1] Cyrus Cousins, Sheshera Mysore, Neha Nayak-Kennard, and Yair Zick. Who you gonna call? Optimizing expert assignment with predictive models. In 1st Annual Workshop on Incentives in Academia. Economics and Computation, 2024.
[2] Cyrus Cousins, Elita Lobo, Justin Payan, and Yair Zick. Fair and welfare-efficient resource allocation under uncertainty. In 1st Annual Workshop on Incentives in Academia. Economics and Computation, 2024.
[3] Paula Navarrete, Cyrus Cousins, George Bissias, and Yair Zick. Deploying fair and efficient course allocation mechanisms. In 1st Annual Workshop on Incentives in Academia. Economics and Computation, 2024.
[4] Cyrus Cousins. Algorithms and analysis for optimizing robust objectives in fair machine learning. arXiv preprint arXiv:2404.06703, 2024.
[5] Cyrus Cousins, Elita Lobo, Kavosh Asadi, and Michael L. Littman. On welfare-centric fair reinforcement learning. Reinforcement Learning Journal, 1(1), 2024.
[6] Cyrus Cousins, Indra Elizabeth Kumar, and Suresh Venkatasubramanian. To pool or not to pool: Analyzing the regularizing effects of group-fair training on shared models. In Artificial Intelligence and Statistics (AISTATS), 2024.
[7] Elita Lobo, Cyrus Cousins, Marek Petrik, and Yair Zick. Percentile criterion optimization in offline reinforcement learning. In Advances in Neural Information Processing Systems, 2023.
[8] Cyrus Cousins. Algorithms and analysis for optimizing robust objectives in fair machine learning. In Columbia Workshop on Fairness in Operations and AI. Columbia University, 2023.
[9] Cyrus Cousins, Elita Lobo, Justin Payan, and Yair Zick. Fair resource allocation under uncertainty. In Columbia Workshop on Fairness in Operations and AI. Columbia University, 2023.
[10] Paula Navarrete, Cyrus Cousins, Yair Zick, and Vignesh Viswanathan. Efficient yankee swap for fairly allocating courses to students. In Columbia Workshop on Fairness in Operations and AI. Columbia University, 2023.
[11] Cyrus Cousins, Vignesh Viswanathan, and Yair Zick. The good, the bad and the submodular: Fairly allocating mixed manna under order-neutral submodular preferences. In International Conference on Web and Internet Economics. Springer, 2023.
[12] Cyrus Cousins, Vignesh Viswanathan, and Yair Zick. Dividing good and better items among agents with submodular valuations. In International Conference on Web and Internet Economics. Springer, 2023.
[13] Cyrus Cousins, Justin Payan, and Yair Zick. Into the unknown: Assigning reviewers to papers with uncertain affinities. In Proceedings of the 16th International Symposium on Algorithmic Game Theory, 2023.
[14] Cyrus Cousins, Chloe Wohlgemuth, and Matteo Riondato. BAVarian: Betweenness centrality approximation with variance-aware Rademacher averages. ACM Transactions on Knowledge Discovery from Data (TKDD), 17(6):1–47, 2023.
[15] Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueira, and Amy Greenwald. Learning properties in simulation-based games. In Proceedings of the 22nd International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2023.
[16] Cyrus Cousins. Revisiting fair-PAC learning and the axioms of cardinal welfare. In Artificial Intelligence and Statistics (AISTATS), 2023.
[17] Leonardo Pellegrina, Cyrus Cousins, Fabio Vandin, and Matteo Riondato. MCRapper: Monte-Carlo Rademacher averages for POSET families and approximate pattern mining. ACM Transactions on Knowledge Discovery from Data (TKDD), 16(5), 2022.
[18] Bhaskar Mishra, Cyrus Cousins, and Amy Greenwald. Regret pruning for learning equilibria in simulation-based games. arXiv:2211.16670, 2022.
[19] Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueira, and Amy Greenwald. Computational and data requirements for learning generic properties of simulation-based games. arXiv:2208.06400, 2022.
[20] Cyrus Cousins, Kavosh Asadi, and Michael L. Littman. Fair E3: Efficient welfare-centric fair reinforcement learning. In 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2022.
[21] Evan Dong and Cyrus Cousins. Decentering imputation: Fair learning at the margins of demographics. In Queer in AI Workshop @ ICML, 2022.
[22] Cyrus Cousins. Uncertainty and the social planner’s problem: Why sample complexity matters. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022.
[23] Cyrus Cousins. Bounds and Applications of Concentration of Measure in Fair Machine Learning and Data Science. PhD thesis, Brown University, 2021.
[24] Cyrus Cousins, Chloe Wohlgemuth, and Matteo Riondato. BAVarian: Betweenness centrality approximation with variance-aware Rademacher averages. In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2021.
[25] Enrique Areyan Viqueira, Cyrus Cousins, and Amy Greenwald. Learning competitive equilibria in noisy combinatorial markets. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2021.
[26] Cyrus Cousins. An axiomatic theory of provably-fair welfare-centric machine learning. arXiv:2104.14504, 2021.
[27] Cyrus Cousins. An axiomatic theory of provably-fair welfare-centric machine learning. In Advances in Neural Information Processing Systems, 2021.
[28] Cyrus Cousins, Shahrzad Haddadan, Yue Zhuang, and Eli Upfal. Fast doubly-adaptive MCMC to estimate the Gibbs partition function with weak mixing time bounds. In Advances in Neural Information Processing Systems, 2021.
[29] Alessio Mazzetto, Cyrus Cousins, Dylan Sam, Stephen H. Bach, and Eli Upfal. Adversarial multiclass learning under weak supervision with performance guarantees. In International Conference on Machine Learning, pages 7534–7543. PMLR, 2021.
[30] Cyrus Cousins. Novel concentration of measure bounds with applications to fairness in machine learning. Brown University, 2020.
[31] Cyrus Cousins, Shahrzad Haddadan, and Eli Upfal. Making mean-estimation more efficient using an MCMC trace variance approach: DynaMITE. arXiv:2011.11129, 2020.
[32] Leonardo Pellegrina, Cyrus Cousins, Fabio Vandin, and Matteo Riondato. MCRapper: Monte-Carlo Rademacher averages for POSET families and approximate pattern mining. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2165–2174, 2020.
[33] Cyrus Cousins and Matteo Riondato. Sharp uniform convergence bounds through empirical centralization. In Advances in Neural Information Processing Systems, 2020.
[34] Enrique Areyan Viqueira, Cyrus Cousins, and Amy Greenwald. Improved algorithms for learning equilibria in simulation-based games. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pages 79–87, 2020.
[35] Enrique Areyan Viqueira, Cyrus Cousins, Yasser Mohammad, and Amy Greenwald. Empirical mechanism design: Designing mechanisms from data. In Uncertainty in Artificial Intelligence, pages 1094–1104. PMLR, 2020.
[36] Enrique Areyan Viqueira, Cyrus Cousins, Eli Upfal, and Amy Greenwald. Learning equilibria of simulation-based games. arXiv:1905.13379, 2019.
[37] Enrique Areyan Viqueira, Cyrus Cousins, and Amy Greenwald. Learning simulation-based games from data. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2019.
[38] Cyrus Cousins and Matteo Riondato. CaDET: Interpretable parametric conditional density estimation with decision trees and forests. Machine Learning, 108(8-9):1613–1634, 2019.
[39] Carsten Binnig, Benedetto Buratti, Yeounoh Chung, Cyrus Cousins, Tim Kraska, Zeyuan Shang, Eli Upfal, Robert Zeleznik, and Emanuel Zgraggen. Towards interactive curation & automatic tuning of ML pipelines. In Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning, 2018.
[40] Clayton Sanford, Cyrus Cousins, and Eli Upfal. Uniform convergence bounds for codec selection. arXiv:1812.07568, 2018.
[41] Cyrus Cousins and Eli Upfal. The k-nearest representatives classifier: A distance-based classifier with strong generalization bounds. In 4th International Conference on Data Science and Advanced Analytics, pages 1–10. IEEE, 2017.
[42] Cyrus Cousins, Christopher M. Pietras, and Donna K. Slonim. Scalable FRaC variants: Anomaly detection for precision medicine. In International Parallel and Distributed Processing Symposium Workshops, pages 253–262. IEEE, 2017.
[43] Carsten Binnig, Fuat Basik, Benedetto Buratti, Ugur Cetintemel, Yeounoh Chung, Andrew Crotty, Cyrus Cousins, Dylan Ebert, Philipp Eichmann, Alex Galakatos, Benjamin Hättasch, Amir Ilkhechi, Tim Kraska, Zeyuan Shang, Isabella Tromba, Arif Usta, Prasetya Utama, Eli Upfal, Linnan Wang, Nathaniel Weir, Robert Zeleznik, and Emanuel Zgraggen. Towards interactive data exploration. In Real-Time Business Intelligence and Analytics, pages 177–190. Springer, 2017.


Teaching

Visiting Assistant Professor

Graduate Teaching Assistant


