Quan Zhou


Final-Year PhD Student,
Dyson School of Design Engineering,
Imperial College London, UK
E-mail: q.zhou22@imperial.ac.uk

I expect to graduate in summer 2024.
I am looking for a postdoc position in optimization.

About me

I am a PhD student at Imperial College London, under the supervision of Dr. Jakub Mareček and Prof. Robert Shorten.
I obtained an MSc in Operational Research from the University of Edinburgh and a BE in Insurance from Hunan University.

My research focuses on machine learning fairness, approached through polynomial optimization and optimal transport.

Research

Research interests

  • Optimization

    • Semidefinite Programming
    • Moment Problems
    • Non-Commutative Polynomial Optimization
    • Optimal Transport
  • Machine Learning Fairness

    • Fairness without Sensitive Membership
    • Fairness through Randomness

Recent publications

  1. Learning of linear dynamical systems as a non-commutative polynomial optimization problem. IEEE Transactions on Automatic Control, 2023.[pdf][code]
    Quan Zhou and Jakub Mareček.

  2. Fairness in forecasting of observations of linear dynamical systems. Journal of Artificial Intelligence Research, 2023.[pdf][code]
    Quan Zhou, Jakub Mareček, and Robert Shorten.

  3. Fairness in forecasting and learning linear dynamical systems. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.[pdf][code]
    Quan Zhou, Jakub Mareček, and Robert Shorten.

  4. Subgroup fairness in two-sided markets. PLOS ONE, 2023.[pdf][code]
    Quan Zhou, Jakub Mareček, and Robert Shorten.

Under review

  1. Group-blind optimal transport to group parity and its constrained variants.[arXiv]
    Quan Zhou, Robert Shorten, and Jakub Mareček.

  2. Joint problems in learning multiple dynamical systems.[arXiv]
    Mengjia Niu, Xiaoyu He, Petr Rysavy, Quan Zhou, and Jakub Mareček.

A full list of preprints is available on arXiv.

Projects

My PhD started in February 2020, just as the COVID-19 pandemic began. Over these years, fairness has evolved into a major subfield of machine learning. Looking ahead, fairness will become even more important in the coming years, given the political instability and economic crisis triggered by the pandemic. Recovery from the pandemic is anticipated to be as uneven as its initial economic impact: disadvantaged populations and regions are likely to face a longer path to recovering from pandemic-induced losses of livelihoods, thereby exacerbating pre-existing inequalities and poverty.

I have focused on the following projects during my PhD.

  • Operator-Valued Polynomial Optimization for System Identification and Fair Forecasting, 02.2020-02.2023

  • Due to historical and systemic discrimination, real-world data are often imbalanced, with certain groups outnumbering others. When a machine learning model is trained (by iteratively minimizing a loss function) without any fairness considerations, its outputs tend to be closer to the ground truth, and thus more accurate, for the majority group, simply because the majority group contributes a larger share of the loss. A minimal numerical illustration of this effect appears after the project list.

    This project focuses on applications of non-commutative polynomial optimization (NCPOP) (Pironio et al., 2010) and its sparsity-exploiting variant (Wang et al., 2021) to the problem of learning from imbalanced data. Because the variables in NCPOP are operator-valued, we can recover the system matrices of linear dynamical systems (or state-space models) without assumptions on the dimension of the hidden dynamics, when applied to system identification. When applied to machine learning fairness, NCPOP also accommodates polynomial fairness regularizers or shape constraints with global optimality guarantees. This project has produced one paper in IEEE Transactions on Automatic Control, one in the Journal of Artificial Intelligence Research, and one at the AAAI Conference on Artificial Intelligence.

  • Group-Blind Projections to Group Parity in Two-Group Scenarios, 02.2023-10.2023

  • Consider two groups: a privileged one and an unprivileged one, where unprivileged students with scores similar to those of privileged students may go on to outperform them, having faced greater challenges to earn those scores. Relying solely on scores for admission decisions, without considering students' backgrounds, could then produce results strongly biased against the unprivileged group, even though the admission algorithm never uses the sensitive attribute.

    This project aims to achieve equalized admission rates for the two groups without knowledge of students' backgrounds (i.e., whether or not they belong to the privileged group). The datasets are mapped without knowing individual data points' group memberships; the mapping is formulated as a convex program and solved via generalized optimal transport, a toy illustration of which follows the project list. A paper from this project is currently under review at the Journal of Machine Learning Research.
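
To make the imbalance point in the first project concrete, here is a minimal, hypothetical sketch (plain NumPy, not the NCPOP machinery from the papers): a single least-squares line is fitted to pooled data from a 900-point majority group and a 100-point minority group with different underlying trends, and the per-group errors show that the fit tracks the majority. All numbers are invented for illustration.

```python
# Toy illustration (not from the papers): training one model on imbalanced
# data without fairness terms yields lower error on the majority group.
import numpy as np

rng = np.random.default_rng(0)

# Majority group: 900 points with y ~ 2x; minority group: 100 points with y ~ -x.
x_maj = rng.uniform(-1.0, 1.0, 900)
y_maj = 2.0 * x_maj + 0.1 * rng.standard_normal(900)
x_min = rng.uniform(-1.0, 1.0, 100)
y_min = -1.0 * x_min + 0.1 * rng.standard_normal(100)

# Fit a single least-squares line to the pooled data (no group information used).
x_all = np.concatenate([x_maj, x_min])
y_all = np.concatenate([y_maj, y_min])
slope, intercept = np.polyfit(x_all, y_all, deg=1)

def group_mse(x, y):
    return np.mean((y - (slope * x + intercept)) ** 2)

# The pooled fit is dominated by the majority group's share of the loss,
# so its error is far smaller than the minority group's.
print(f"majority-group MSE: {group_mse(x_maj, y_maj):.3f}")
print(f"minority-group MSE: {group_mse(x_min, y_min):.3f}")
```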
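
For the second project, the sketch below is only a simplified, hypothetical illustration of the general idea: repairing a pooled (group-blind) score histogram by optimal transport, written as a small convex program in CVXPY, with a constraint on the admission rate. It is not the group-blind formulation from the paper, and the score grid, histogram, threshold, and target rate are made-up examples.

```python
# Hypothetical sketch: move probability mass of a pooled score histogram at
# minimal transport cost so that the mass above an admission threshold hits a
# target rate. Not the formulation used in the paper.
import numpy as np
import cvxpy as cp

# Discretised score grid and an example pooled score histogram (made up).
scores = np.linspace(0.0, 1.0, 11)
mu = np.array([0.02, 0.05, 0.10, 0.15, 0.18, 0.16, 0.12, 0.10, 0.07, 0.03, 0.02])
mu = mu / mu.sum()

threshold = 0.7       # admit if the repaired score is at least this (assumed)
target_rate = 0.30    # desired admission rate (assumed)
admit_mask = (scores >= threshold).astype(float)  # 1 for bins at/above the threshold

# Quadratic ground cost between score bins.
C = (scores[:, None] - scores[None, :]) ** 2

# Transport plan T: its row sums must equal the observed histogram mu,
# and its column sums give the repaired histogram nu.
T = cp.Variable((len(scores), len(scores)), nonneg=True)
nu = cp.sum(T, axis=0)

constraints = [
    cp.sum(T, axis=1) == mu,          # source marginal is fixed
    admit_mask @ nu == target_rate,   # admission-rate (parity-style) constraint
]
problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(C, T))), constraints)
problem.solve()

print("repaired histogram:", np.round(nu.value, 3))
print("admission rate:", float(admit_mask @ nu.value))
```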

A brief CV.