Matthew J. Holland

Introduction to our JST PRESTO project (2021/10-2025/03)

Background

One of the major funding bodies for scientific research in Japan is JST, the Japan Science and Technology Agency (en/ja). JST provides a wide variety of grants, both to individual researchers and to research teams, but its flagship grant for individuals is called PRESTO (Sakigake in Japanese; en/ja). My application to PRESTO was accepted within the research area of "Trustworthy AI" (en/ja); the project runs from October 2021 to March 2025, with a total budget of 40 million yen.

On this page, I will give an overview of the goals and key ideas underlying my initial proposal, as well as a summary of the research papers, software, and presentations closely related to this project.

Overview of key concepts

The title of my project is "Machine learning with guarantees under diverse risk measures," and the most important underlying idea is that current machine learning methodology, rooted in average-case performance, needs a principled re-evaluation.

Put simply, "success" in most machine learning tasks is formally expressed as minimizing the expected value (i.e., the average) of a random loss computed using some kind of loss function. Here the randomness is typically assumed to be over the random draw of a new data point at test time (i.e., after "training" is complete). This approach is perfectly natural, but in designing and evaluating learning systems (e.g., human workflows supported by machine learning software, or automated systems running such software), the emphasis on the average leaves out other important properties of the random loss distribution (e.g., dispersion, heaviness of tails, symmetry, etc.). In the title of my project, I use the term "diverse risk measures" to emphasize that I want to develop new algorithms, theory, and methodologies for machine learning tasks characterized by the optimization of a wider variety of properties of the test loss distribution, including but not limited to the expected loss.
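To make the distinction concrete, here is a minimal sketch (the function name and level 0.95 are my own choices for illustration) of how the same sample of losses can be summarized by three different "risk measures": the mean (the usual objective), the conditional value-at-risk (mean of the worst tail), and the standard deviation (a simple dispersion measure). Two models with similar average loss can look very different under the latter two.

```python
import random
import statistics

def empirical_risks(losses, alpha=0.95):
    """Summarize a sample of losses with several risk measures.

    Returns the empirical mean (the usual learning objective),
    the conditional value-at-risk at level alpha (mean of the
    worst (1 - alpha) fraction of losses), and the standard
    deviation (one simple notion of dispersion).
    """
    n = len(losses)
    mean = statistics.fmean(losses)
    tail = sorted(losses)[int(alpha * n):]  # worst (1 - alpha) fraction
    cvar = statistics.fmean(tail)
    spread = statistics.pstdev(losses)
    return mean, cvar, spread

random.seed(0)
# A heavy-tailed loss sample: the mean alone hides the size of the tail.
losses = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]
mean, cvar, spread = empirical_risks(losses)
print(f"mean={mean:.2f}  CVaR(0.95)={cvar:.2f}  stdev={spread:.2f}")
```

For a heavy-tailed distribution like this one, the CVaR is several times larger than the mean, which is exactly the kind of gap that an average-only evaluation cannot see.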

Some background reading

In the following paper, I discuss some of the key ideas underlying this project with a bit more formal notation, and also complement this with a brief historical review of statistical learning and the role played by the expected loss.

Designing off-sample performance metrics
Matthew J. Holland
Preprint.
[pdf, abstract]

Several works by my colleagues and me can be considered precursors to the current PRESTO project.

Learning with risks based on M-location
Matthew J. Holland
Journal: Machine Learning, to appear.
Oral: ECML-PKDD 2022, Grenoble, France (to appear).

[pdf, abstract, code]

Spectral risk-based learning using unbounded losses
Matthew J. Holland and El Mehdi Haress
Presented at AISTATS 2022.
Proceedings of Machine Learning Research 151:1871-1886, 2022.

[pdf, proceedings, code]

Making learning more transparent using conformalized performance prediction
Matthew J. Holland
Presented at ICML 2021, Workshop on Distribution-Free Uncertainty Quantification.
[pdf, abstract]

Learning with risk-averse feedback under potentially heavy tails
Matthew J. Holland and El Mehdi Haress
Presented at AISTATS 2021.
Proceedings of Machine Learning Research 130:892-900, 2021.

[pdf, proceedings, code]

With these works as technical and conceptual context, the following section summarizes key points regarding the progress made since starting this project.

New work since starting this project

Here I will introduce the new papers and presentations related to this project in a logical and approximately chronological fashion. The first substantive new results build upon the "M-location" notion considered in our previous work (the MLJ/ECML-PKDD 2022 paper cited above), making a significant conceptual and technical expansion by placing the notion of "dispersion" at the forefront when designing off-sample generalization metrics (i.e., risk functions). The most up-to-date version of this work (both paper and software) is given below.

Risk regularization through bidirectional dispersion
Matthew J. Holland
Preprint.
[pdf, abstract, code]
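To convey the general shape of the idea, here is a deliberately simplified sketch of a "mean plus dispersion" objective. This is not the paper's exact criterion (the function name, the absolute-deviation penalty, and the weight eta are all illustrative assumptions); it only shows how penalizing deviations on both sides of the mean distinguishes loss distributions that an average-only objective treats as identical.

```python
import statistics

def dispersion_regularized_risk(losses, eta=1.0):
    """Illustrative risk: mean loss plus a dispersion penalty.

    NOT the exact criterion from the paper cited above; a toy
    "mean + dispersion" objective in which deviations both above
    and below the mean are penalized, weighted by eta.
    """
    center = statistics.fmean(losses)
    dispersion = statistics.fmean(abs(x - center) for x in losses)
    return center + eta * dispersion

# Two loss samples with the same mean (1.0) but different spread:
print(dispersion_regularized_risk([1.0, 1.0, 1.0, 1.0]))  # -> 1.0
print(dispersion_regularized_risk([0.0, 2.0, 0.0, 2.0]))  # -> 2.0
```

Under the plain expected loss these two samples are indistinguishable; the dispersion term separates them, which is the basic motivation for risk functions of this form.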

In the first few months of the 2022 academic year (starting in April), with the aim of sharing the key ideas and initial results of this research with a diverse audience, I gave several talks at universities and conferences here in Japan. In particular, the following two oral presentations were the first in-person presentations I have made since the start of the COVID outbreak.

In addition to rigorous theoretical and experimental analysis aimed at experts in machine learning, I have also been making an effort to author an "explainer" article that breaks down the key concepts underlying this research project into a form congenial to a more diverse audience. This effort is inspired in part by the ICLR Blog Track introduced in 2022, and the article is currently stored in the following public GitHub repository.

offgen: A visual "explainer" for off-sample generalization metrics
Matthew J. Holland
Public GitHub Repository.
[code]

As additional progress is made, this article will be updated and expanded.