Matthew J. Holland

Research

Taking a broad view, my research interests center on the theory and practice of machine learning in situations where we have limited knowledge of the quality of the "feedback" available. In seeking strong yet transparent guarantees, one inevitably runs into a deep and fundamental entanglement between the statistical and computational aspects of the learning problem. Put simply, I am interested in methodologies that enable us to reliably design efficient algorithms that work for reasons we understand.

Selected papers and related materials

Here is a list of representative writings, with links to related software where applicable. I maintain the software I have authored on GitHub (username: feedbackward). More information is available on my researchmap page, in both English and Japanese.


2020

Making learning more transparent using conformalized performance prediction.
Matthew J. Holland.
Preprint.
[arXiv]

Learning with CVaR-based feedback under potentially heavy tails.
Matthew J. Holland and El Mehdi Haress.
Preprint.
[arXiv]

Improved scalability under heavy tails, without strong convexity.
Matthew J. Holland.
Preprint.
[arXiv]

Better scalability under potentially heavy-tailed gradients.
Matthew J. Holland.
Preprint.
[arXiv]


2018-2019

PAC-Bayes under potentially heavy tails.
Matthew J. Holland.
Presented at: NeurIPS 2019. Proceedings: Advances in Neural Information Processing Systems 32, 2020.
[arXiv, Proceedings, bib]

Distribution-robust mean estimation via smoothed random perturbations.
Matthew J. Holland.
Preprint.
[arXiv, code, bib]

Better generalization with less data using robust gradient descent.
Matthew J. Holland and Kazushi Ikeda.
Presented at: ICML 2019. Proceedings: PMLR 97:2761-2770, 2019.
[PMLR, bib]

Robust gradient descent via back-propagation: A Chainer-based tutorial.
Matthew J. Holland.
[pdf, code]

Efficient learning with robust gradient descent.
Matthew J. Holland and Kazushi Ikeda.
Machine Learning, 108(8):1523-1560, 2019.
[arXiv, doi, bib]

Robust descent using smoothed multiplicative noise.
Matthew J. Holland.
Presented at: AISTATS 2019. Proceedings: PMLR 89:703-711, 2019.
[arXiv, PMLR, bib]

Classification using margin pursuit.
Matthew J. Holland.
Presented at: AISTATS 2019. Proceedings: PMLR 89:712-720, 2019.
[arXiv, code, demo, PMLR, bib]


2014-2017

Robust regression using biased objectives.
Matthew J. Holland and Kazushi Ikeda.
Machine Learning, 106(9):1643-1679, 2017.
Presented at: ECML-PKDD 2017 (oral), Skopje, Macedonia.
[pdf, code, doi, bib]

Minimum proper loss estimators for parametric models.
Matthew J. Holland and Kazushi Ikeda.
IEEE Transactions on Signal Processing, 64(3):704-713, 2016.
[pdf, data, doi, bib]

Location robust estimation of predictive Weibull parameters in short-term wind speed forecasting.
Matthew J. Holland and Kazushi Ikeda.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, Brisbane, Australia.
[doi, bib]

Forecasting in wind energy applications with site-adaptive Weibull estimation.
Matthew J. Holland and Kazushi Ikeda.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, Florence, Italy.
[doi, bib]