Search Results - "Holland, Matthew J."
1. Minimum Proper Loss Estimators for Parametric Models
Published in IEEE Transactions on Signal Processing (01-02-2016): “…In this paper, we propose a methodology for systematically deriving estimators minimizing proper loss functions defined on parametric statistical models, by…”
Journal Article
2. Learning with risks based on M-location
Published in Machine Learning (01-12-2022): “…In this work, we study a new class of risks defined in terms of the location and deviation of the loss distribution, generalizing far beyond classical…”
Journal Article
3. Efficient learning with robust gradient descent
Published in Machine Learning (01-09-2019): “…Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many…”
Journal Article
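The snippet above concerns robust gradient descent for noisy or heavy-tailed data. A common building block in this line of work is to replace the empirical mean of per-sample gradients with a robust location estimate; the sketch below illustrates coordinate-wise median-of-means aggregation as one generic example, assumed here only as a stand-in for whichever robust estimator the paper actually uses (the function name and block count k are placeholders).

import numpy as np

def median_of_means(grads, k=10, seed=0):
    # Coordinate-wise median-of-means over per-sample gradient estimates.
    # grads has shape (n, d); the n samples are split into k disjoint blocks,
    # and the blockwise means are combined by a coordinate-wise median,
    # which is far less sensitive to heavy-tailed noise than the plain mean.
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(grads.shape[0]), k)
    block_means = np.stack([grads[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

# Toy check: heavy-tailed gradient noise centred at zero in 3 dimensions.
g = np.random.default_rng(1).standard_t(df=1.5, size=(500, 3))
print("plain mean:     ", g.mean(axis=0))
print("median-of-means:", median_of_means(g, k=10))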
4. Location robust estimation of predictive Weibull parameters in short-term wind speed forecasting
Published in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-04-2015): “…From turbine control systems at wind farms to extreme weather early-warning systems, short-term probabilistic wind speed forecasts are seeing widespread use in…”
Conference Proceeding
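The entry above deals with estimating predictive Weibull parameters for short-term wind speed. As generic background rather than the paper's location-robust procedure, the sketch below fits a two-parameter Weibull distribution to synthetic wind speeds by maximum likelihood with SciPy, pinning the location parameter at zero as is conventional for wind speed data.

import numpy as np
from scipy import stats

# Synthetic "wind speeds" (m/s): Weibull with shape 2.0 and scale 8.0.
rng = np.random.default_rng(0)
speeds = 8.0 * rng.weibull(2.0, size=1000)

# Maximum-likelihood fit of the two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(speeds, floc=0)
print(f"fitted shape = {shape:.2f}, fitted scale = {scale:.2f} m/s")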
5. Drought reduces floral resources for pollinators
Published in Global Change Biology (01-07-2018): “…Climate change is predicted to result in increased occurrence and intensity of drought in many regions worldwide. By increasing plant physiological stress,…”
Journal Article
6. Robust regression using biased objectives
Published in Machine Learning (01-10-2017): “…For the regression task in a non-parametric setting, designing the objective function to be minimized by the learner is a critical task. In this paper we…”
Journal Article
7. A Survey of Learning Criteria Going Beyond the Usual Risk
Published in The Journal of Artificial Intelligence Research (01-01-2023): “…Virtually all machine learning tasks are characterized using some form of loss function, and “good performance” is typically stated in terms of a sufficiently…”
Journal Article
8. Criterion Collapse and Loss Distribution Control
Published 15-02-2024: “…In this work, we consider the notion of "criterion collapse," in which optimization of one metric implies optimality in another, with a particular focus on…”
Journal Article
9. Forecasting in wind energy applications with site-adaptive Weibull estimation
Published in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-05-2014): “…From optimal supply decisions to anticipatory control systems, wind-based energy applications rely heavily upon accurate, local, short-term forecasts of future…”
Conference Proceeding
10. Robust variance-regularized risk minimization with concomitant scaling
Published 27-01-2023: “…Under losses which are potentially heavy-tailed, we consider the task of minimizing sums of the loss mean and standard deviation, without trying to accurately…”
Journal Article
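The snippet above describes minimizing the sum of the loss mean and standard deviation. As a minimal illustration of that kind of criterion only (the paper's robust, concomitant-scaling formulation is more involved), the sketch below evaluates a mean-plus-standard-deviation objective on a vector of per-example losses; the weight lam is a hypothetical trade-off parameter.

import numpy as np

def mean_plus_std(losses, lam=1.0):
    # Empirical mean of the per-example losses plus lam times their
    # standard deviation; larger lam penalizes dispersion more heavily.
    losses = np.asarray(losses, dtype=float)
    return losses.mean() + lam * losses.std(ddof=1)

losses = np.array([0.2, 0.4, 0.1, 3.5, 0.3])  # one large outlying loss
print(mean_plus_std(losses, lam=1.0))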
11. Making Robust Generalizers Less Rigid with Soft Ascent-Descent
Published 07-08-2024: “…While the traditional formulation of machine learning tasks is in terms of performance on average, in practice we are often interested in how well a trained…”
Journal Article
12. A Survey of Learning Criteria Going Beyond the Usual Risk
Published 30-11-2023: “…Journal of Artificial Intelligence Research, 78:781-821, 2023. Virtually all machine learning tasks are characterized using some form of loss function, and…”
Journal Article
13. Soft ascent-descent as a stable and flexible alternative to flooding
Published 15-10-2023: “…As a heuristic for improving test accuracy in classification, the "flooding" method proposed by Ishida et al. (2020) sets a threshold for the average surrogate…”
Journal Article
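The snippet above refers to the flooding heuristic of Ishida et al. (2020), which keeps the average training loss from settling below a chosen flood level. The sketch below shows the standard flooded objective |L - b| + b applied to a batch loss; it illustrates flooding itself, not the soft ascent-descent alternative proposed in the paper.

def flooded_loss(batch_loss, flood_level):
    # Flooding objective of Ishida et al. (2020): |L - b| + b.
    # While the batch loss L sits above the flood level b this equals L
    # (ordinary descent); once L drops below b the gradient sign flips,
    # so minimizing the flooded loss pushes L back up toward b.
    return abs(batch_loss - flood_level) + flood_level

# With flood level b = 0.1: a loss of 0.25 is left unchanged,
# while a loss of 0.03 is mapped to 0.17, triggering "ascent".
print(flooded_loss(0.25, 0.1), flooded_loss(0.03, 0.1))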
14. Flexible risk design using bi-directional dispersion
Published 27-03-2022: “…Many novel notions of "risk" (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to…”
Journal Article
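The snippet above names CVaR and tilted risk as examples of alternative risk notions. As a quick point of reference only (not the bi-directional dispersion risk proposed in the paper), the sketch below computes empirical versions of the upper-tail CVaR and the tilted (entropic) risk from a vector of losses; quantile conventions for CVaR vary, so the level handling here is just one common choice.

import numpy as np
from scipy.special import logsumexp

def empirical_cvar(losses, alpha=0.95):
    # Average of the losses at or above their empirical alpha-quantile,
    # i.e. the mean of roughly the worst (1 - alpha) fraction of outcomes.
    losses = np.asarray(losses, dtype=float)
    return losses[losses >= np.quantile(losses, alpha)].mean()

def tilted_risk(losses, t=1.0):
    # Tilted (entropic) risk: (1/t) * log E[exp(t * loss)], computed with
    # logsumexp for numerical stability; t > 0 upweights large losses.
    losses = np.asarray(losses, dtype=float)
    return (logsumexp(t * losses) - np.log(losses.size)) / t

losses = np.random.default_rng(0).lognormal(0.0, 1.0, size=10_000)  # skewed
print("mean:        ", losses.mean())
print("CVaR(0.95):  ", empirical_cvar(losses, 0.95))
print("tilted (t=1):", tilted_risk(losses, 1.0))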
15. Robust learning with anytime-guaranteed feedback
Published 24-05-2021: “…Proceedings of the AAAI Conference on Artificial Intelligence, 36(6):6918-6925, 2022. Under data distributions which may be heavy-tailed, many stochastic…”
Journal Article
16. Learning with risks based on M-location
Published 26-04-2021: “…Machine Learning, 111:4679-4718, 2022. In this work, we study a new class of risks defined in terms of the location and deviation of the loss distribution,…”
Journal Article
17. Better scalability under potentially heavy-tailed feedback
Published 14-12-2020: “…We study scalable alternatives to robust gradient descent (RGD) techniques that can be used when the losses and/or gradients can be heavy-tailed, though this…”
Journal Article
18. Making learning more transparent using conformalized performance prediction
Published 08-07-2020: “…In this work, we study some novel applications of conformal inference techniques to the problem of providing machine learning procedures with more transparent,…”
Journal Article
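The snippet above applies conformal inference to make learning procedures more transparent. As standard background only (a plain split-conformal sketch, not the paper's particular construction), the code below turns absolute residuals from a held-out calibration set into a prediction interval via the usual finite-sample-corrected quantile.

import numpy as np

def split_conformal_interval(cal_residuals, new_prediction, alpha=0.1):
    # Split conformal prediction interval at miscoverage level alpha.
    # cal_residuals are |y - prediction| on a held-out calibration set; the
    # corrected level ceil((n + 1) * (1 - alpha)) / n yields the standard
    # finite-sample marginal coverage guarantee of at least 1 - alpha.
    scores = np.asarray(cal_residuals, dtype=float)
    n = scores.size
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    return new_prediction - q, new_prediction + q

cal = np.abs(np.random.default_rng(0).normal(size=200))  # calibration residuals
print(split_conformal_interval(cal, new_prediction=2.0, alpha=0.1))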
19. Better scalability under potentially heavy-tailed gradients
Published 01-06-2020: “…We study a scalable alternative to robust gradient descent (RGD) techniques that can be used when the gradients can be heavy-tailed, though this will be…”
Journal Article
20. Improved scalability under heavy tails, without strong convexity
Published 01-06-2020: “…Real-world data is laden with outlying values. The challenge for machine learning is that the learner typically has no prior knowledge of whether the feedback…”
Journal Article