Search Results - "Holland, Matthew J."

  1. Minimum Proper Loss Estimators for Parametric Models by Holland, Matthew J., Ikeda, Kazushi

    Published in IEEE transactions on signal processing (01-02-2016)
    “…In this paper, we propose a methodology for systematically deriving estimators minimizing proper loss functions defined on parametric statistical models, by…”
    Journal Article
  2. Learning with risks based on M-location by Holland, Matthew J.

    Published in Machine learning (01-12-2022)
    “…In this work, we study a new class of risks defined in terms of the location and deviation of the loss distribution, generalizing far beyond classical…”
    Journal Article
  3. Efficient learning with robust gradient descent by Holland, Matthew J., Ikeda, Kazushi

    Published in Machine learning (01-09-2019)
    “…Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many…”
    Journal Article
  4. Location robust estimation of predictive Weibull parameters in short-term wind speed forecasting by Holland, Matthew J., Ikeda, Kazushi

    “…From turbine control systems at wind farms to extreme weather early-warning systems, short-term probabilistic wind speed forecasts are seeing widespread use in…”
    Conference Proceeding
  5. Drought reduces floral resources for pollinators by Phillips, Benjamin B., Shaw, Rosalind F., Holland, Matthew J., Fry, Ellen L., Bardgett, Richard D., Bullock, James M., Osborne, Juliet L.

    Published in Global change biology (01-07-2018)
    “…Climate change is predicted to result in increased occurrence and intensity of drought in many regions worldwide. By increasing plant physiological stress,…”
    Journal Article
  6. Robust regression using biased objectives by Holland, Matthew J., Ikeda, Kazushi

    Published in Machine learning (01-10-2017)
    “…For the regression task in a non-parametric setting, designing the objective function to be minimized by the learner is a critical task. In this paper we…”
    Journal Article
  7. A Survey of Learning Criteria Going Beyond the Usual Risk by Holland, Matthew J., Tanabe, Kazuki

    “…Virtually all machine learning tasks are characterized using some form of loss function, and “good performance” is typically stated in terms of a sufficiently…”
    Journal Article
  8. Criterion Collapse and Loss Distribution Control by Holland, Matthew J.

    Published 15-02-2024
    “…In this work, we consider the notion of "criterion collapse," in which optimization of one metric implies optimality in another, with a particular focus on…”
    Journal Article
  9. Forecasting in wind energy applications with site-adaptive Weibull estimation by Holland, Matthew J., Ikeda, Kazushi

    “…From optimal supply decisions to anticipatory control systems, wind-based energy applications rely heavily upon accurate, local, short-term forecasts of future…”
    Conference Proceeding
  10. Robust variance-regularized risk minimization with concomitant scaling by Holland, Matthew J.

    Published 27-01-2023
    “…Under losses which are potentially heavy-tailed, we consider the task of minimizing sums of the loss mean and standard deviation, without trying to accurately…”
    Journal Article
  11. Making Robust Generalizers Less Rigid with Soft Ascent-Descent by Holland, Matthew J., Hamada, Toma

    Published 07-08-2024
    “…While the traditional formulation of machine learning tasks is in terms of performance on average, in practice we are often interested in how well a trained…”
    Journal Article
  12. A Survey of Learning Criteria Going Beyond the Usual Risk by Holland, Matthew J., Tanabe, Kazuki

    Published 30-11-2023
    “…Journal of Artificial Intelligence Research, 78:781-821, 2023 Virtually all machine learning tasks are characterized using some form of loss function, and…”
    Journal Article
  13. Soft ascent-descent as a stable and flexible alternative to flooding by Holland, Matthew J., Nakatani, Kosuke

    Published 15-10-2023
    “…As a heuristic for improving test accuracy in classification, the "flooding" method proposed by Ishida et al. (2020) sets a threshold for the average surrogate…”
    Journal Article
  14. Flexible risk design using bi-directional dispersion by Holland, Matthew J.

    Published 27-03-2022
    “…Many novel notions of "risk" (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to…”
    Journal Article
  15. Robust learning with anytime-guaranteed feedback by Holland, Matthew J.

    Published 24-05-2021
    “…Proceedings of the AAAI Conference on Artificial Intelligence, 36(6):6918-6925, 2022 Under data distributions which may be heavy-tailed, many stochastic…”
    Journal Article
  16. Learning with risks based on M-location by Holland, Matthew J.

    Published 26-04-2021
    “…Machine Learning, 111:4679-4718, 2022 In this work, we study a new class of risks defined in terms of the location and deviation of the loss distribution,…”
    Journal Article
  17. Better scalability under potentially heavy-tailed feedback by Holland, Matthew J.

    Published 14-12-2020
    “…We study scalable alternatives to robust gradient descent (RGD) techniques that can be used when the losses and/or gradients can be heavy-tailed, though this…”
    Journal Article
  18. Making learning more transparent using conformalized performance prediction by Holland, Matthew J.

    Published 08-07-2020
    “…In this work, we study some novel applications of conformal inference techniques to the problem of providing machine learning procedures with more transparent,…”
    Journal Article
  19. Better scalability under potentially heavy-tailed gradients by Holland, Matthew J.

    Published 01-06-2020
    “…We study a scalable alternative to robust gradient descent (RGD) techniques that can be used when the gradients can be heavy-tailed, though this will be…”
    Journal Article
  20. Improved scalability under heavy tails, without strong convexity by Holland, Matthew J.

    Published 01-06-2020
    “…Real-world data is laden with outlying values. The challenge for machine learning is that the learner typically has no prior knowledge of whether the feedback…”
    Journal Article