Search Results - "Dahl, George E"

  1.

    Deep Neural Nets as a Method for Quantitative Structure–Activity Relationships by Ma, Junshui, Sheridan, Robert P, Liaw, Andy, Dahl, George E, Svetnik, Vladimir

    “…Neural networks were widely used for quantitative structure–activity relationships (QSAR) in the 1990s. Because of various practical issues (e.g., slow on…”
    Journal Article
  2.

    Prediction Errors of Molecular Machine Learning Models Lower than Hybrid DFT Error by Faber, Felix A, Hutchison, Luke, Huang, Bing, Gilmer, Justin, Schoenholz, Samuel S, Dahl, George E, Vinyals, Oriol, Kearnes, Steven, Riley, Patrick F, von Lilienfeld, O. Anatole

    Published in Journal of chemical theory and computation (14-11-2017)
    “…We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic…”
    Journal Article
  3.

    Machine learning guided aptamer refinement and discovery by Bashir, Ali, Yang, Qin, Wang, Jinpeng, Hoyer, Stephan, Chou, Wenchuan, McLean, Cory, Davis, Geoff, Gong, Qiang, Armstrong, Zan, Jang, Junghoon, Kang, Hui, Pawlosky, Annalisa, Scott, Alexander, Dahl, George E., Berndl, Marc, Dimon, Michelle, Ferguson, B. Scott

    Published in Nature communications (22-04-2021)
    “…Aptamers are single-stranded nucleic acid ligands that bind to target molecules with high affinity and specificity. They are typically discovered by searching…”
    Journal Article
  4.

    Acoustic Modeling Using Deep Belief Networks by Mohamed, Abdel-rahman, Dahl, George E., Hinton, Geoffrey

    “…Gaussian mixture models are currently the dominant technique for modeling the emission distribution of hidden Markov models for speech recognition. We show…”
    Journal Article
  5.

    Artificial Intelligence-Based Breast Cancer Nodal Metastasis Detection: Insights Into the Black Box for Pathologists by Liu, Yun, Kohlberger, Timo, Norouzi, Mohammad, Dahl, George E, Smith, Jenny L, Mohtashamian, Arash, Olson, Niels, Peng, Lily H, Hipp, Jason D, Stumpe, Martin C

    “…Nodal metastasis of a primary tumor influences therapy decisions for a variety of cancers. Histologic identification of tumor cells in lymph nodes can be…”
    Journal Article
  6.

    Large-scale malware classification using random projections and neural networks by Dahl, George E., Stokes, Jack W., Deng, Li, Yu, Dong

    “…Automatically generated malware is a significant problem for computer users. Analysts are able to manually investigate a small number of unknown files, but the…”
    Conference Proceeding
  7.
  8.

    Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition by Dahl, G. E., Yu, Dong, Deng, Li, Acero, A.

    “…We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for…”
    Journal Article
  9.

    Improving deep neural networks for LVCSR using rectified linear units and dropout by Dahl, George E., Sainath, Tara N., Hinton, Geoffrey E.

    “…Recently, pre-trained deep neural networks (DNNs) have outperformed traditional acoustic models based on Gaussian mixture models (GMMs) on a variety of large…”
    Conference Proceeding
  10.

    Large vocabulary continuous speech recognition with context-dependent DBN-HMMs by Dahl, George E., Yu, Dong, Deng, Li, Acero, Alex

    “…The context-independent deep belief network (DBN) hidden Markov model (HMM) hybrid architecture has recently achieved promising results for phone recognition…”
    Conference Proceeding
  11.
  12.
  13.

    Improvements to Deep Convolutional Neural Networks for LVCSR by Sainath, Tara N., Kingsbury, Brian, Mohamed, Abdel-rahman, Dahl, George E., Saon, George, Soltau, Hagen, Beran, Tomas, Aravkin, Aleksandr Y., Ramabhadran, Bhuvana

    “…Deep Convolutional Neural Networks (CNNs) are more powerful than Deep Neural Networks (DNN), as they are able to better reduce spectral variation in the input…”
    Conference Proceeding
  14.

    What Will it Take to Fix Benchmarking in Natural Language Understanding? by Bowman, Samuel R, Dahl, George E

    Published 05-04-2021
    “…Evaluation for many natural language understanding (NLU) tasks is broken: Unreliable and biased systems score so highly on standard benchmarks that there is…”
    Journal Article
  15.

    Predicting the utility of search spaces for black-box optimization: a simple, budget-aware approach by Ariafar, Setareh, Gilmer, Justin, Nado, Zachary, Snoek, Jasper, Jenatton, Rodolphe, Dahl, George E

    Published 15-12-2021
    “…Black box optimization requires specifying a search space to explore for solutions, e.g. a d-dimensional compact space, and this choice is critical for getting…”
    Journal Article
  16.

    Pre-training helps Bayesian optimization too by Wang, Zi, Dahl, George E, Swersky, Kevin, Lee, Chansoo, Mariet, Zelda, Nado, Zachary, Gilmer, Justin, Snoek, Jasper, Ghahramani, Zoubin

    Published 07-07-2022
    “…Bayesian optimization (BO) has become a popular strategy for global optimization of many expensive real-world functions. Contrary to a common belief that BO is…”
    Journal Article
  17.

    Faster Neural Network Training with Data Echoing by Choi, Dami, Passos, Alexandre, Shallue, Christopher J, Dahl, George E

    Published 11-07-2019
    “…In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of…”
    Journal Article
  18.

    A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes by Nado, Zachary, Gilmer, Justin M, Shallue, Christopher J, Anil, Rohan, Dahl, George E

    Published 12-02-2021
    “…Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise…”
    Journal Article
  19.

    Pre-trained Gaussian Processes for Bayesian Optimization by Wang, Zi, Dahl, George E, Swersky, Kevin, Lee, Chansoo, Nado, Zachary, Gilmer, Justin, Snoek, Jasper, Ghahramani, Zoubin

    Published 16-09-2021
    Published in Journal of Machine Learning Research, 25(212):1-83, 2024 (http://jmlr.org/papers/v25/23-0269.html)
    “…Bayesian optimization (BO) has become a popular strategy…”
    Journal Article
  20.

    Adaptive Gradient Methods at the Edge of Stability by Cohen, Jeremy M, Ghorbani, Behrooz, Krishnan, Shankar, Agarwal, Naman, Medapati, Sourabh, Badura, Michal, Suo, Daniel, Cardoze, David, Nado, Zachary, Dahl, George E, Gilmer, Justin

    Published 29-07-2022
    “…Very little is known about the training dynamics of adaptive gradient methods like Adam in deep learning. In this paper, we shed light on the behavior of these…”
    Journal Article