Search Results - "Jacobs, Sam"

  6. Adaptive neighbor connection for PRMs: A natural fit for heterogeneous environments and parallelism by Ekenna, Chinwe; Jacobs, Sam Ade; Thomas, Shawna; Amato, Nancy M.

    “…Probabilistic Roadmap Methods (PRMs) are widely used motion planning methods that sample robot configurations (nodes) and connect them to form a graph…”
    Conference Proceeding
  7. The anatomy of a distributed motion planning roadmap by Jacobs, Sam Ade; Amato, Nancy M.

    “…In this paper, we evaluate and compare the quality and structure of roadmaps constructed from parallelizing sampling-based motion planning algorithms against…”
    Conference Proceeding
  8. Parallelizing Training of Deep Generative Models on Massive Scientific Datasets by Jacobs, Sam Ade; Gaffney, Jim; Benson, Tom; Robinson, Peter; Peterson, Luc; Spears, Brian; Van Essen, Brian; Hysom, David; Yeom, Jae-Seung; Moon, Tim; Anirudh, Rushil; Thiagarajan, Jayaraman J.; Liu, Shusen; Bremer, Peer-Timo

    “…Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to…”
    Conference Proceeding
  9. On the weakly C-H···π hydrogen bonded complexes of sevoflurane and benzene by Dom, Johan J. J.; van der Veken, Benjamin J.; Michielsen, Bart; Jacobs, Sam; Xue, Zhifeng; Hesse, Susanne; Loritz, Hans-Martin; Suhm, Martin A.; Herrebout, Wouter A.

    Published in Physical Chemistry Chemical Physics: PCCP (21-08-2011)
    “…A vibrational assignment of the anaesthetic sevoflurane, (CF₃)₂CHOCH₂F, is proposed and its interaction with the aromatic model compound benzene is…”
    Journal Article
  10. Blind RRT: A probabilistically complete distributed RRT by Rodriguez, Cesar; Denny, Jory; Jacobs, Sam Ade; Thomas, Shawna; Amato, Nancy M.

    “…Rapidly-Exploring Random Trees (RRTs) have been successful at finding feasible solutions for many types of problems. With motion planning becoming more…”
    Conference Proceeding
  11. Learning Interpretable Models Through Multi-Objective Neural Architecture Search by Carmichael, Zachariah; Moon, Tim; Jacobs, Sam Ade

    Published 16-12-2021
    “…Monumental advances in deep learning have led to unprecedented achievements across various domains. While the performance of deep neural networks is…”
    Journal Article
  12. Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer by Yao, Jinghan; Jacobs, Sam Ade; Tanaka, Masahiro; Ruwase, Olatunji; Shafi, Aamir; Subramoni, Hari; Panda, Dhabaleswar K.

    Published 29-08-2024
    “…Large Language Models (LLMs) with long context capabilities are integral to complex tasks in natural language processing and computational biology, such as…”
    Journal Article
  13. Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed Training by Lian, Xinyu; Jacobs, Sam Ade; Kurilenko, Lev; Tanaka, Masahiro; Bekman, Stas; Ruwase, Olatunji; Zhang, Minjia

    Published 26-06-2024
    “…Existing checkpointing approaches seem ill-suited for distributed training even though hardware limitations make model parallelism, i.e., sharding model state…”
    Journal Article
  14. Large-Scale Industrial Alarm Reduction and Critical Events Mining Using Graph Analytics on Spark by Jacobs, Sam Ade; Dagnino, Aldo

    “…In current industrial practice, thousands of industrial alarms generating millions of alarm events, are built into digital control systems typically found in…”
    Conference Proceeding
  15. DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models by Jacobs, Sam Ade; Tanaka, Masahiro; Zhang, Chengming; Zhang, Minjia; Song, Shuaiwen Leon; Rajbhandari, Samyam; He, Yuxiong

    Published 25-09-2023
    “…Computation in a typical Transformer-based large language model (LLM) can be characterized by batch size, hidden dimension, number of layers, and sequence…”
    Journal Article
  16. System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models by Jacobs, Sam Ade; Tanaka, Masahiro; Zhang, Chengming; Zhang, Minjia; Aminabadi, Reza Yazdani; Song, Shuaiwen Leon; Rajbhandari, Samyam; He, Yuxiong

    “…Long sequences are ubiquitous in NLP tasks such as document summarization, machine translation, and dialogue modeling [1]-[9]. Traditional approaches to…”
    Conference Proceeding
  17. Local randomization in neighbor selection improves PRM roadmap quality by McMahon, T.; Jacobs, S.; Boyd, B.; Tapia, L.; Amato, N. M.

    “…Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations…”
    Conference Proceeding
  18. ZeRO++: Extremely Efficient Collective Communication for Giant Model Training by Wang, Guanhua; Qin, Heyang; Jacobs, Sam Ade; Holmes, Connor; Rajbhandari, Samyam; Ruwase, Olatunji; Yan, Feng; Yang, Lei; He, Yuxiong

    Published 16-06-2023
    “…Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPUs clusters due to its ease of use, efficiency, and…”
    Journal Article
  19. Parallelizing Graph Neural Networks via Matrix Compaction for Edge-Conditioned Networks by Zaman, Shehtab; Moon, Tim; Benson, Tom; Jacobs, Sam Ade; Chiu, Kenneth; Van Essen, Brian

    “…Graph neural networks (GNNs) are a powerful approach for machine learning on graph datasets. Such datasets often consist of millions of modestly-sized graphs,…”
    Conference Proceeding
  20. SUPER: SUb-Graph Parallelism for TransformERs by Jain, Arpan; Moon, Tim; Benson, Tom; Subramoni, Hari; Jacobs, Sam Ade; Panda, Dhabaleswar K.; Van Essen, Brian

    “…Transformer models have revolutionized the field of Natural Language Processing (NLP) and they achieve state-of-the-art performance in applications like…”
    Conference Proceeding