Search Results - "Pruksachatkun, Yada"

  • Showing 1 - 11 of 11 results
  1.

    BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation by Dhamala, Jwala, Sun, Tony, Kumar, Varun, Krishna, Satyapriya, Pruksachatkun, Yada, Chang, Kai-Wei, Gupta, Rahul

    Published 27-01-2021
    “…Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While…”
    Journal Article
  2.

    Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction by White, Julia, Raghuvanshi, Arushi, Pruksachatkun, Yada

    Published 06-06-2023
    “…Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found…”
    Journal Article
  3.

    On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations by Cao, Yang Trista, Pruksachatkun, Yada, Chang, Kai-Wei, Gupta, Rahul, Kumar, Varun, Dhamala, Jwala, Galstyan, Aram

    Published 25-03-2022
    “…Multiple metrics have been introduced to measure fairness in various natural language processing tasks. These metrics can be roughly categorized into…”
    Journal Article
  4.

    Measuring Fairness of Text Classifiers via Prediction Sensitivity by Krishna, Satyapriya, Gupta, Rahul, Verma, Apurv, Dhamala, Jwala, Pruksachatkun, Yada, Chang, Kai-Wei

    Published 16-03-2022
    “…With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions. Although various…”
    Journal Article
  5.

    Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification by Pruksachatkun, Yada, Krishna, Satyapriya, Dhamala, Jwala, Gupta, Rahul, Chang, Kai-Wei

    Published 20-06-2021
    “…Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or…”
    Journal Article
  6.

    CLIP: A Dataset for Extracting Action Items for Physicians from Hospital Discharge Notes by Mullenbach, James, Pruksachatkun, Yada, Adler, Sean, Seale, Jennifer, Swartz, Jordan, McKelvey, T. Greg, Dai, Hui, Yang, Yi, Sontag, David

    Published 04-06-2021
    “…Continuity of care is crucial to ensuring positive health outcomes for patients discharged from an inpatient hospital setting, and improved information sharing…”
    Journal Article
  7.

    Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal by Gupta, Umang, Dhamala, Jwala, Kumar, Varun, Verma, Apurv, Pruksachatkun, Yada, Krishna, Satyapriya, Gupta, Rahul, Chang, Kai-Wei, Ver Steeg, Greg, Galstyan, Aram

    Published 23-03-2022
    “…Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in…”
    Journal Article
  8.

    English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too by Phang, Jason, Calixto, Iacer, Htut, Phu Mon, Pruksachatkun, Yada, Liu, Haokun, Vania, Clara, Kann, Katharina, Bowman, Samuel R

    Published 26-05-2020
    “…Intermediate-task training---fine-tuning a pretrained model on an intermediate task before fine-tuning again on the target task---often improves model…”
    Journal Article
  9.

    jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models by Pruksachatkun, Yada, Yeres, Phil, Liu, Haokun, Phang, Jason, Htut, Phu Mon, Wang, Alex, Tenney, Ian, Bowman, Samuel R

    Published 04-03-2020
    “…We introduce jiant, an open source toolkit for conducting multitask and transfer learning experiments on English NLU tasks. jiant enables modular and…”
    Journal Article
  10.

    Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work? by Pruksachatkun, Yada, Phang, Jason, Liu, Haokun, Htut, Phu Mon, Zhang, Xiaoyi, Pang, Richard Yuanzhe, Vania, Clara, Kann, Katharina, Bowman, Samuel R

    Published 01-05-2020
    “…While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training…”
    Journal Article
  11.

    SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems by Wang, Alex, Pruksachatkun, Yada, Nangia, Nikita, Singh, Amanpreet, Michael, Julian, Hill, Felix, Levy, Omer, Bowman, Samuel R

    Published 01-05-2019
    “…In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language…”
    Journal Article