Search Results - "Ko, Jongwoo"

  • Showing 1–17 of 17 results
  1.

    Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs by Son, Ki Young, Ko, Jongwoo, Kim, Eunseok, Lee, Si Young, Kim, Min-Ji, Han, Jisang, Shin, Eunhae, Chung, Tae-Young, Lim, Dong Hui

    Published in Ophthalmology science (Online) (01-06-2022)
    “…To develop and validate an automated deep learning (DL)-based artificial intelligence (AI) platform for diagnosing and grading cataracts using slit-lamp and…”
    Journal Article
  2.

    Deep Gaussian process models for integrating multifidelity experiments with nonstationary relationships by Ko, Jongwoo, Kim, Heeyoung

    Published in IISE Transactions (03-07-2022)
    “…The problem of integrating multifidelity data has been studied extensively, due to integrated analyses being able to provide better results than separately…”
    Journal Article
  3.

    Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study by Son, Ki Young, Ko, Jongwoo, Kim, Eunseok, Lee, Si Young, Kim, Min-Ji, Han, Jisang, Shin, Eunhae, Chung, Tae-Young, Lim, Dong Hui

    Published in Ophthalmology science (Online) (01-06-2022)
    “…PurposeTo develop and validate an automated deep learning (DL)-based artificial intelligence (AI) platform for diagnosing and grading cataracts using slit-lamp…”
    Journal Article
  4.

    Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models by Yang, Yongjin, Ko, Jongwoo, Yun, Se-Young

    Published 27-11-2023
    “…Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image…”
    Journal Article
  5.

    CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition by Ahn, Sumyeong, Ko, Jongwoo, Yun, Se-Young

    Published 10-02-2023
    “…Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on…”
    Journal Article
  6.

    Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks by Oh, Jaehoon, Ko, Jongwoo, Yun, Se-Young

    Published 18-10-2022
    “…Translation has played a crucial role in improving the performance on multilingual tasks: (1) to generate the target language data from the source language…”
    Journal Article
  7.

    A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise by Ko, Jongwoo, Yi, Bongsoo, Yun, Se-Young

    Published 14-06-2022
    “…As deep neural networks can easily overfit noisy labels, robust training in the presence of noisy labels is becoming an important challenge in modern deep…”
    Journal Article
  8.

    DistiLLM: Towards Streamlined Distillation for Large Language Models by Ko, Jongwoo, Kim, Sungnyun, Chen, Tianyi, Yun, Se-Young

    Published 06-02-2024
    “…Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while…”
    Journal Article
  9.

    Fine-tuning Pre-trained Models for Robustness Under Noisy Labels by Ahn, Sumyeong, Kim, Sihyeon, Ko, Jongwoo, Yun, Se-Young

    Published 24-10-2023
    “…The presence of noisy labels in a training dataset can significantly impact the performance of machine learning models. To tackle this issue, researchers have…”
    Journal Article
  10.

    Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding by Bae, Sangmin, Ko, Jongwoo, Song, Hwanjun, Yun, Se-Young

    Published 09-10-2023
    “…To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates…”
    Journal Article
  11.

    SeRA: Self-Reviewing and Alignment of Large Language Models using Implicit Reward Margins by Ko, Jongwoo, Dingliwal, Saket, Ganesh, Bhavana, Sengupta, Sailik, Bodapati, Sravan, Galstyan, Aram

    Published 12-10-2024
    “…Direct alignment algorithms (DAAs), such as direct preference optimization (DPO), have become popular alternatives for Reinforcement Learning from Human…”
    Journal Article
  12.

    Beyond correlation: The impact of human uncertainty in measuring the effectiveness of automatic evaluation and LLM-as-a-judge by Elangovan, Aparna, Ko, Jongwoo, Xu, Lei, Elyasi, Mahsa, Liu, Ling, Bodapati, Sravan, Roth, Dan

    Published 02-10-2024
    “…The effectiveness of automatic evaluation of generative models is typically measured by comparing it to human evaluation using correlation metrics. However,…”
    Journal Article
  13.

    HESSO: Towards Automatic Efficient and User Friendly Any Neural Network Training and Pruning by Chen, Tianyi, Qu, Xiaoyi, Aponte, David, Banbury, Colby, Ko, Jongwoo, Ding, Tianyu, Ma, Yong, Lyapunov, Vladimir, Zharkov, Ilya, Liang, Luming

    Published 11-09-2024
    “…Structured pruning is one of the most popular approaches to effectively compress the heavy deep neural networks (DNNs) into compact sub-networks while…”
    Journal Article
  14.

    NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models by Ko, Jongwoo, Park, Seungjoon, Kim, Yujin, Ahn, Sumyeong, Chang, Du-Seong, Ahn, Euijai, Yun, Se-Young

    Published 16-10-2023
    “…Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as…”
    Journal Article
  15.

    Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective by Ko, Jongwoo, Park, Seungjoon, Jeong, Minchan, Hong, Sukjin, Ahn, Euijai, Chang, Du-Seong, Yun, Se-Young

    Published 02-02-2023
    “…Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs). Among various KD…”
    Journal Article
  16.

    Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network by Bae, Sangmin, Kim, Sungnyun, Ko, Jongwoo, Lee, Gihun, Noh, Seungjong, Yun, Se-Young

    Published 29-06-2021
    “…Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and…”
    Journal Article
  17.

    FINE Samples for Learning with Noisy Labels by Kim, Taehyeon, Ko, Jongwoo, Cho, Sangwook, Choi, Jinhwan, Yun, Se-Young

    Published 23-02-2021
    “…Modern deep neural networks (DNNs) become frail when the datasets contain noisy (incorrect) class labels. Robust techniques in the presence of noisy labels can…”
    Journal Article