Search Results - "Ko, Jongwoo"
1. Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs
   Published in Ophthalmology science (Online), 01-06-2022: “…To develop and validate an automated deep learning (DL)-based artificial intelligence (AI) platform for diagnosing and grading cataracts using slit-lamp and…”
   Journal Article
2. Deep Gaussian process models for integrating multifidelity experiments with nonstationary relationships
   Published in IIE Transactions, 03-07-2022: “…The problem of integrating multifidelity data has been studied extensively, due to integrated analyses being able to provide better results than separately…”
   Journal Article
3. Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study
   Published in Ophthalmology science (Online), 01-06-2022: “…Purpose: To develop and validate an automated deep learning (DL)-based artificial intelligence (AI) platform for diagnosing and grading cataracts using slit-lamp…”
   Journal Article
4. Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models
   Published 27-11-2023: “…Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image…”
   Journal Article
5. CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition
   Published 10-02-2023: “…Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on…”
   Journal Article
6. Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks
   Published 18-10-2022: “…Translation has played a crucial role in improving the performance on multilingual tasks: (1) to generate the target language data from the source language…”
   Journal Article
7. A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise
   Published 14-06-2022: “…As deep neural networks can easily overfit noisy labels, robust training in the presence of noisy labels is becoming an important challenge in modern deep…”
   Journal Article
8. DistiLLM: Towards Streamlined Distillation for Large Language Models
   Published 06-02-2024: “…Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while…”
   Journal Article
9. Fine-Tuning Pre-trained Models for Robustness Under Noisy Labels
   Published 24-10-2023: “…The presence of noisy labels in a training dataset can significantly impact the performance of machine learning models. To tackle this issue, researchers have…”
   Journal Article
10. Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
    Published 09-10-2023: “…To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates…”
    Journal Article
11. SeRA: Self-Reviewing and Alignment of Large Language Models using Implicit Reward Margins
    Published 12-10-2024: “…Direct alignment algorithms (DAAs), such as direct preference optimization (DPO), have become popular alternatives for Reinforcement Learning from Human…”
    Journal Article
12. Beyond Correlation: The Impact of Human Uncertainty in Measuring the Effectiveness of Automatic Evaluation and LLM-as-a-Judge
    Published 02-10-2024: “…The effectiveness of automatic evaluation of generative models is typically measured by comparing it to human evaluation using correlation metrics. However,…”
    Journal Article
13. HESSO: Towards Automatic, Efficient, and User-Friendly Any Neural Network Training and Pruning
    Published 11-09-2024: “…Structured pruning is one of the most popular approaches to effectively compress the heavy deep neural networks (DNNs) into compact sub-networks while…”
    Journal Article
14. NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
    Published 16-10-2023: “…Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as…”
    Journal Article
15. Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective
    Published 02-02-2023: “…Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs). Among various KD…”
    Journal Article
16. Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network
    Published 29-06-2021: “…Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and…”
    Journal Article
17. FINE Samples for Learning with Noisy Labels
    Published 23-02-2021: “…Modern deep neural networks (DNNs) become frail when the datasets contain noisy (incorrect) class labels. Robust techniques in the presence of noisy labels can…”
    Journal Article