Search Results - "Seungcheol Baek"
1
Size-Aware Cache Management for Compressed Cache Architectures
Published in IEEE Transactions on Computers (01-08-2015): “…A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression…”
Journal Article
2
An energy- and performance-aware DRAM cache architecture for hybrid DRAM/PCM main memory systems
Published in 2011 IEEE 29th International Conference on Computer Design (ICCD) (01-10-2011): “…The last few years have witnessed the emergence of a promising new memory technology. Phase-Change Memory (PCM) is increasingly viewed as an attractive…”
Conference Proceeding
3
ECM: Effective Capacity Maximizer for high-performance compressed caching
Published in 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA) (01-02-2013): “…Compressed Last-Level Cache (LLC) architectures have been proposed to enhance system performance by efficiently increasing the effective capacity of the cache,…”
Conference Proceeding
4
A Compression-Based Hybrid MLC/SLC Management Technique for Phase-Change Memory Systems
Published in 2012 IEEE Computer Society Annual Symposium on VLSI (01-08-2012): “…The storage density of PCM has been demonstrated to double through the employment of Multi-Level Cell (MLC) PCM arrays. However, this increase in capacity…”
Conference Proceeding
5
6
Characterization of Three Extracellular β-Glucosidases Produced by a Fungal Isolate Aspergillus sp. YDJ14 and Their Hydrolyzing Activity for a Flavone Glycoside
Published in Journal of Microbiology and Biotechnology (01-05-2018): “…A cellulolytic fungus, YDJ14, was isolated from compost and identified as an Aspergillus sp. strain. Three extracellular β-glucosidases, BGL-A1, BGL-A2, and…”
Journal Article
7
Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
Published 09-11-2023: “…Large Language Models (LLMs) are proficient in natural language processing tasks, but their deployment is often restricted by extensive parameter sizes and…”
Journal Article
8
IANUS: Integrated Accelerator based on NPU-PIM Unified Memory System
Published 19-10-2024 (ASPLOS 2024): “…Accelerating end-to-end inference of transformer-based large language models (LLMs) is a critical component of AI services in datacenters. However,…”
Journal Article