Search Results - "Irie, Kazuki"
1
Furry protein suppresses nuclear localization of yes-associated protein (YAP) by activating NDR kinase and binding to YAP
Published in The Journal of biological chemistry (06-03-2020)“…The Hippo signaling pathway suppresses cell proliferation and tumorigenesis. In the canonical Hippo pathway, large tumor suppressor kinases 1/2 (LATS1/2)…”
Journal Article
2
A Comparison of Transformer and LSTM Encoder Decoder Models for ASR
Published in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) (01-12-2019)“…We present competitive results using a Transformer encoder-decoder-attention model for end-to-end speech recognition needing less training time compared to a…”
Conference Proceeding
3
The RWTH ASR System for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment
Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-05-2020)“…We present a complete training pipeline to build a state-of-the-art hybrid HMM-based ASR system on the 2nd release of the TED-LIUM corpus. Data augmentation…”
Conference Proceeding
4
RADMM: Recurrent Adaptive Mixture Model with Applications to Domain Robust Language Modeling
Published in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-04-2018)“…We present a new architecture and a training strategy for an adaptive mixture of experts with applications to domain robust language modeling. The proposed…”
Conference Proceeding
5
How Much Self-Attention Do We Need? Trading Attention for Feed-Forward Layers
Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-05-2020)“…We propose simple architectural modifications in the standard Transformer with the goal to reduce its total state size (defined as the number of self-attention…”
Conference Proceeding
6
Prediction of LSTM-RNN Full Context States as a Subtask for N-Gram Feedforward Language Models
Published in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-04-2018)“…Long short-term memory (LSTM) recurrent neural network language models compress the full context of variable lengths into a fixed size vector. In this work, we…”
Conference Proceeding
7
Domain Robust, Fast, and Compact Neural Language Models
Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-05-2020)“…Despite advances in neural language modeling, obtaining a good model on a large scale multi-domain dataset still remains a difficult task. We propose training…”
Conference Proceeding
8
Accelerating Neural Self-Improvement via Bootstrapping
Published 02-05-2023“…Few-shot learning with sequence-processing neural networks (NNs) has recently attracted a new wave of attention in the context of large language models. In the…”
Journal Article
9
Neural networks that overcome classic challenges through practice
Published 14-10-2024“…Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human…”
Journal Article
10
Learning to Control Rapidly Changing Synaptic Connections: An Alternative Type of Memory in Sequence Processing Artificial Neural Networks
Published 17-11-2022“…Short-term memory in standard, general-purpose, sequence-processing recurrent neural networks (RNNs) is stored as activations of nodes or "neurons."…”
Journal Article
11
Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules
Published 07-10-2022“…Work on fast weight programmers has demonstrated the effectiveness of key/value outer product-based learning rules for sequentially generating a weight matrix…”
Journal Article
12
Training Language Models for Long-Span Cross-Sentence Evaluation
Published in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) (01-12-2019)“…While recurrent neural networks can motivate cross-sentence language modeling and its application to automatic speech recognition (ASR), corresponding…”
Conference Proceeding
13
Investigation on log-linear interpolation of multi-domain neural network language model
Published in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-03-2016)“…Inspired by the success of multi-task training in acoustic modeling, this paper investigates a new architecture for a multi-domain neural network based…”
Conference Proceeding; Journal Article
14
Training and Generating Neural Networks in Compressed Weight Space
Published 31-12-2021“…The inputs and/or outputs of some neural nets are weight matrices of other neural nets. Indirect encodings or end-to-end compression of weight matrices could…”
Journal Article
15
Investigations on byte-level convolutional neural networks for language modeling in low resource speech recognition
Published in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (01-03-2017)“…In this paper, we present an investigation on technical details of the byte-level convolutional layer which replaces the conventional linear word projection…”
Conference Proceeding
16
Automating Continual Learning
Published 30-11-2023“…General-purpose learning systems should improve themselves in open-ended fashion in ever-changing environments. Conventional learning algorithms for neural…”
Journal Article
17
Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions
Published 24-10-2023“…Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision…”
Journal Article
18
Approximating Two-Layer Feedforward Networks for Efficient Transformers
Published 16-10-2023“…How to reduce compute and memory requirements of neural networks (NNs) without sacrificing performance? Many recent works use sparse Mixtures of Experts (MoEs)…”
Journal Article
19
Exploring the Promise and Limits of Real-Time Recurrent Learning
Published 30-05-2023“…Real-time recurrent learning (RTRL) for sequence-processing recurrent neural networks (RNNs) offers certain conceptual advantages over backpropagation through…”
Journal Article
20
Self-Organising Neural Discrete Representation Learning à la Kohonen
Published 15-02-2023“…Unsupervised learning of discrete representations in neural networks (NNs) from continuous ones is essential for many modern applications. Vector Quantisation…”
Journal Article