Search Results - "Irie, Kazuki"

  1.

    Furry protein suppresses nuclear localization of yes-associated protein (YAP) by activating NDR kinase and binding to YAP by Irie, Kazuki, Nagai, Tomoaki, Mizuno, Kensaku

    Published in The Journal of Biological Chemistry (06-03-2020)
    “…The Hippo signaling pathway suppresses cell proliferation and tumorigenesis. In the canonical Hippo pathway, large tumor suppressor kinases 1/2 (LATS1/2)…”
    Journal Article
  2.

    A Comparison of Transformer and LSTM Encoder Decoder Models for ASR by Zeyer, Albert, Bahar, Parnia, Irie, Kazuki, Schlüter, Ralf, Ney, Hermann

    “…We present competitive results using a Transformer encoder-decoder-attention model for end-to-end speech recognition needing less training time compared to a…”
    Conference Proceeding
  3.

    The RWTH ASR System for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment by Zhou, Wei, Michel, Wilfried, Irie, Kazuki, Kitza, Markus, Schlüter, Ralf, Ney, Hermann

    “…We present a complete training pipeline to build a state-of-the-art hybrid HMM-based ASR system on the 2nd release of the TED-LIUM corpus. Data augmentation…”
    Conference Proceeding
  4.

    RADMM: Recurrent Adaptive Mixture Model with Applications to Domain Robust Language Modeling by Irie, Kazuki, Kumar, Shankar, Nirschl, Michael, Liao, Hank

    “…We present a new architecture and a training strategy for an adaptive mixture of experts with applications to domain robust language modeling. The proposed…”
    Conference Proceeding
  5.

    How Much Self-Attention Do We Need? Trading Attention for Feed-Forward Layers by Irie, Kazuki, Gerstenberger, Alexander, Schlüter, Ralf, Ney, Hermann

    “…We propose simple architectural modifications in the standard Transformer with the goal to reduce its total state size (defined as the number of self-attention…”
    Conference Proceeding
  6.

    Prediction of LSTM-RNN Full Context States as a Subtask for N-Gram Feedforward Language Models by Irie, Kazuki, Lei, Zhihong, Schlüter, Ralf, Ney, Hermann

    “…Long short-term memory (LSTM) recurrent neural network language models compress the full context of variable lengths into a fixed size vector. In this work, we…”
    Conference Proceeding
  7.

    Domain Robust, Fast, and Compact Neural Language Models by Gerstenberger, Alexander, Irie, Kazuki, Golik, Pavel, Beck, Eugen, Ney, Hermann

    “…Despite advances in neural language modeling, obtaining a good model on a large scale multi-domain dataset still remains a difficult task. We propose training…”
    Conference Proceeding
  8.

    Accelerating Neural Self-Improvement via Bootstrapping by Irie, Kazuki, Schmidhuber, Jürgen

    Published 02-05-2023
    “…Few-shot learning with sequence-processing neural networks (NNs) has recently attracted a new wave of attention in the context of large language models. In the…”
    Journal Article
  9.

    Neural networks that overcome classic challenges through practice by Irie, Kazuki, Lake, Brenden M

    Published 14-10-2024
    “…Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human…”
    Journal Article
  10.

    Learning to Control Rapidly Changing Synaptic Connections: An Alternative Type of Memory in Sequence Processing Artificial Neural Networks by Irie, Kazuki, Schmidhuber, Jürgen

    Published 17-11-2022
    “…Short-term memory in standard, general-purpose, sequence-processing recurrent neural networks (RNNs) is stored as activations of nodes or "neurons."…”
    Journal Article
  11.

    Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules by Irie, Kazuki, Schmidhuber, Jürgen

    Published 07-10-2022
    “…Work on fast weight programmers has demonstrated the effectiveness of key/value outer product-based learning rules for sequentially generating a weight matrix…”
    Journal Article
  12.

    Training Language Models for Long-Span Cross-Sentence Evaluation by Irie, Kazuki, Zeyer, Albert, Schlüter, Ralf, Ney, Hermann

    “…While recurrent neural networks can motivate cross-sentence language modeling and its application to automatic speech recognition (ASR), corresponding…”
    Conference Proceeding
  13.

    Investigation on log-linear interpolation of multi-domain neural network language model by Tüske, Zoltán, Irie, Kazuki, Schlüter, Ralf, Ney, Hermann

    “…Inspired by the success of multi-task training in acoustic modeling, this paper investigates a new architecture for a multi-domain neural network based…”
    Conference Proceeding; Journal Article
  14.

    Training and Generating Neural Networks in Compressed Weight Space by Irie, Kazuki, Schmidhuber, Jürgen

    Published 31-12-2021
    “…The inputs and/or outputs of some neural nets are weight matrices of other neural nets. Indirect encodings or end-to-end compression of weight matrices could…”
    Journal Article
  15.

    Investigations on byte-level convolutional neural networks for language modeling in low resource speech recognition by Irie, Kazuki, Golik, Pavel, Schlüter, Ralf, Ney, Hermann

    “…In this paper, we present an investigation on technical details of the byte-level convolutional layer which replaces the conventional linear word projection…”
    Conference Proceeding
  16.

    Automating Continual Learning by Irie, Kazuki, Csordás, Róbert, Schmidhuber, Jürgen

    Published 30-11-2023
    “…General-purpose learning systems should improve themselves in open-ended fashion in ever-changing environments. Conventional learning algorithms for neural…”
    Journal Article
  17.

    Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions by Irie, Kazuki, Csordás, Róbert, Schmidhuber, Jürgen

    Published 24-10-2023
    “…Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision…”
    Journal Article
  18.

    Approximating Two-Layer Feedforward Networks for Efficient Transformers by Csordás, Róbert, Irie, Kazuki, Schmidhuber, Jürgen

    Published 16-10-2023
    “…How to reduce compute and memory requirements of neural networks (NNs) without sacrificing performance? Many recent works use sparse Mixtures of Experts (MoEs)…”
    Journal Article
  19.

    Exploring the Promise and Limits of Real-Time Recurrent Learning by Irie, Kazuki, Gopalakrishnan, Anand, Schmidhuber, Jürgen

    Published 30-05-2023
    “…Real-time recurrent learning (RTRL) for sequence-processing recurrent neural networks (RNNs) offers certain conceptual advantages over backpropagation through…”
    Journal Article
  20.

    Self-Organising Neural Discrete Representation Learning à la Kohonen by Irie, Kazuki, Csordás, Róbert, Schmidhuber, Jürgen

    Published 15-02-2023
    “…Unsupervised learning of discrete representations in neural networks (NNs) from continuous ones is essential for many modern applications. Vector Quantisation…”
    Journal Article