Search Results - "Engin Erzin"

  1. Discriminative Analysis of Lip Motion Features for Speaker Identification and Speech-Reading by Cetingul, H.E., Yemez, Y., Erzin, E., Tekalp, A.M.
    Published in IEEE Transactions on Image Processing (October 2006)
    “…There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications…”
    Journal Article

  2. Multimodal speaker identification using an adaptive classifier cascade based on modality reliability by Erzin, E., Yemez, Y., Tekalp, A.M.
    Published in IEEE Transactions on Multimedia (October 2005)
    “…We present a multimodal open-set speaker identification system that integrates information coming from audio, face and lip motion modalities. For fusion of…”
    Journal Article

  3. A Diversity Combination Model Incorporating an Inward Bias for Interaural Time-Level Difference Cue Integration in Sound Lateralization by Mojtahedi, Sina, Erzin, Engin, Ungan, Pekcan
    Published in Applied Sciences (September 2020)
    “…A sound source with non-zero azimuth leads to interaural time and level differences (ITD and ILD). Studies on the hearing system imply that these cues are encoded in…”
    Journal Article

  4. Learn2Dance: Learning Statistical Music-to-Dance Mappings for Choreography Synthesis by Ofli, F., Erzin, E., Yemez, Y., Tekalp, A. M.
    Published in IEEE Transactions on Multimedia (June 2012)
    “…We propose a novel framework for learning many-to-many statistical mappings from musical measures to dance figures towards generating plausible music-driven…”
    Journal Article

  5. Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations by Turker, Bekir Berker, Yemez, Yucel, Sezgin, T. Metin, Erzin, Engin
    Published in IEEE Transactions on Affective Computing (October 2017)
    “…We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present…”
    Journal Article

  6. Emotion Dependent Domain Adaptation for Speech Driven Affective Facial Feature Synthesis by Sadiq, Rizwan, Erzin, Engin
    Published in IEEE Transactions on Affective Computing (July 2022)
    “…Although speech driven facial animation has been studied extensively in the literature, works focusing on the affective content of the speech are limited. This…”
    Journal Article

  7. On the importance of hidden bias and hidden entropy in representational efficiency of the Gaussian-Bipolar Restricted Boltzmann Machines by Isabekov, Altynbek, Erzin, Engin
    Published in Neural Networks (September 2018)
    “…In this paper, we analyze the role of hidden bias in representational efficiency of the Gaussian-Bipolar Restricted Boltzmann Machines (GBPRBMs), which are…”
    Journal Article

  8. Use of affect context in dyadic interactions for continuous emotion recognition by Fatima, Syeda Narjis, Erzin, Engin
    Published in Speech Communication (September 2021)
    “…Emotional dependencies play a crucial role in understanding complexities of dyadic interactions. Recent studies have shown that affect recognition tasks can…”
    Journal Article

  9. Vocal Tract Contour Tracking in rtMRI Using Deep Temporal Regression Network by Asadiabadi, Sasan, Erzin, Engin
    “…Recent advances in real-time Magnetic Resonance Imaging (rtMRI) provide an invaluable tool to study speech articulation. In this paper, we present an effective…”
    Journal Article

  10. Automatic Vocal Tract Landmark Tracking in rtMRI Using Fully Convolutional Networks and Kalman Filter by Asadiabadi, Sasan, Erzin, Engin
    “…Vocal tract (VT) contour detection in real time MRI is a pre-stage to many speech production related applications such as articulatory analysis and synthesis…”
    Conference Proceeding

  11. Use of Affective Visual Information for Summarization of Human-Centric Videos by Köprü, Berkay, Erzin, Engin
    Published in IEEE Transactions on Affective Computing (October 2023)
    “…The increasing volume of user-generated human-centric video content and its applications, such as video retrieval and browsing, require compact representations…”
    Journal Article

  12. Affective synthesis and animation of arm gestures from speech prosody by Bozkurt, Elif, Yemez, Yücel, Erzin, Engin
    Published in Speech Communication (May 2020)
    “…We investigate the use of continuous affect attributes, which are activation, valence and dominance, for speech-driven affective synthesis and animation of…”
    Journal Article

  13. Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents by Hussain, Nusrah, Erzin, Engin, Sezgin, T. Metin, Yemez, Yucel
    “…The ability to generate appropriate verbal and nonverbal backchannels by an agent during human-robot interaction greatly enhances the interaction experience…”
    Conference Proceeding

  14. Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures by Bozkurt, Elif, Yemez, Yücel, Erzin, Engin
    Published in Speech Communication (December 2016)
    “…We propose a speech-driven beat gesture synthesis and animation framework. We use hidden semi-Markov models for the joint analysis of speech and arm…”
    Journal Article

  15. Training Socially Engaging Robots: Modeling Backchannel Behaviors with Batch Reinforcement Learning by Hussain, Nusrah, Erzin, Engin, Sezgin, T. Metin, Yemez, Yucel
    Published in IEEE Transactions on Affective Computing (October 2022)
    “…A key aspect of social human-robot interaction is natural non-verbal communication. In this work, we train an agent with batch reinforcement learning to…”
    Journal Article

  16. Domain Adaptation for Food Intake Classification With Teacher/Student Learning by Turan, M. A. Tugtekin, Erzin, Engin
    Published in IEEE Transactions on Multimedia (2021)
    “…Automatic dietary monitoring (ADM) stands as a challenging application in wearable healthcare technologies. In this paper, we define an ADM to perform food…”
    Journal Article

  17. Use of affect based interaction classification for continuous emotion tracking by Khaki, Hossein, Erzin, Engin
    “…Natural and affective handshakes of two participants define the course of dyadic interaction. Affective states of the participants are expected to be…”
    Conference Proceeding

  18. Improving phoneme recognition of throat microphone speech recordings using transfer learning by Turan, M.A. Tuğtekin, Erzin, Engin
    Published in Speech Communication (May 2021)
    “…Throat microphones (TM) are a type of skin-attached non-acoustic sensors, which are robust to environmental noise but carry a lower signal bandwidth…”
    Journal Article

  19. AffectON: Incorporating Affect Into Dialog Generation by Buçinca, Zana, Yemez, Yucel, Erzin, Engin, Sezgin, T. Metin
    Published in IEEE Transactions on Affective Computing (January 2023)
    “…Due to its expressivity, natural language is paramount for explicit and implicit affective state communication among humans. The same linguistic inquiry (e.g.,…”
    Journal Article

  20. Affect recognition from lip articulations by Sadiq, Rizwan, Erzin, Engin
    “…Lips deliver visually active clues for speech articulation. Affective states define how humans articulate speech; hence, they also change articulation of lip…”
    Conference Proceeding