Search Results - "Kanehira, Atsushi"

  1. ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published in IEEE Access (2023)
     “…This paper introduces a novel method for translating natural-language instructions into executable robot actions using OpenAI's ChatGPT in a few-shot setting…”
     Journal Article

  2. Applying learning-from-observation to household service robots: three task common-sense formulations by Ikeuchi, Katsushi; Takamatsu, Jun; Sasabuchi, Kazuhiro; Wake, Naoki; Kanehira, Atsushi
     Published in Frontiers in Computer Science (Lausanne) (31-07-2024)
     “…Utilizing a robot in a new application requires the robot to be programmed each time. To reduce such programming efforts, we have been developing…”
     Journal Article

  3. Constraint-Aware Policy for Compliant Manipulation by Saito, Daichi; Sasabuchi, Kazuhiro; Wake, Naoki; Kanehira, Atsushi; Takamatsu, Jun; Koike, Hideki; Ikeuchi, Katsushi
     Published in Robotics (Basel) (01-01-2024)
     “…Robot manipulation in a physically constrained environment requires compliant manipulation. Compliant manipulation is a manipulation skill to adjust hand…”
     Journal Article

  4. Learning to Explain With Complemental Examples by Kanehira, Atsushi; Harada, Tatsuya
     “…This paper addresses the generation of explanations with visual examples. Given an input sample, we build a system that not only classifies it to a specific…”
     Conference Proceeding

  5. Multi-label Ranking from Positive and Unlabeled Data by Kanehira, Atsushi; Harada, Tatsuya
     “…In this paper, we specifically examine the training of a multi-label classifier from data with incompletely assigned labels. This problem is fundamentally…”
     Conference Proceeding

  6. Viewpoint-Aware Video Summarization by Kanehira, Atsushi; Van Gool, Luc; Ushiku, Yoshitaka; Harada, Tatsuya
     “…This paper introduces a novel variant of video summarization, namely building a summary that depends on the particular aspect of a video the viewer focuses on…”
     Conference Proceeding

  7. GPT-4V(ision) for Robotics: Multimodal Task Planning From Human Demonstration by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published in IEEE Robotics and Automation Letters (01-11-2024)
     “…We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation…”
     Journal Article

  8. Multimodal Explanations by Predicting Counterfactuality in Videos by Kanehira, Atsushi; Takemoto, Kentaro; Inayoshi, Sho; Harada, Tatsuya
     “…This study addresses generating counterfactual explanations with multimodal information. Our goal is not only to classify a video into a specific category, but…”
     Conference Proceeding

  9. Recognizing Activities of Daily Living with a Wrist-Mounted Camera by Ohnishi, Katsunori; Kanehira, Atsushi; Kanezaki, Asako; Harada, Tatsuya
     “…We present a novel dataset and a novel algorithm for recognizing activities of daily living (ADL) from a first-person wearable camera. Handled objects are…”
     Conference Proceeding

  10. Hierarchical Lovász Embeddings for Proposal-free Panoptic Segmentation by Kerola, Tommi; Li, Jie; Kanehira, Atsushi; Kudo, Yasunori; Vallet, Alexis; Gaidon, Adrien
     “…Panoptic segmentation brings together two separate tasks: instance and semantic segmentation. Although they are related, unifying them faces an apparent…”
     Conference Proceeding

  11. True-negative label selection for large-scale multi-label learning by Kanehira, Atsushi; Shin, Andrew; Harada, Tatsuya
     “…In this paper, we focus on training a classifier from large-scale data with incompletely assigned labels. In other words, we treat samples with the following…”
     Conference Proceeding

  12. GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 26-09-2024
     “…We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation…”
     Journal Article

  13. Open-Vocabulary Action Localization with Iterative Visual Prompting by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 30-08-2024
     “…Video action localization aims to find the timings of specific actions from a long video. Although existing learning-based approaches have been successful,…”
     Journal Article

  14. Interactive Task Encoding System for Learning-from-Observation by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     “…We present the Interactive Task Encoding System (ITES) for teaching robots to perform manipulative tasks. ITES is designed as an input system for the…”
     Conference Proceeding

  15. Bias in Emotion Recognition with ChatGPT by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 18-10-2023
     “…This technical report explores the ability of ChatGPT in recognizing emotions from text, which can be the basis of various applications like interactive…”
     Journal Article

  16. Learning to Explain with Complemental Examples by Kanehira, Atsushi; Harada, Tatsuya
     Published 04-12-2018
     “…This paper addresses the generation of explanations with visual examples. Given an input sample, we build a system that not only classifies it to a specific…”
     Journal Article

  17. ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 30-08-2023
     “…This paper demonstrates how OpenAI's ChatGPT can be used in a few-shot setting to convert natural language instructions into a sequence of executable robot…”
     Journal Article

  18. GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 10-05-2023
     “…This technical paper introduces a chatting robot system that utilizes recent advancements in large-scale language models (LLMs) such as GPT-3 and ChatGPT. The…”
     Journal Article

  19. Interactive Task Encoding System for Learning-from-Observation by Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
     Published 28-04-2023
     “…We present the Interactive Task Encoding System (ITES) for teaching robots to perform manipulative tasks. ITES is designed as an input system for the…”
     Journal Article

  20. Learning-from-Observation System Considering Hardware-Level Reusability by Takamatsu, Jun; Sasabuchi, Kazuhiro; Wake, Naoki; Kanehira, Atsushi; Ikeuchi, Katsushi
     Published 18-12-2022
     “…Robot developers develop various types of robots for satisfying users' various demands. Users' demands are related to their backgrounds and robots suitable for…”
     Journal Article