Research
My research focuses mainly on using computer vision techniques for robotics and autonomous agents. In particular, I am interested in assistive technologies that improve the lives of disabled individuals.
Uncertainty-Guided Never-Ending Learning to Drive
Lei Lai,
Eshed Ohn-Bar,
Sanjay Arora,
John Seon Keun Yi
CVPR, 2024
project page /
code /
paper
Continuously learning to drive from an infinite flow of YouTube driving videos.
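The gist of uncertainty-guided selection can be sketched as below; this is a toy illustration of the general idea only, not the paper's method or released code, and the names (predict_with_uncertainty, replay_buffer) and threshold are made up.

```python
import numpy as np

# Toy sketch of uncertainty-guided selection from a driving video stream.
# The model, threshold, and names below are illustrative, not the paper's code.

rng = np.random.default_rng(0)

def predict_with_uncertainty(frame, n_passes=8):
    """Stand-in for a driving policy queried with stochastic forward passes
    (e.g. MC dropout); returns a mean prediction and its variance."""
    preds = rng.normal(loc=frame.mean(), scale=0.1, size=n_passes)
    return preds.mean(), preds.var()

replay_buffer = []              # uncertain frames kept for continual training
UNCERTAINTY_THRESHOLD = 0.01

for _ in range(1000):           # stands in for a never-ending stream
    frame = rng.random((64, 64))
    _, uncertainty = predict_with_uncertainty(frame)
    if uncertainty > UNCERTAINTY_THRESHOLD:
        replay_buffer.append(frame)
        # ...periodically fine-tune the policy on the replay buffer...

print(f"kept {len(replay_buffer)} of 1000 frames for further training")
```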
A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond
Chaoning Zhang,
Chenshuang Zhang,
Junha Song,
John Seon Keun Yi,
In So Kweon
IJCAI, 2023
arXiv
A comprehensive survey on Masked Autoencoders (MAE) in vision and other fields.
PT4AL: Using Self-Supervised Pretext Tasks for Active Learning
John Seon Keun Yi*,
Minseok Seo*,
Jongchan Park,
Dong-Geol Choi
ECCV, 2022
project page /
video /
code /
arXiv
We use simple self-supervised pretext tasks and a loss-based sampler to sample both representative and difficult data from an unlabeled pool.
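A minimal sketch of the sampling idea, assuming per-sample losses from a pretext task (e.g. rotation prediction) are already computed; this simplifies the actual method and is not the released code.

```python
import numpy as np

# Illustrative sketch of pretext-loss-based sampling.
# `pretext_losses` stands in for per-sample losses of a self-supervised task
# computed over the unlabeled pool.

rng = np.random.default_rng(0)
pool_size, budget, num_batches = 10_000, 1_000, 10
pretext_losses = rng.gamma(shape=2.0, scale=1.0, size=pool_size)

# Sort the unlabeled pool by pretext loss and split it into batches,
# so each batch covers a different difficulty range of the data.
order = np.argsort(pretext_losses)
batches = np.array_split(order, num_batches)

# Take an equal share from every batch: the selection mixes easy
# (representative) and hard (difficult) samples for labeling.
per_batch = budget // num_batches
selected = np.concatenate([b[:per_batch] for b in batches])
print(selected.shape)  # (1000,) indices to send to the annotator
```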
Incremental Object Grounding Using Scene Graphs
John Seon Keun Yi*,
Yoonwoo Kim*,
Sonia Chernova
arXiv, 2021
arXiv
We use semantic scene graphs to disambiguate referring expressions in an interactive object grounding scenario. This is especially useful in scenes containing multiple identical objects.
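A small illustration of the disambiguation idea follows; the graph format, query interface, and example scene are hypothetical, not the paper's representation.

```python
# Minimal illustration of using scene-graph relations to disambiguate
# between identical objects (e.g. "the cup next to the laptop").

scene_graph = {
    "nodes": {0: "cup", 1: "cup", 2: "laptop", 3: "table"},
    "edges": [(0, "next_to", 2), (1, "on", 3), (2, "on", 3)],
}

def ground(target_label, relation=None, anchor_label=None):
    """Return node ids matching the target label, optionally filtered by
    a relation to an anchor object."""
    nodes = scene_graph["nodes"]
    candidates = [n for n, lbl in nodes.items() if lbl == target_label]
    if relation and anchor_label:
        anchors = {n for n, lbl in nodes.items() if lbl == anchor_label}
        candidates = [n for n in candidates
                      if any(s == n and r == relation and o in anchors
                             for s, r, o in scene_graph["edges"])]
    return candidates

print(ground("cup"))                          # ambiguous: [0, 1]
print(ground("cup", "next_to", "laptop"))     # disambiguated: [0]
```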
Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting
Jingdao Chen,
John Seon Keun Yi,
Mark Kahoush,
Erin S. Cho,
Yong Kwon Cho
Sensors, 2020
paper
We use 2D inpainting methods to complete occlusions and imperfections in 3D building point cloud scans.
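The general pipeline can be sketched as follows, under simplifying assumptions: project points onto a facade-aligned depth image, inpaint the holes in 2D, and back-project the filled pixels to 3D. OpenCV's classical inpainting stands in for the GAN inpainter, and the geometry and resolution are made up.

```python
import numpy as np
import cv2  # classical inpainting as a stand-in for the GAN inpainter

# Toy facade: a planar scan at 5 m depth with an occluded rectangular region.
H, W = 128, 128
depth = np.full((H, W), 5.0, dtype=np.float32)
depth[40:80, 50:90] = np.nan                  # occlusion / missing data

hole_mask = np.isnan(depth).astype(np.uint8)  # nonzero where data is missing

# Convert the depth image to 8-bit and inpaint the holes in 2D.
depth_u8 = cv2.normalize(np.nan_to_num(depth, nan=0.0), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
filled_u8 = cv2.inpaint(depth_u8, hole_mask, inpaintRadius=3,
                        flags=cv2.INPAINT_TELEA)

# Back-project only the newly filled pixels into 3D points (orthographic here).
ys, xs = np.nonzero(hole_mask)
new_depths = filled_u8[ys, xs].astype(np.float32) / 255.0 * float(np.nanmax(depth))
new_points = np.stack([xs, ys, new_depths], axis=1)
print(new_points.shape)  # completed points to merge back into the scan
```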
Force Interaction Model for Robot Guide Dog
PI:
Sehoon Ha,
Bruce Walker
An HRI model for a guide dog robot that distinguishes between different force commands applied through the attached harness and responds by adjusting its movement.
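As a rough illustration only, a harness force reading could be mapped to a coarse movement command as in the snippet below; the thresholds and command set are hypothetical, not the project's actual model.

```python
import numpy as np

def force_to_command(force_xy, threshold=5.0):
    """Map a planar harness force (N) to a coarse movement command."""
    fx, fy = force_xy
    if np.hypot(fx, fy) < threshold:
        return "hold"                       # negligible pull: keep position
    if abs(fx) > abs(fy):
        return "turn_left" if fx < 0 else "turn_right"
    return "forward" if fy > 0 else "slow_down"

print(force_to_command((-8.0, 2.0)))        # turn_left
```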
Learning Generalizable Representations by Combining Pretext Tasks
CS8803 LS: Machine Learning with Limited Supervision - Class Project
video
We learn domain-agnostic, generalizable representations that yield good performance on multiple downstream tasks by leveraging multiple self-supervised pretext tasks. We demonstrate one of the proposed approaches, which uses an ensemble of pretext tasks to make final predictions on the downstream tasks.
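The ensemble step can be sketched as below; the per-task "models" are random stand-ins for networks fine-tuned from different pretext-task encoders, and all names are illustrative rather than the project's code.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, num_samples = 10, 5

def predict(task_id, x):
    """Stand-in for a classifier fine-tuned from one pretext-task encoder."""
    logits = rng.normal(size=(x.shape[0], num_classes)) + task_id * 0.01
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)       # softmax probabilities

x = rng.random((num_samples, 32))                      # dummy inputs
pretext_tasks = ["rotation", "jigsaw", "colorization"]

# Average the per-task probabilities to form the ensemble prediction.
probs = np.mean([predict(i, x) for i, _ in enumerate(pretext_tasks)], axis=0)
print(probs.argmax(axis=1))                            # ensemble labels
```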