Dongchan Min
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Meta-stylespeech: Multi-speaker adaptive text-to-speech generation
D Min, DB Lee, E Yang, SJ Hwang
International Conference on Machine Learning, 7748-7759, 2021
178 · 2021
Meta-gmvae: Mixture of gaussian vae for unsupervised meta-learning
DB Lee, D Min, S Lee, SJ Hwang
International Conference on Learning Representations, 2020
57 · 2020
Grad-stylespeech: Any-speaker adaptive text-to-speech synthesis with diffusion models
M Kang, D Min, SJ Hwang
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
46* · 2023
Styletalker: One-shot style-based audio-driven talking head video generation
D Min, M Song, E Ko, SJ Hwang
arXiv preprint arXiv:2208.10922, 2022
13 · 2022
StyleLipSync: Style-based personalized lip-sync video generation
T Ki, D Min
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
10 · 2023
Learning to generate conditional tri-plane for 3d-aware expression controllable portrait animation
T Ki, D Min, G Chae
European Conference on Computer Vision, 476-493, 2024
3 · 2024
Distortion-aware network pruning and feature reuse for real-time video segmentation
H Rhee, D Min, S Hwang, B Andreis, SJ Hwang
arXiv preprint arXiv:2206.09604, 2022
2 · 2022
Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization
D Kim, E Ko, H Kim, Y Kim, J Kim, D Min, J Kim, SJ Hwang
arXiv preprint arXiv:2305.19135, 2023
1 · 2023
FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait
T Ki, D Min, G Chae
arXiv preprint arXiv:2412.01064, 2024
2024
Meta-StyleSpeech
DC Min
KAIST, 2022
2022
Supplementary Materials for Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation
T Ki, D Min, G Chae
StyleLipSync: Style-based Personalized Lip-sync Video Generation Supplementary Material
T Ki, D Min
Articles 1–12