Meta-StyleSpeech: Multi-speaker adaptive text-to-speech generation. D Min, DB Lee, E Yang, SJ Hwang. International Conference on Machine Learning, pp. 7748-7759, 2021. (Cited by 178)
Meta-GMVAE: Mixture of Gaussian VAE for unsupervised meta-learning. DB Lee, D Min, S Lee, SJ Hwang. International Conference on Learning Representations, 2020. (Cited by 57)
Grad-StyleSpeech: Any-speaker adaptive text-to-speech synthesis with diffusion models. M Kang, D Min, SJ Hwang. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023. (Cited by 46*)
StyleTalker: One-shot style-based audio-driven talking head video generation. D Min, M Song, E Ko, SJ Hwang. arXiv preprint arXiv:2208.10922, 2022. (Cited by 13)
StyleLipSync: Style-based personalized lip-sync video generation. T Ki, D Min. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023. (Cited by 10)
Learning to generate conditional tri-plane for 3D-aware expression controllable portrait animation. T Ki, D Min, G Chae. European Conference on Computer Vision, pp. 476-493, 2024. (Cited by 3)
Distortion-aware network pruning and feature reuse for real-time video segmentation. H Rhee, D Min, S Hwang, B Andreis, SJ Hwang. arXiv preprint arXiv:2206.09604, 2022. (Cited by 2)
Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization. D Kim, E Ko, H Kim, Y Kim, J Kim, D Min, J Kim, SJ Hwang. arXiv preprint arXiv:2305.19135, 2023. (Cited by 1)
FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. T Ki, D Min, G Chae. arXiv preprint arXiv:2412.01064, 2024.
Meta-StyleSpeech. DC Min. Korea Advanced Institute of Science and Technology (KAIST), 2022.
Supplementary Materials for "Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation". T Ki, D Min, G Chae.
StyleLipSync: Style-based Personalized Lip-sync Video Generation (Supplementary Material). T Ki, D Min.