Articles with open access mandates - Taiji Suzuki
Not publicly available: 2 articles
Bayesian optimization design for finding a maximum tolerated dose combination in phase I clinical trials
A Takahashi, T Suzuki
The International Journal of Biostatistics 18 (1), 39-56, 2022
Mandates: Japan Science and Technology Agency
Data-Parallel Momentum Diagonal Empirical Fisher (DP-MDEF): Adaptive Gradient Method is Affected by Hessian Approximation and Multi-Class Data
C Xu, K Haruki, T Suzuki, M Ozawa, K Uematsu, R Sakai
2022 21st IEEE International Conference on Machine Learning and Applications …, 2022
Mandates: Japan Science and Technology Agency
Publicly available: 30 articles
High-dimensional asymptotics of feature learning: How one gradient step improves the representation
J Ba, MA Erdogdu, T Suzuki, Z Wang, D Wu, G Yang
Advances in Neural Information Processing Systems 35, 37932-37946, 2022
Mandates: US National Science Foundation, Natural Sciences and Engineering Research …
Diffusion models are minimax optimal distribution estimators
K Oko, S Akiyama, T Suzuki
International Conference on Machine Learning, 26517-26582, 2023
Mandates: Japan Science and Technology Agency
Generalization of two-layer neural networks: An asymptotic viewpoint
J Ba, M Erdogdu, T Suzuki, D Wu, T Zhang
International Conference on Learning Representations, 2020
Mandates: Natural Sciences and Engineering Research Council of Canada
Convex analysis of the mean field Langevin dynamics
A Nitanda, D Wu, T Suzuki
International Conference on Artificial Intelligence and Statistics, 9741-9757, 2022
Mandates: Natural Sciences and Engineering Research Council of Canada, Japan Science and Technology Agency
Understanding generalization in deep learning via tensor methods
J Li, Y Sun, J Su, T Suzuki, F Huang
International Conference on Artificial Intelligence and Statistics, 504-515, 2020
Mandates: US National Science Foundation, US Department of Defense
Understanding the variance collapse of SVGD in high dimensions
J Ba, MA Erdogdu, M Ghassemi, S Sun, T Suzuki, D Wu, T Zhang
International Conference on Learning Representations, 2021
Mandates: Natural Sciences and Engineering Research Council of Canada
Learning in the presence of low-dimensional structure: a spiked random matrix perspective
J Ba, MA Erdogdu, T Suzuki, Z Wang, D Wu
Advances in Neural Information Processing Systems 36, 17420-17449, 2023
Mandates: US National Science Foundation, Natural Sciences and Engineering Research …
A Scaling Law for Syn2real Transfer: How Much Is Your Pre-training Effective?
H Mikami, K Fukumizu, S Murai, S Suzuki, Y Kikuchi, T Suzuki, S Maeda, ...
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2022
Mandates: Japan Science and Technology Agency
Particle dual averaging: Optimization of mean field neural network with global convergence rate analysis
A Nitanda, D Wu, T Suzuki
Advances in Neural Information Processing Systems 34, 19608-19621, 2021
Mandates: Natural Sciences and Engineering Research Council of Canada
Gradient-based feature learning under structured data
A Mousavi-Hosseini, D Wu, T Suzuki, MA Erdogdu
Advances in Neural Information Processing Systems 36, 71449-71485, 2023
Mandates: Natural Sciences and Engineering Research Council of Canada, Japan Science and Technology Agency
Improved convergence rate of stochastic gradient Langevin dynamics with variance reduction and its application to optimization
Y Kinoshita, T Suzuki
Advances in Neural Information Processing Systems 35, 19022-19034, 2022
Mandates: Japan Science and Technology Agency
Uniform-in-time propagation of chaos for the mean-field gradient Langevin dynamics
T Suzuki, A Nitanda, D Wu
The Eleventh International Conference on Learning Representations, 2023
Mandates: Japan Science and Technology Agency
Approximation and estimation ability of transformers for sequence-to-sequence functions with infinite dimensional input
S Takakura, T Suzuki
International Conference on Machine Learning, 33416-33447, 2023
Mandates: Japan Science and Technology Agency
Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond
T Suzuki, D Wu, K Oko, A Nitanda
Advances in Neural Information Processing Systems 36, 2024
Mandates: Japan Science and Technology Agency
Particle dual averaging: optimization of mean field neural network with global convergence rate analysis
A Nitanda, D Wu, T Suzuki
Journal of Statistical Mechanics: Theory and Experiment 2022 (11), 114010, 2022
Mandates: Japan Science and Technology Agency
Dimension-free convergence rates for gradient Langevin dynamics in RKHS
B Muzellec, K Sato, M Massias, T Suzuki
Conference on Learning Theory, 1356-1420, 2022
Mandates: Japan Science and Technology Agency
Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction
T Suzuki, D Wu, A Nitanda
Advances in Neural Information Processing Systems 36, 2024
Mandates: Japan Science and Technology Agency
DIFF2: Differential private optimization via gradient differences for nonconvex distributed learning
T Murata, T Suzuki
International Conference on Machine Learning, 25523-25548, 2023
Mandates: Japan Science and Technology Agency
Publication and funding information is determined automatically by a computer program.