Ian En-Hsu Yen
PhD, Machine Learning Department, Carnegie Mellon University
Verified email at cs.cmu.edu - Homepage
Title · Cited by · Year
Representer point selection for explaining deep neural networks
CK Yeh, J Kim, IEH Yen, PK Ravikumar
Advances in neural information processing systems 31, 2018
292 · 2018
PD-Sparse: A Primal and Dual Sparse Approach to Extreme Multiclass and Multilabel Classification
IEH Yen, X Huang, K Zhong, P Ravikumar, IS Dhillon
International Conference on Machine Learning, 2016
231 · 2016
PPDSparse: A parallel primal-dual sparse method for extreme classification
IEH Yen, X Huang, W Dai, P Ravikumar, I Dhillon, E Xing
Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge …, 2017
167 · 2017
Word mover's embedding: From word2vec to document embedding
L Wu, IEH Yen, K Xu, F Xu, A Balakrishnan, PY Chen, P Ravikumar, ...
arXiv preprint arXiv:1811.01713, 2018
145 · 2018
Sparse Random Features Algorithm as Coordinate Descent in Hilbert Space
IEH Yen, TW Lin, SD Lin, P Ravikumar, IS Dhillon
Advances in Neural Information Processing Systems (NIPS), 2014
71 · 2014
Random warping series: A random features method for time-series embedding
L Wu, IEH Yen, J Yi, F Xu, Q Lei, M Witbrock
International Conference on Artificial Intelligence and Statistics, 793-802, 2018
62 · 2018
Minimizing flops to learn efficient sparse representations
B Paria, CK Yeh, IEH Yen, N Xu, P Ravikumar, B Póczos
arXiv preprint arXiv:2004.05665, 2020
61 · 2020
Rethinking Network Pruning--under the Pre-train and Fine-tune Paradigm
D Xu, IEH Yen, J Zhao, Z Xiao
arXiv preprint arXiv:2104.08682, 2021
58 · 2021
Scalable spectral clustering using random binning features
L Wu, PY Chen, IEH Yen, F Xu, Y Xia, C Aggarwal
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge …, 2018
47 · 2018
Scalable global alignment graph kernel using random features: From node embedding to graph embedding
L Wu, IEH Yen, Z Zhang, K Xu, L Zhao, X Peng, Y Xia, C Aggarwal
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
41 · 2019
On convergence rate of concave-convex procedure
IEH Yen, N Peng, PW Wang, SD Lin
Proceedings of the NIPS 2012 Optimization Workshop, 31-35, 2012
41 · 2012
Sparse linear programming via primal and dual augmented coordinate descent
IEH Yen, K Zhong, CJ Hsieh, PK Ravikumar, IS Dhillon
Advances in Neural Information Processing Systems 28, 2015
40 · 2015
D2KE: From distance to kernel and embedding
L Wu, IEH Yen, F Xu, P Ravikumar, M Witbrock
arXiv preprint arXiv:1802.04956, 2018
37 · 2018
Revisiting Random Binning Feature: Fast Convergence and Strong Parallelizability
L Wu, IEH Yen, J Chen, R Yan
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016
37 · 2016
Optimal tests of treatment effects for the overall population and two subpopulations in randomized trials, using sparse linear programming
M Rosenblum, H Liu, EH Yen
Journal of the American Statistical Association 109 (507), 1216-1228, 2014
35 · 2014
Proximal quasi-Newton for computationally intensive L1-regularized M-estimators
K Zhong, IEH Yen, IS Dhillon, PK Ravikumar
Advances in Neural Information Processing Systems 27, 2014
35 · 2014
Sparse progressive distillation: Resolving overfitting under pretrain-and-finetune paradigm
S Huang, D Xu, IEH Yen, Y Wang, SE Chang, B Li, S Chen, M Xie, ...
arXiv preprint arXiv:2110.08190, 2021
31 · 2021
Loss decomposition for fast learning in large output spaces
IEH Yen, S Kale, F Yu, D Holtmann-Rice, S Kumar, P Ravikumar
International Conference on Machine Learning, 5640-5649, 2018
26 · 2018
Doubly greedy primal-dual coordinate descent for sparse empirical risk minimization
Q Lei, IEH Yen, C Wu, IS Dhillon, P Ravikumar
International Conference on Machine Learning, 2034-2042, 2017
22 · 2017
Constant Nullspace Strong Convexity and Fast Convergence of Proximal Methods under High-Dimensional Settings
IEH Yen, CJ Hsieh, P Ravikumar, I Dhillon
Advances in Neural Information Processing Systems, 2014
22 · 2014
Articles 1–20