Kernel mean embedding of distributions: A review and beyond
A Hilbert space embedding of a distribution—in short, a kernel mean embedding—has
recently emerged as a powerful tool for machine learning and statistical inference. The basic …
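As a rough illustration of the embedding idea behind this entry (a minimal sketch, not code from the survey, assuming an RBF kernel and the biased estimator), the following computes empirical kernel mean embeddings of two samples through their Gram matrices and the resulting squared maximum mean discrepancy:

import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    # Gram matrix k(x, y) = exp(-gamma * ||x - y||^2) over all pairs.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def squared_mmd(X, Y, gamma=1.0):
    # Squared distance between the empirical mean embeddings of X and Y:
    # ||mu_X - mu_Y||^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')].
    return (rbf_gram(X, X, gamma).mean()
            - 2 * rbf_gram(X, Y, gamma).mean()
            + rbf_gram(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(squared_mmd(X, Y))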
Supervised classification and mathematical optimization
Data mining techniques often require solving optimization problems. Supervised
classification, and, in particular, support vector machines, can be seen as a paradigmatic …
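For concreteness, the optimization problem usually meant here is the soft-margin SVM primal (standard textbook formulation, not quoted from this entry):

\min_{w, b, \xi}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i\bigl(w^\top x_i + b\bigr) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i = 1, \dots, n.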
Handling missing data with graph representation learning
Machine learning with missing data has been approached in many different ways,
including feature imputation where missing feature values are estimated based on observed …
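As a generic example of feature imputation in the sense described above (plain column-mean imputation, not the graph-based method this entry proposes):

import numpy as np

def mean_impute(X):
    # Replace NaNs in each column with that column's observed mean.
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])
    return X

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 5.0]])
print(mean_impute(X))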
[CITATION] Robust Optimization
A Ben-Tal - Princeton University Press, 2009 - books.google.com
Robust optimization is still a relatively new approach to optimization problems affected by
uncertainty, but it has already proved so useful in real applications that it is difficult to tackle …
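A standard textbook instance of the robust counterpart construction covered by the book, assuming an ellipsoidal uncertainty set for the coefficient vector of a single linear constraint:

a^\top x \le b \ \ \forall\, a \in \{\bar a + P\zeta : \|\zeta\|_2 \le 1\}
\quad\Longleftrightarrow\quad
\bar a^\top x + \|P^\top x\|_2 \le b.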
Theory and applications of robust optimization
In this paper we survey the primary research, both theoretical and applied, in the area of
robust optimization (RO). Our focus is on the computational attractiveness of RO …
Regularization via mass transportation
The goal of regression and classification methods in supervised learning is to minimize the
empirical risk, that is, the expectation of some loss function quantifying the prediction error …
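In the language of distributionally robust learning, the contrast drawn here is roughly between the empirical risk and its worst-case counterpart over a Wasserstein ball around the empirical distribution (a generic formulation; the paper's exact setup may differ in its details):

\min_{f}\ \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr)
\qquad\text{vs.}\qquad
\min_{f}\ \sup_{Q:\, W(Q,\, \widehat P_n) \le \rho}\ \mathbb{E}_{(x, y) \sim Q}\bigl[\ell(f(x), y)\bigr],

where \widehat P_n is the empirical distribution, W a Wasserstein distance, and \rho the radius of the ambiguity set.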
Robustness and generalization
We derive generalization bounds for learning algorithms based on their robustness: the
property that if a testing sample is “similar” to a training sample, then the testing error is close …
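Informally, the robustness property referred to can be stated as follows (a paraphrase, not the paper's exact definition): if the sample space is partitioned into cells C_1, \dots, C_K, then for any training sample s and test point z falling in the same cell,

\bigl|\ell(\mathcal{A}_S, s) - \ell(\mathcal{A}_S, z)\bigr| \le \varepsilon(S),

where \mathcal{A}_S is the hypothesis learned from the training set S and \varepsilon(S) bounds the loss discrepancy within a cell.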
Detection of duplicate defect reports using natural language processing
P Runeson, M Alexandersson… - … Conference on Software …, 2007 - ieeexplore.ieee.org
Defect reports are generated from various testing and development activities in software
engineering. Sometimes two reports are submitted that describe the same problem, leading …
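A minimal sketch of the general vector-space approach to duplicate detection (TF-IDF weighting and cosine similarity via scikit-learn; not necessarily the exact pipeline used in the paper):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "App crashes when saving a file with unicode characters in the name",
    "Crash on save if filename contains non-ASCII characters",
    "Login button unresponsive on the settings page",
]

# Vector space model: TF-IDF weights over word tokens,
# candidate duplicates ranked by pairwise cosine similarity.
tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(reports)
print(cosine_similarity(vectors).round(2))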
[PDF] Robustness and Regularization of Support Vector Machines
We consider regularized support vector machines (SVMs) and show that they are precisely
equivalent to a new robust optimization formulation. We show that this equivalence of robust …
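The claimed equivalence has roughly the following shape, with \|\cdot\|_{*} the dual norm and c the total perturbation budget (a paraphrase of the standard statement, not a quotation from the paper):

\min_{w, b}\ \max_{\sum_i \|\delta_i\| \le c}\ \sum_{i=1}^{n} \max\bigl(1 - y_i(\langle w, x_i - \delta_i\rangle + b),\, 0\bigr)
\;=\;
\min_{w, b}\ c\,\|w\|_{*} + \sum_{i=1}^{n} \max\bigl(1 - y_i(\langle w, x_i\rangle + b),\, 0\bigr),

so that the robust formulation recovers the usual norm-regularized hinge-loss objective.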
Support vector machine classifier with pinball loss
Traditionally, the hinge loss is used to construct support vector machine (SVM) classifiers.
The hinge loss is related to the shortest distance between sets and the corresponding …
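For reference, the two losses mentioned here, written for the margin variable u = 1 - y f(x) and a pinball parameter \tau \ge 0 (standard definitions):

L_{\mathrm{hinge}}(u) = \max(u, 0),
\qquad
L_{\tau}(u) =
\begin{cases}
u, & u \ge 0,\\
-\tau u, & u < 0,
\end{cases}

so the pinball loss also penalizes points classified correctly with large margin, which is what makes it sensitive to quantile distances rather than the shortest distance between the two classes.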