Decision-focused learning: Foundations, state of the art, benchmark and future opportunities
Decision-focused learning (DFL) is an emerging paradigm that integrates machine learning
(ML) and constrained optimization to enhance decision quality by training ML models in an …
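As an illustration of the paradigm only (not the survey's benchmark code), a minimal decision-focused learning loop in PyTorch might look as follows: a linear model predicts item costs, the downstream decision is a cost-minimizing allocation over a simplex smoothed with a softmax, and training minimizes the cost incurred under the true costs rather than a prediction loss. All variable names and the toy data are assumptions.

    # Minimal decision-focused learning sketch (illustrative, not the survey's benchmark).
    # A linear model predicts item costs; the "decision" is a softmax-smoothed allocation
    # over a simplex; training minimizes the cost realized under the true costs.
    import torch

    torch.manual_seed(0)
    n_items, n_feats = 5, 3
    X = torch.randn(200, n_items, n_feats)                 # toy features
    true_w = torch.randn(n_feats)
    C = X @ true_w + 0.1 * torch.randn(200, n_items)       # toy true costs

    model = torch.nn.Linear(n_feats, 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    tau = 0.1                                              # smoothing temperature

    for epoch in range(100):
        c_hat = model(X).squeeze(-1)                       # predicted costs
        z = torch.softmax(-c_hat / tau, dim=-1)            # smoothed argmin over the simplex
        decision_loss = (C * z).sum(-1).mean()             # cost actually incurred
        opt.zero_grad(); decision_loss.backward(); opt.step()

    print("final decision loss:", decision_loss.item())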
A review of the Gumbel-max trick and its extensions for discrete stochasticity in machine learning
IAM Huijben, W Kool, MB Paulus… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
The Gumbel-max trick is a method to draw a sample from a categorical distribution, given by
its unnormalized (log-) probabilities. Over the past years, the machine learning community …
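For reference, the trick itself is a few lines of NumPy: add independent Gumbel(0, 1) noise to the unnormalized log-probabilities and take the argmax; the winner is distributed according to the corresponding categorical distribution. The sanity check below (names are mine) compares empirical frequencies against the normalized probabilities.

    # Gumbel-max trick: argmax(log_p + Gumbel noise) samples from Categorical(softmax(log_p)).
    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 0.5, -1.0, 0.0])                    # unnormalized log-probabilities

    def gumbel_max_sample(logits, rng):
        g = -np.log(-np.log(rng.uniform(size=logits.shape)))    # Gumbel(0, 1) noise
        return np.argmax(logits + g)

    samples = np.array([gumbel_max_sample(logits, rng) for _ in range(100_000)])
    freq = np.bincount(samples, minlength=len(logits)) / len(samples)
    probs = np.exp(logits) / np.exp(logits).sum()
    print(np.round(freq, 3), np.round(probs, 3))                # should agree closely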
Ordered subgraph aggregation networks
Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently,
provably boosting the expressive power of standard (message-passing) GNNs. However …
DSelect-k: Differentiable selection in the mixture of experts with applications to multi-task learning
Abstract The Mixture-of-Experts (MoE) architecture is showing promising results in improving
parameter sharing in multi-task learning (MTL) and in scaling high-capacity neural networks …
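The snippet above stops before the selection mechanism; for comparison only, the sketch below is a plain differentiable softmax-gated mixture of experts, not DSelect-k's sparse top-k selection operator, and all module names are assumptions.

    # Generic differentiable mixture-of-experts gate (softmax gating for comparison;
    # this is NOT the DSelect-k operator, which selects a sparse set of k experts).
    import torch

    class SoftMoE(torch.nn.Module):
        def __init__(self, d_in, d_out, n_experts=4):
            super().__init__()
            self.experts = torch.nn.ModuleList(
                [torch.nn.Linear(d_in, d_out) for _ in range(n_experts)])
            self.gate = torch.nn.Linear(d_in, n_experts)

        def forward(self, x):
            w = torch.softmax(self.gate(x), dim=-1)            # (batch, n_experts)
            y = torch.stack([e(x) for e in self.experts], -1)  # (batch, d_out, n_experts)
            return (y * w.unsqueeze(1)).sum(-1)                # gate-weighted combination

    x = torch.randn(8, 16)
    print(SoftMoE(16, 4)(x).shape)                             # torch.Size([8, 4])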
GrooMeD-NMS: Grouped mathematically differentiable NMS for monocular 3D object detection
Modern 3D object detectors have immensely benefited from the end-to-end learning idea.
However, most of them use a post-processing algorithm called Non-Maximal Suppression …
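For context, classical hard NMS is the greedy loop sketched below; GrooMeD-NMS replaces this non-differentiable step with a grouped, mathematically differentiable formulation, which this reference sketch does not reproduce.

    # Classical greedy NMS over axis-aligned 2D boxes (the non-differentiable baseline
    # that GrooMeD-NMS replaces; not the paper's grouped differentiable formulation).
    import numpy as np

    def iou(box, boxes):
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
        return inter / (area(box) + area(boxes) - inter)

    def nms(boxes, scores, iou_thresh=0.5):
        order = np.argsort(-scores)
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            overlaps = iou(boxes[i], boxes[order[1:]])
            order = order[1:][overlaps <= iou_thresh]   # drop boxes that overlap too much
        return keep

    boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms(boxes, scores))                           # [0, 2]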
Deep declarative networks
We explore a class of end-to-end learnable models wherein data processing nodes (or
network layers) are defined in terms of desired behavior rather than an explicit forward …
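One concrete reading of "defined in terms of desired behavior" is a node whose output is the minimizer of an objective, differentiated with the implicit function theorem instead of unrolling the solver. The toy node below is my own construction, not code from the paper: it solves a one-dimensional convex problem per element with Newton's method and backpropagates through the stationarity condition.

    # A toy "declarative node": y(x) = argmin_u 0.5*(u - x)^2 + lam*u^4, solved with
    # Newton's method in the forward pass and differentiated via the implicit function
    # theorem in the backward pass (dy/dx = 1 / (1 + 12*lam*u^2) at the minimizer).
    import torch

    LAM = 0.1

    class DeclarativeNode(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            u = x.clone()
            for _ in range(20):                          # Newton iterations on stationarity
                g = (u - x) + 4 * LAM * u ** 3           # d f / d u
                h = 1 + 12 * LAM * u ** 2                # d^2 f / d u^2
                u = u - g / h
            ctx.save_for_backward(u)
            return u

        @staticmethod
        def backward(ctx, grad_out):
            (u,) = ctx.saved_tensors
            return grad_out / (1 + 12 * LAM * u ** 2)    # implicit gradient, no unrolling

    x = torch.randn(4, requires_grad=True)
    y = DeclarativeNode.apply(x)
    y.sum().backward()
    print(y, x.grad)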
Unsupervised learning for combinatorial optimization with principled objective relaxation
Using machine learning to solve combinatorial optimization (CO) problems is challenging,
especially when the data is unlabeled. This work proposes an unsupervised learning …
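As a generic illustration of training on a relaxed objective without labels (not the paper's specific relaxation or its guarantees), the sketch below relaxes binary max-cut assignments to probabilities, maximizes the expected cut by gradient descent, and rounds the result.

    # Generic unsupervised relaxation for a CO problem (toy max-cut): relax each binary
    # node assignment to a probability, optimize the expected (relaxed) objective with
    # gradient descent, then round. Not the paper's specific relaxation or guarantees.
    import torch

    torch.manual_seed(0)
    n = 12
    A = (torch.rand(n, n) < 0.3).float()
    A = torch.triu(A, 1); A = A + A.T                      # random undirected graph

    theta = torch.zeros(n, requires_grad=True)             # logits of P(node on side 1)
    opt = torch.optim.Adam([theta], lr=0.1)

    for step in range(300):
        p = torch.sigmoid(theta)
        # expected cut size: each undirected edge contributes p_i*(1-p_j) + p_j*(1-p_i)
        expected_cut = (A * (p[:, None] * (1 - p[None, :]))).sum()
        loss = -expected_cut                               # maximize the relaxed cut
        opt.zero_grad(); loss.backward(); opt.step()

    x = (torch.sigmoid(theta) > 0.5).float()               # round to a feasible cut
    cut = (A * (x[:, None] * (1 - x[None, :]))).sum()
    print("rounded cut size:", cut.item())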
End-to-end learnable EEG channel selection for deep neural networks with Gumbel-softmax
T Strypsteen, A Bertrand - Journal of Neural Engineering, 2021 - iopscience.iop.org
Objective. To develop an efficient, embedded electroencephalogram (EEG) channel
selection approach for deep neural networks, allowing us to match the channel selection to …
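The selection layer can be sketched with PyTorch's built-in gumbel_softmax: each of K selection units learns logits over the input channels and, during training, draws a relaxed or straight-through one-hot weighting that picks out one channel. The class and parameter names below are assumptions, not the authors' code.

    # Sketch of a Gumbel-softmax channel-selection layer: K selection units each learn
    # logits over C input channels and sample a (relaxed) one-hot mixture, so channel
    # choice is trained jointly with the downstream network. Names are illustrative.
    import torch
    import torch.nn.functional as F

    class ChannelSelect(torch.nn.Module):
        def __init__(self, n_channels, n_selected, tau=1.0):
            super().__init__()
            self.logits = torch.nn.Parameter(torch.zeros(n_selected, n_channels))
            self.tau = tau

        def forward(self, x):                   # x: (batch, channels, time)
            if self.training:
                w = F.gumbel_softmax(self.logits, tau=self.tau, hard=True)  # (K, C) one-hot
            else:
                w = F.one_hot(self.logits.argmax(-1), x.shape[1]).float()   # pick top channel
            return torch.einsum("kc,bct->bkt", w, x)                        # (batch, K, time)

    x = torch.randn(2, 64, 128)                 # e.g. 64 EEG channels, 128 samples
    layer = ChannelSelect(64, n_selected=8)
    print(layer(x).shape)                       # torch.Size([2, 8, 128])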
Compact neural graphics primitives with learned hash probing
Neural graphics primitives are faster and achieve higher quality when their neural networks
are augmented by spatial data structures that hold trainable features arranged in a grid …
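As a minimal example of trainable features arranged in a (hashed) grid, the toy encoding below snaps 2D points to grid vertices, hashes each vertex into a small trainable table, and returns the stored feature; it uses a single resolution, no interpolation, and does not reproduce the learned hash probing proposed in the paper.

    # Toy hashed feature grid: 2D points are snapped to grid vertices, the vertex index
    # is hashed into a small trainable table, and the stored feature is returned.
    # Single resolution, nearest vertex only; learned hash probing is not shown.
    import torch

    class HashGridEncoding(torch.nn.Module):
        def __init__(self, table_size=2 ** 14, feat_dim=4, resolution=128):
            super().__init__()
            self.table = torch.nn.Embedding(table_size, feat_dim)
            self.table_size = table_size
            self.resolution = resolution

        def forward(self, xy):                              # xy: (N, 2) in [0, 1]
            v = (xy * self.resolution).long()               # nearest-lower grid vertex
            h = (v[:, 0] + v[:, 1] * 2654435761) % self.table_size   # toy spatial hash
            return self.table(h)                            # (N, feat_dim) trainable features

    xy = torch.rand(1024, 2)
    enc = HashGridEncoding()
    print(enc(xy).shape)                                    # torch.Size([1024, 4])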
Learning with algorithmic supervision via continuous relaxations
The integration of algorithmic components into neural architectures has gained increased
attention recently, as it allows training neural networks with new forms of supervision such …
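A simple instance of such a relaxation is differentiable ranking: replacing the hard pairwise comparisons inside a rank computation with sigmoids yields soft ranks that pass gradients back to the scores. This is a generic illustration, not one of the specific relaxations studied in the paper.

    # Continuous relaxation of an algorithmic component (ranking): hard pairwise
    # comparisons are replaced by sigmoids, giving differentiable "soft ranks".
    import torch

    def soft_rank(x, tau=0.1):
        # rank_i = 1 + #{j : x_j > x_i}; relax the indicator with a sigmoid
        diff = x.unsqueeze(-1) - x.unsqueeze(-2)             # (..., n, n), entries x_i - x_j
        return 1 + torch.sigmoid(-diff / tau).sum(-1) - 0.5  # subtract the i == j term

    x = torch.tensor([0.3, 2.0, -1.0], requires_grad=True)
    r = soft_rank(x)
    print(r)                     # approximately [2., 1., 3.]
    r[0].backward()              # gradients flow back to the scores
    print(x.grad)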