AdaShift: Learning Discriminative Self-Gated Neural Feature Activation With an Adaptive Shift Factor
S Cai - Proceedings of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Nonlinearities are decisive in neural representation learning. Traditional Activation (Act)
functions impose fixed inductive biases on neural networks with oriented biological …
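The snippet describes a self-gated activation (in the Swish family) whose gate is modulated by an adaptive shift. Below is a minimal sketch, assuming the shift is a learnable per-channel parameter added inside the sigmoid gate; the paper's exact gating form and how the shift is computed may differ.

```python
import torch
import torch.nn as nn

class AdaptiveShiftGate(nn.Module):
    """Self-gated activation with a learnable shift, in the spirit of
    AdaShift. Illustrative sketch only; not the paper's exact formulation."""

    def __init__(self, num_channels: int):
        super().__init__()
        # One learnable shift per channel (an assumption for illustration).
        self.shift = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W). Swish-style self-gating with a shifted gate:
        # y = x * sigmoid(x + shift).
        shift = self.shift.view(1, -1, 1, 1)
        return x * torch.sigmoid(x + shift)
```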
Learning specialized activation functions for physics-informed neural networks
Physics-informed neural networks (PINNs) are known to suffer from optimization difficulty. In
this work, we reveal the connection between the optimization difficulty of PINNs and …
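The idea of learning specialized activations for a PINN can be sketched as a trainable blend of candidate functions. The mixture parameterization below is a hypothetical illustration of that idea, not necessarily the paper's construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedActivation(nn.Module):
    """Trainable blend of candidate activations for a PINN layer.
    Hypothetical parameterization; the paper's specialized
    activations may be defined differently."""

    def __init__(self):
        super().__init__()
        # Mixing weights over {tanh, sin, softplus}; softmax keeps them normalized.
        self.logits = nn.Parameter(torch.zeros(3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.logits, dim=0)
        return w[0] * torch.tanh(x) + w[1] * torch.sin(x) + w[2] * F.softplus(x)
```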
Learning continuous piecewise non-linear activation functions for deep neural networks
Activation functions provide the non-linearity in deep neural networks, which is crucial for
optimization and performance. In this paper, we propose a learnable …
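A learnable continuous piecewise function can be built by summing ReLU hinges over fixed breakpoints with learnable segment slopes, which guarantees continuity by construction. The sketch below uses that simple recipe; the paper learns piecewise *non-linear* functions and may parameterize them differently.

```python
import torch
import torch.nn as nn

class PiecewiseLinearActivation(nn.Module):
    """Continuous piecewise-linear activation with learnable slopes over
    fixed breakpoints. Illustrative only."""

    def __init__(self, breakpoints=(-1.0, 0.0, 1.0)):
        super().__init__()
        self.register_buffer("knots", torch.tensor(breakpoints))
        # One slope per segment, including the two outer half-lines.
        self.slopes = nn.Parameter(torch.ones(len(breakpoints) + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # f(x) = s0*x + sum_k (s_{k+1} - s_k) * relu(x - knot_k),
        # which is continuous and has slope s_k on segment k.
        y = self.slopes[0] * x
        for k, knot in enumerate(self.knots):
            y = y + (self.slopes[k + 1] - self.slopes[k]) * torch.relu(x - knot)
        return y
```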
Collaboration of Experts: Achieving 80% top-1 accuracy on ImageNet with 100M FLOPs
In this paper, we propose a Collaboration of Experts (CoE) framework to pool together the
expertise of multiple networks towards a common aim. Each expert is an individual network …
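Pooling multiple expert networks typically requires a router that dispatches each input to one expert. The toy sketch below shows hard per-sample routing; the actual CoE framework adds mechanisms (e.g., expert selection and weight sharing) that this minimal version omits.

```python
import torch
import torch.nn as nn

class TinyCoE(nn.Module):
    """Toy collaboration-of-experts: a router picks one expert per sample.
    Sketch only; not the paper's full framework."""

    def __init__(self, in_dim=64, num_classes=10, num_experts=3):
        super().__init__()
        self.router = nn.Linear(in_dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route each sample to its highest-scoring expert (hard routing).
        choice = self.router(x).argmax(dim=-1)  # (N,)
        return torch.stack([self.experts[int(c)](xi) for xi, c in zip(x, choice)])
```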
PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond
We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference.
Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs …
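The core "parametric piecewise linear" primitive can be illustrated as a learnable piecewise-linear function evaluated at a timestamp. The sketch below fixes evenly spaced knots and learns the knot values; PPLNs condition these parameters on the input, which this minimal version omits.

```python
import torch
import torch.nn as nn

class PiecewiseLinearInTime(nn.Module):
    """Evaluate a learnable piecewise-linear function of a timestamp t in [0, 1].
    A sketch of the parametric piecewise-linear idea, not the PPLN architecture."""

    def __init__(self, num_pieces: int = 4):
        super().__init__()
        # Values at evenly spaced knots; linear interpolation between them.
        self.values = nn.Parameter(torch.randn(num_pieces + 1))
        self.num_pieces = num_pieces

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: tensor of timestamps in [0, 1].
        s = t.clamp(0, 1) * self.num_pieces
        idx = s.floor().long().clamp(max=self.num_pieces - 1)
        frac = s - idx.float()
        return (1 - frac) * self.values[idx] + frac * self.values[idx + 1]
```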