AdaShift: Learning Discriminative Self-Gated Neural Feature Activation With an Adaptive Shift Factor

S Cai - Proceedings of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Nonlinearities are decisive in neural representation learning. Traditional Activation (Act)
functions impose fixed inductive biases on neural networks with oriented biological …
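A minimal sketch of the general idea named in the title — a self-gated activation (Swish-style, `x * sigmoid(x)`) with an added shift factor that moves the gate's threshold. The function name and the scalar `shift` parameter are illustrative assumptions, not the paper's actual formulation:

```python
import math

def ada_shift(x, shift=0.0):
    # Self-gated activation: the input gates itself through a sigmoid.
    # The (normally learned) shift factor moves the gate's threshold;
    # `shift` as a plain scalar is a hypothetical simplification.
    return x / (1.0 + math.exp(-(x + shift)))
```

With `shift = 0` this reduces to the standard Swish/SiLU activation; a positive shift opens the gate earlier, letting more of the negative input range pass through.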

Learning specialized activation functions for physics-informed neural networks

H Wang, L Lu, S Song, G Huang - arXiv preprint arXiv:2308.04073, 2023 - arxiv.org
Physics-informed neural networks (PINNs) are known to suffer from optimization difficulty. In
this work, we reveal the connection between the optimization difficulty of PINNs and …

Learning continuous piecewise non-linear activation functions for deep neural networks

X Gao, Y Li, W Li, L Duan, L Van Gool… - … on Multimedia and …, 2023 - ieeexplore.ieee.org
Activation functions provide the non-linearity in deep neural networks, which is crucial for
optimization and performance. In this paper, we propose a learnable …
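One standard way to build a continuous piecewise-linear function with learnable shape — a sum of ReLU hinges — can be sketched as follows. The parameter names (`bias`, `slopes`, `knots`) are illustrative, not the paper's notation:

```python
def pwl(x, bias, slopes, knots):
    # Continuous piecewise-linear function built from ReLU hinges:
    #   f(x) = bias + sum_k slopes[k] * max(0, x - knots[k])
    # Continuity holds by construction at every knot; in a network,
    # bias/slopes/knots would be trained jointly with the other weights.
    return bias + sum(a * max(0.0, x - t) for a, t in zip(slopes, knots))
```

With a single knot at 0 and unit slope this is exactly ReLU; adding knots lets gradient descent bend the activation at learned breakpoints.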

Collaboration of experts: Achieving 80% top-1 accuracy on ImageNet with 100M FLOPs

Y Zhang, Z Chen, Z Zhong - arXiv preprint arXiv:2107.03815, 2021 - arxiv.org
In this paper, we propose a Collaboration of Experts (CoE) framework to pool together the
expertise of multiple networks towards a common aim. Each expert is an individual network …
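The core mechanism such frameworks rely on — run only the one expert a router selects per input, so total inference cost stays near a single network's FLOPs — can be sketched generically. The `router` and `experts` below are illustrative stand-ins, not CoE's actual modules:

```python
def route(x, experts, router):
    # Conditional computation: the router picks one expert index for this
    # input, and only that expert network runs; the others cost nothing.
    return experts[router(x)](x)

# Toy usage: two "experts" and a threshold router (purely illustrative).
experts = [lambda x: x + 1, lambda x: 2 * x]
router = lambda x: 0 if x < 0 else 1
```

The FLOPs budget is governed by the selected expert alone, which is how multiple networks' combined expertise can be pooled without paying for all of them at inference time.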

PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond

C Song, Z Liang, B Sun, Q Huang - arXiv preprint arXiv:2409.19772, 2024 - arxiv.org
We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference.
Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs …