A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects

L Pauly, W Rharbaoui, C Shneider, A Rathinam… - Acta Astronautica, 2023 - Elsevier
Estimating the pose of an uncooperative spacecraft is an important computer vision problem
for enabling the deployment of automatic vision-based systems in orbit, with applications …

Multi-task learning with deep neural networks: A survey

M Crawshaw - arXiv preprint arXiv:2009.09796, 2020 - arxiv.org
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are
simultaneously learned by a shared model. Such approaches offer advantages like …
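The shared-model setup this survey describes can be sketched minimally: one backbone computes a representation that several task-specific heads reuse. The layer sizes and the two illustrative tasks below are assumptions made for the sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: one shared layer, two task-specific heads.
W_shared = rng.normal(size=(4, 8))   # shared representation (4 features -> 8 dims)
W_task_a = rng.normal(size=(8, 1))   # head for task A (e.g. regression)
W_task_b = rng.normal(size=(8, 3))   # head for task B (e.g. 3-way classification logits)

def forward(x):
    """Shared features are computed once and reused by every task head."""
    h = np.maximum(0.0, x @ W_shared)    # shared ReLU representation
    return h @ W_task_a, h @ W_task_b    # task-specific outputs

x = rng.normal(size=(5, 4))              # batch of 5 examples
y_a, y_b = forward(x)
print(y_a.shape, y_b.shape)              # → (5, 1) (5, 3)
```

Both heads are trained against the same backbone, which is what creates the transfer (and potential interference) effects the surveys above discuss.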

A survey on negative transfer

W Zhang, L Deng, L Zhang, D Wu - IEEE/CAA Journal of …, 2022 - ieeexplore.ieee.org
Transfer learning (TL) utilizes data or knowledge from one or more source domains to
facilitate learning in a target domain. It is particularly useful when the target domain has very …

Understanding and improving information transfer in multi-task learning

S Wu, HR Zhang, C Ré - arXiv preprint arXiv:2005.00944, 2020 - arxiv.org
We investigate multi-task learning approaches that use a shared feature representation for
all tasks. To better understand the transfer of task information, we study an architecture with …

Deep active learning: Unified and principled method for query and training

C Shui, F Zhou, C Gagné… - … Conference on Artificial …, 2020 - proceedings.mlr.press
In this paper, we propose a unified and principled method for both the querying and
training processes in deep batch active learning. We provide theoretical insights from …

Reasonable effectiveness of random weighting: A litmus test for multi-task learning

B Lin, F Ye, Y Zhang, IW Tsang - arXiv preprint arXiv:2111.10603, 2021 - arxiv.org
Multi-Task Learning (MTL) has achieved success in various fields. However, how to balance
different tasks to achieve good performance is a key problem. To achieve the task balancing …
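The random-weighting baseline this paper studies can be sketched in a few lines: at each training step, per-task losses are combined using freshly sampled normalized weights rather than learned or tuned ones. The loss values and the softmax normalization below are illustrative assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_loss_weights(num_tasks, rng):
    """Draw one set of random task weights (normalized via softmax);
    in the random-weighting baseline these are resampled every step."""
    z = rng.normal(size=num_tasks)
    e = np.exp(z - z.max())
    return e / e.sum()

task_losses = np.array([0.8, 1.5, 0.3])   # illustrative per-task losses at one step
w = random_loss_weights(len(task_losses), rng)
total_loss = float(w @ task_losses)       # scalar objective actually optimized
print(w.sum(), total_loss)
```

If this cheap baseline matches a sophisticated task-balancing scheme, that is the "litmus test" result the title refers to.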

Scalarization for multi-task and multi-domain learning at scale

A Royer, T Blankevoort… - Advances in Neural …, 2023 - proceedings.neurips.cc
Training a single model on multiple input domains and/or output tasks allows for
compressing information from multiple sources into a unified backbone, hence improving …
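Scalarization in this sense reduces the multi-task objective to a single scalar loss via a fixed convex combination of per-task losses. The loss values and the uniform weighting below are illustrative assumptions, not numbers from the paper.

```python
# Hypothetical per-task/per-domain losses for a shared model at one step.
losses = {"domain_a": 0.9, "domain_b": 0.4, "task_seg": 1.2}

def scalarize(losses, weights):
    """Reduce multiple task/domain losses to one training objective
    via a fixed convex combination (the scalarization baseline)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * losses[k] for k in losses)

uniform = {k: 1.0 / len(losses) for k in losses}
print(round(scalarize(losses, uniform), 6))  # → 0.833333
```

Sweeping the weight vector over the simplex traces out the trade-off between tasks, which is what large-scale scalarization studies compare against more elaborate multi-task optimizers.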

MuMu: Cooperative multitask learning-based guided multimodal fusion

MM Islam, T Iqbal - Proceedings of the AAAI conference on artificial …, 2022 - ojs.aaai.org
Multimodal sensors (visual, non-visual, and wearable) can provide complementary
information to develop robust perception systems for recognizing activities accurately …

Aggregating from multiple target-shifted sources

C Shui, Z Li, J Li, C Gagné, CX Ling… - … on Machine Learning, 2021 - proceedings.mlr.press
Multi-source domain adaptation aims at leveraging the knowledge from multiple tasks for
predicting a related target domain. Hence, a crucial aspect is to properly combine different …

Transfer learning via minimizing the performance gap between domains

B Wang, J Mendez, M Cai… - Advances in neural …, 2019 - proceedings.neurips.cc
We propose a new principle for transfer learning, based on a straightforward intuition: if two
domains are similar to each other, the model trained on one domain should also perform …