Decentralized federated learning: Fundamentals, state of the art, frameworks, trends, and challenges

ETM Beltrán, MQ Pérez, PMS Sánchez… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
In recent years, Federated Learning (FL) has gained relevance in training collaborative
models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the …

A systematic review on affective computing: Emotion models, databases, and recent advances

Y Wang, W Song, W Tao, A Liotta, D Yang, X Li, S Gao… - Information …, 2022 - Elsevier
Affective computing conjoins the research topics of emotion recognition and sentiment
analysis, and can be realized with unimodal or multimodal data, consisting primarily of …

Direct preference optimization: Your language model is secretly a reward model

R Rafailov, A Sharma, E Mitchell… - Advances in …, 2024 - proceedings.neurips.cc
While large-scale unsupervised language models (LMs) learn broad world knowledge and
some reasoning skills, achieving precise control of their behavior is difficult due to the …
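
For orientation, the paper's key construction fits in one formula: RLHF's reward modeling plus reinforcement learning is replaced by a single classification loss over preference pairs. Reconstructed from the published paper (not from the truncated snippet above):

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]

where y_w and y_l are the preferred and dispreferred completions for prompt x, \pi_{\mathrm{ref}} is the frozen reference policy, \sigma is the logistic function, and \beta scales the implicit reward.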

Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond

J Yang, H Jin, R Tang, X Han, Q Feng, H Jiang… - ACM Transactions on …, 2024 - dl.acm.org
This article presents a comprehensive and practical guide for practitioners and end-users
working with Large Language Models (LLMs) in their downstream Natural Language …

Holistic evaluation of language models

P Liang, R Bommasani, T Lee, D Tsipras… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …

Large language model as attributed training data generator: A tale of diversity and bias

Y Yu, Y Zhuang, J Zhang, Y Meng… - Advances in …, 2024 - proceedings.neurips.cc
Large language models (LLMs) have been recently leveraged as training data generators
for various natural language processing (NLP) tasks. While previous research has explored …

RAFT: Reward ranked finetuning for generative foundation model alignment

H Dong, W Xiong, D Goyal, Y Zhang, W Chow… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative foundation models are susceptible to implicit biases that can arise from
extensive unsupervised training data. Such biases can produce suboptimal samples …
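
The title names the method's loop: sample several candidate responses, rank them with a reward model, and finetune on the top-ranked ones. A minimal runnable sketch of that loop follows; the toy generator and reward function are stand-ins invented here, not the paper's setup (real use would sample from an LM and score with a learned reward model):

import random

# One round of reward-ranked filtering in the style of RAFT.
def generate(prompt: str) -> str:
    # Toy stand-in for sampling a response from a language model.
    return prompt + " -> " + random.choice(["response A", "response B", "response C"])

def reward(prompt: str, response: str) -> float:
    # Toy stand-in for a learned reward model; prefers shorter outputs.
    return -float(len(response))

def raft_round(prompts, k=8, keep=1):
    finetune_set = []
    for x in prompts:
        candidates = [generate(x) for _ in range(k)]        # sample k responses
        ranked = sorted(candidates, key=lambda y: reward(x, y), reverse=True)
        finetune_set.extend((x, y) for y in ranked[:keep])  # keep the best-ranked
    return finetune_set  # pairs that would feed an ordinary supervised finetuning step

print(raft_round(["Summarize RAFT in one line."], k=4))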

Modern language models refute Chomsky's approach to language

ST Piantadosi - From fieldwork to linguistic theory: A tribute to …, 2023 - books.google.com
Modern machine learning has subverted and bypassed the theoretical framework of
Chomsky's generative approach to linguistics, including its core claims to particular insights …

Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers

D Dai, Y Sun, L Dong, Y Hao, S Ma, Z Sui… - arXiv preprint arXiv …, 2022 - arxiv.org
Large pretrained language models have shown surprising in-context learning (ICL) ability.
With a few demonstration input-label pairs, they can predict the label for an unseen input …
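
Since the snippet describes the ICL setup concretely (demonstration input-label pairs followed by an unseen input), a minimal sketch of assembling such a few-shot prompt may help; the sentiment demonstrations and labels below are invented for illustration, and no particular model API is assumed:

# Build a few-shot prompt from demonstration input-label pairs.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
query = "A tedious, joyless slog."

# Concatenate the demonstrations, then append the unseen input;
# the model is expected to continue the pattern with the matching label.
prompt = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
prompt += f"\nInput: {query}\nLabel:"
print(prompt)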

ADBench: Anomaly detection benchmark

S Han, X Hu, H Huang, M Jiang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Given a long list of anomaly detection algorithms developed in the last few decades, how do
they perform with regard to (i) varying levels of supervision, (ii) different types of anomalies …