Decentralized federated learning: Fundamentals, state of the art, frameworks, trends, and challenges
In recent years, Federated Learning (FL) has gained relevance in training collaborative
models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the …
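As a point of reference for the CFL baseline the abstract contrasts with decentralized FL, here is a minimal FedAvg-style aggregation sketch; the parameter-dict representation and client API are illustrative assumptions, not the survey's code.

```python
def fedavg(client_states, client_sizes):
    """Centralized FL aggregation: the server averages client model
    parameters, weighted by each client's local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * state[name]
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }

# Two clients holding 100 and 300 examples; "models" are toy param dicts.
global_state = fedavg(
    client_states=[{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 1.0}],
    client_sizes=[100, 300],
)
print(global_state)   # {'w': 2.5, 'b': 0.75}
```

Decentralized FL removes the central server from this loop: clients exchange and average parameters directly with neighbors instead.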
A systematic review on affective computing: Emotion models, databases, and recent advances
Affective computing conjoins the research topics of emotion recognition and sentiment
analysis, and can be realized with unimodal or multimodal data, consisting primarily of …
Direct preference optimization: Your language model is secretly a reward model
While large-scale unsupervised language models (LMs) learn broad world knowledge and
some reasoning skills, achieving precise control of their behavior is difficult due to the …
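The DPO objective itself is compact enough to sketch: it trains the policy directly on preference pairs with a logistic loss over implicit rewards, with no separate reward model or RL loop. The tensor names below are illustrative; each holds summed log-probabilities of a completion.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss on a batch of preference pairs.
    Inputs are log-probs of the chosen/rejected completion under the
    trainable policy and the frozen reference model."""
    # Implicit reward: beta * log(pi_theta / pi_ref) for each completion.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry logistic loss: prefer the chosen completion.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```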
Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond
This article presents a comprehensive and practical guide for practitioners and end-users
working with Large Language Models (LLMs) in their downstream Natural Language …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Large language model as attributed training data generator: A tale of diversity and bias
Large language models (LLMs) have been recently leveraged as training data generators
for various natural language processing (NLP) tasks. While previous research has explored …
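The attributed-generation idea can be pictured as conditioning the LLM prompt on sampled attributes (topic, style, length, and so on) so the synthetic data stays diverse and its composition is auditable. The attribute names below are illustrative assumptions, not the paper's configuration.

```python
import random

ATTRIBUTES = {
    "topic": ["sports", "politics", "science"],
    "style": ["formal", "casual"],
    "length": ["one sentence", "one paragraph"],
}

def attributed_prompt():
    """Sample one value per attribute and render a generation prompt."""
    attrs = {k: random.choice(v) for k, v in ATTRIBUTES.items()}
    prompt = (f"Write a {attrs['length']}, {attrs['style']} news snippet "
              f"about {attrs['topic']}.")
    return attrs, prompt

# Storing attrs alongside each generated example lets diversity and bias
# in the synthetic training set be measured after the fact.
```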
RAFT: Reward ranked finetuning for generative foundation model alignment
Generative foundation models are susceptible to implicit biases that can arise from
extensive unsupervised training data. Such biases can produce suboptimal samples …
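The reward-ranked finetuning loop named in the title can be sketched as best-of-k filtering followed by supervised fine-tuning; the callables below (`generate`, `reward_fn`, `finetune`) are placeholders standing in for a sampler, a reward model, and an SFT step, not the authors' code.

```python
def raft_step(prompts, generate, reward_fn, finetune, k=8, keep_frac=0.25):
    """One reward-ranked finetuning iteration: sample k candidates per
    prompt, keep the top-ranked fraction, fine-tune on the survivors."""
    dataset = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(k)]
        ranked = sorted(candidates, key=reward_fn, reverse=True)
        n_keep = max(1, int(len(ranked) * keep_frac))
        dataset += [(prompt, resp) for resp in ranked[:n_keep]]
    finetune(dataset)   # standard supervised fine-tuning on filtered samples
    return dataset
```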
Modern language models refute Chomsky's approach to language
ST Piantadosi - From fieldwork to linguistic theory: A tribute to …, 2023 - books.google.com
Modern machine learning has subverted and bypassed the theoretical framework of
Chomsky's generative approach to linguistics, including its core claims to particular insights …
Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers
Large pretrained language models have shown surprising in-context learning (ICL) ability.
With a few demonstration input-label pairs, they can predict the label for an unseen input …
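The in-context learning setup the abstract describes amounts to concatenating demonstration pairs with an unlabeled query; the exact template below is an illustrative convention, not the paper's prompt.

```python
def build_icl_prompt(demonstrations, query):
    """Format k (input, label) demonstrations plus an unlabeled query
    into a single few-shot prompt for in-context learning."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [("the movie was wonderful", "positive"),
         ("a tedious, overlong mess", "negative")]
prompt = build_icl_prompt(demos, "sharp writing and a great cast")
# The LM fills in the final "Label:" slot with no weight update; the paper
# argues this forward pass behaves like an implicit gradient-descent step.
```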
ADBench: Anomaly detection benchmark
Given a long list of anomaly detection algorithms developed in the last few decades, how do
they perform with regard to (i) varying levels of supervision, (ii) different types of anomalies …
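To make the benchmarking setup concrete, here is a minimal evaluation of one unsupervised detector in the style such a benchmark compares; the detector choice, synthetic data, and ROC-AUC metric reflect common practice, not ADBench's exact protocol.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(500, 2))    # inliers
X_anom = rng.uniform(-6, 6, size=(25, 2))     # scattered anomalies
X = np.vstack([X_normal, X_anom])
y = np.r_[np.zeros(500), np.ones(25)]         # 1 = anomaly

clf = IsolationForest(random_state=0).fit(X)  # fully unsupervised detector
scores = -clf.score_samples(X)                # higher = more anomalous
print("ROC AUC:", roc_auc_score(y, scores))
```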