Machine thinking, fast and slow

JF Bonnefon, I Rahwan - Trends in Cognitive Sciences, 2020 - cell.com
Machines do not 'think fast and slow' in the sense that humans do in dual-process models of
cognition. However, the people who create the machines may attempt to emulate or simulate …

Thinking fast and slow in AI

G Booch, F Fabiano, L Horesh, K Kate… - Proceedings of the …, 2021 - ojs.aaai.org
This paper proposes a research direction to advance AI which draws inspiration from
cognitive theories of human decision making. The premise is that if we gain insights about …

From AI ethics principles to data science practice: a reflection and a gap analysis based on recent frameworks and practical experience

I Georgieva, C Lazo, T Timan, AF Van Veenstra - AI and Ethics, 2022 - Springer
In the field of AI ethics, after the introduction of ethical frameworks and the evaluation
thereof, we seem to have arrived at a third wave in which the operationalisation of ethics is …

Taking principles seriously: A hybrid approach to value alignment in artificial intelligence

TW Kim, J Hooker, T Donaldson - Journal of Artificial Intelligence Research, 2021 - jair.org
An important step in the development of value alignment (VA) systems in artificial
intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose …

The roles and modes of human interactions with automated machine learning systems: A critical review and perspectives

TT Khuat, DJ Kedziora, B Gabrys - Foundations and Trends® …, 2023 - nowpublishers.com
As automated machine learning (AutoML) systems continue to progress in both
sophistication and performance, it becomes important to understand the 'how' and 'why' of …

Thinking fast and slow in AI: The role of metacognition

MB Ganapini, M Campbell, F Fabiano, L Horesh… - … Conference on Machine …, 2022 - Springer
Artificial intelligence (AI) still lacks human capabilities, like adaptability, generalizability, self-
control, consistency, common sense, and causal reasoning. Humans achieve some of these …

When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data

E Awad, S Levine, A Loreggia, N Mattei… - Autonomous Agents and …, 2024 - Springer
Constraining the actions of AI systems is one promising way to ensure that these systems
behave in a way that is morally acceptable to humans. But constraints alone come with …

Fast and slow goal recognition

M Chiari, AE Gerevini, A Loreggia, L Putelli… - Proceedings of …, 2024 - iris.unibs.it
Goal recognition is a crucial aspect of understanding the intentions and objectives of agents
by observing some of their actions. The most prominent approaches to goal recognition can …

Voting with random classifiers (VORACE): theoretical and experimental analysis

C Cornelio, M Donini, A Loreggia, MS Pini… - Autonomous Agents and …, 2021 - Springer
In many machine learning scenarios, looking for the best classifier that fits a particular
dataset can be very costly in terms of time and resources. Moreover, it can require deep …

Engineering responsible and explainable models in human-agent collectives

DB Abeywickrama, SD Ramchurn - Applied Artificial Intelligence, 2024 - Taylor & Francis
In human-agent collectives, humans and agents need to work collaboratively and agree on
collective decisions. However, ensuring that agents responsibly make decisions is a …