Found 77
Fairness in Algorithmic Profiling: The AMAS Case
Achterhold E., Mühlböck M., Steiber N., Kern C.
Q1
Springer Nature
Minds and Machines, 2025, citations: 0, doi.org, Abstract
We study a controversial application of algorithmic profiling in the public sector, the Austrian AMAS system. AMAS was supposed to help caseworkers at the Public Employment Service (PES) Austria to allocate support measures to job seekers based on their predicted chance of (re-)integration into the labor market. Shortly after its release, AMAS was criticized for its apparent unequal treatment of job seekers based on gender and citizenship. We systematically investigate the AMAS model using a novel real-world dataset of young job seekers from Vienna, which allows us to provide the first empirical evaluation of the AMAS model with a focus on fairness measures. We further apply bias mitigation strategies to study their effectiveness in our real-world setting. Our findings indicate that the prediction performance of the AMAS model is insufficient for use in practice, as more than 30% of job seekers would be misclassified in our use case. Further, our results confirm that the original model is biased with respect to gender, as it tends to (incorrectly) assign women to the group with high chances of re-employment, which is not prioritized in the PES’ allocation of support measures. However, most bias mitigation strategies were able to improve fairness without compromising performance and thus may form an important building block in revising profiling schemes in the present context.
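The abstract refers to fairness measures and bias mitigation only in general terms. As a rough, hedged illustration of the kind of group-level check involved (not the authors' actual evaluation; the column names and toy data are hypothetical), one can compare how often each gender group is assigned to the high-chance class and how often that assignment is incorrect:

```python
# Illustrative sketch only: per-group rate of assignment to the "high chance of
# re-employment" class and per-group false-positive rate. Column names
# ("gender", "y_true", "y_pred") are hypothetical placeholders.
import pandas as pd

def group_rates(df: pd.DataFrame, group_col: str = "gender") -> pd.DataFrame:
    def rates(g: pd.DataFrame) -> pd.Series:
        fp = ((g["y_pred"] == 1) & (g["y_true"] == 0)).sum()
        neg = (g["y_true"] == 0).sum()
        return pd.Series({
            "high_chance_rate": (g["y_pred"] == 1).mean(),
            "false_positive_rate": fp / neg if neg else float("nan"),
        })
    return df.groupby(group_col).apply(rates)

# Toy usage: large gaps between the two rows would indicate group-level bias.
df = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m"],
    "y_true": [0, 1, 0, 0, 1, 0],
    "y_pred": [1, 1, 1, 0, 1, 0],
})
print(group_rates(df))
```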
Statistical Learning Theory and Occam’s Razor: The Core Argument
Sterkenburg T.F.
Q1
Springer Nature
Minds and Machines, 2024, citations: 0, doi.org, Abstract
Statistical learning theory is often associated with the principle of Occam’s razor, which recommends a simplicity preference in inductive inference. This paper distills the core argument for simplicity obtainable from statistical learning theory, built on the theory’s central learning guarantee for the method of empirical risk minimization. This core “means-ends” argument is that a simpler hypothesis class or inductive model is better because it has better learning guarantees; however, these guarantees are model-relative, and so the theoretical push towards simplicity is checked by our prior knowledge.
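The abstract does not state the learning guarantee it builds on; one standard textbook form of it, for empirical risk minimization over a finite hypothesis class $\mathcal{H}$ with $m$ i.i.d. samples and loss bounded in $[0,1]$, is the uniform-convergence bound sketched below (an illustration of the general shape of such guarantees, not a claim about the paper's exact formulation):

```latex
% With probability at least 1 - \delta, simultaneously for every h in H:
R(h) \;\le\; \widehat{R}_m(h) \;+\; \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(2/\delta)}{2m}}
% R(h): true risk; \widehat{R}_m(h): empirical risk on the m samples.
```

The model-relativity the paper stresses is visible in the bound: shrinking $\mathcal{H}$ tightens the deviation term, but the guarantee only relates a hypothesis to the best performance available inside $\mathcal{H}$ itself.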
Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs
Langer M., Baum K., Schlicker N.
Q1
Springer Nature
Minds and Machines, 2024, citations: 0, doi.org, Abstract
Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is, oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions for effective human oversight. We argue that the reliable detection of errors (as an umbrella term for inaccuracies and unfairness) is crucial for effective human oversight. We then propose that Signal Detection Theory (SDT) offers a promising framework for better understanding what affects people’s sensitivity (i.e., how well they are able to detect errors) and response bias (i.e., the tendency to report errors given perceived evidence of an error) in detecting errors. Whereas an SDT perspective on the detection of inaccuracies is straightforward, we demonstrate its broader applicability by detailing the specifics of an SDT perspective on unfairness detection, including the need to choose a standard for (un)fairness. Additionally, we illustrate that an SDT perspective helps to better understand the conditions for effective error detection by showing examples of task-, system-, and person-related factors that may affect the sensitivity and response bias of humans tasked with detecting unfairness associated with the use of AI-based systems. Finally, we discuss future research directions for an SDT perspective on error detection.
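In equal-variance SDT, the two quantities the abstract appeals to are computed from an overseer's hit rate $H$ (errors correctly flagged) and false-alarm rate $F$ (correct outputs flagged as erroneous); the textbook formulas below are given only to make the notions concrete, not as the authors' own notation:

```latex
% z denotes the inverse of the standard normal CDF.
d' = z(H) - z(F)                           % sensitivity: ability to discriminate erroneous from correct outputs
c  = -\tfrac{1}{2}\bigl(z(H) + z(F)\bigr)  % response bias (criterion): overall tendency to report an error
```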
Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
Hagendorff T.
Q1
Springer Nature
Minds and Machines, 2024, citations: 11, Review, doi.org, Abstract
The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.
Toward a Responsible Fairness Analysis: From Binary to Multiclass and Multigroup Assessment in Graph Neural Network-Based User Modeling Tasks
Purificato E., Boratto L., De Luca E.W.
Q1
Springer Nature
Minds and Machines, 2024, citations: 1, doi.org, Abstract
User modeling is a key topic in many applications, mainly social networks and information retrieval systems. To assess the effectiveness of a user modeling approach, its capability to classify personal characteristics (e.g., the gender, age, or consumption grade of the users) is evaluated. Because some of the attributes to predict are multiclass (e.g., age usually encompasses multiple ranges), assessing fairness in user modeling becomes a challenge, since most of the related metrics work with binary attributes. As a workaround, the original multiclass attributes are usually binarized to meet standard fairness metric definitions in which both the target class and the sensitive attribute (such as gender or age) are binary. However, this alters the original conditions, and fairness is evaluated on classes that differ from those used in the classification. In this article, we extend the definitions of four existing fairness metrics (related to disparate impact and disparate mistreatment) from binary to multiclass scenarios, considering different settings where either the target class or the sensitive attribute includes more than two groups. Our work endeavors to bridge the gap between formal definitions and real use cases in bias detection. The results of the experiments, conducted on four real-world datasets by leveraging two state-of-the-art graph neural network-based models for user modeling, show that the proposed generalization of fairness metrics can lead to a more effective and fine-grained comprehension of disadvantaged sensitive groups and, in some cases, to a better analysis of machine learning models originally deemed to be fair. The source code and the preprocessed datasets are available at the following link: https://github.com/erasmopurif/toward-responsible-fairness-analysis.
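As a hedged sketch of the kind of generalization described (the paper's exact metric definitions are in the article and the linked repository), one multiclass analogue of statistical parity can be computed as the largest gap, over predicted classes, between the rates at which different sensitive groups receive that class. The function below is an illustration of the idea, not the authors' implementation:

```python
# Illustrative multiclass statistical-parity gap: for each predicted class,
# compare how often each sensitive group receives it and report the worst gap.
import numpy as np

def multiclass_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    worst = 0.0
    for c in np.unique(y_pred):
        rates = [np.mean(y_pred[sensitive == g] == c) for g in np.unique(sensitive)]
        worst = max(worst, max(rates) - min(rates))
    return worst  # 0.0 means identical predicted-class distributions across groups

# Toy usage: three age groups, three predicted consumption grades.
y_pred = np.array([0, 1, 2, 2, 1, 0, 2, 2, 1])
age_group = np.array(["<30", "<30", "<30", "30-50", "30-50", "30-50", ">50", ">50", ">50"])
print(multiclass_parity_gap(y_pred, age_group))
```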
Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena
Freiesleben T., König G., Molnar C., Tejero-Cantero Á.
Q1
Springer Nature
Minds and Machines, 2024, citations: 1, doi.org, Abstract
To learn about real-world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g., neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods—termed ‘property descriptors’—that illuminate not just the model, but also the phenomenon it represents. We demonstrate that property descriptors, grounded in statistical learning theory, can effectively reveal relevant properties of the joint probability distribution of the observational data. We identify existing IML methods suited for scientific inference and provide a guide for developing new descriptors with quantified epistemic uncertainty. Our framework empowers scientists to harness ML models for inference, and provides directions for future IML research to support scientific understanding.
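One familiar IML method that can be read in this descriptor-like spirit is partial dependence, which summarises how a fitted model's average prediction varies with one feature. The sketch below is only a generic illustration of such a method (any fitted estimator with a `predict` method is assumed); it is not the paper's formal framework, and reading the resulting curve as a property of the data-generating distribution requires additional assumptions discussed in the IML literature:

```python
# Minimal partial-dependence sketch: sweep feature j over a grid while holding
# the empirical distribution of the remaining features fixed, and average the
# model's predictions at each grid value.
import numpy as np

def partial_dependence(model, X: np.ndarray, j: int, grid: np.ndarray) -> np.ndarray:
    """model: any fitted estimator with .predict(X); X: (n, d) data matrix."""
    values = np.empty(len(grid))
    for k, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, j] = v            # set feature j to the grid value for all rows
        values[k] = model.predict(X_mod).mean()
    return values
```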
“The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making
Szafran D., Bach R.L.
Q1
Springer Nature
Minds and Machines, 2024, citations: 0, doi.org, Abstract
The increasing use of algorithms in allocating resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios like the legal justice system might lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical fairness notions, they do not consider how members of the public, as the subjects most affected by algorithmic decisions, perceive fairness in ADM. To shed light on the subjective fairness perceptions of individuals, this study analyzes individuals’ answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts. Subsequently, they explained their fairness evaluation by providing a textual answer. Using qualitative content analysis, we inductively coded those answers (N = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects related to fairness in ADM, which is reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: Human elements in decision-making, Shortcomings of the data, Social impact of AI, and Properties of AI. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions about ADM.
The New Mechanistic Approach and Cognitive Ontology—Or: What Role do (Neural) Mechanisms Play in Cognitive Ontology?
Krickel B.
Q1
Springer Nature
Minds and Machines, 2024, citations: 0, doi.org, Abstract
Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for neural mechanisms, as understood by the so-called new mechanistic approach. In this article, I will show that this new mechanistic answer is confronted with what I call the triviality problem. A discussion of this problem will show that one cannot derive a meaningful cognitive ontology from neural mechanisms alone. Nonetheless, neural mechanisms play a crucial role in the discovery of a cognitive ontology because they are epistemic proxies for best systematizations.
Black-Box Testing and Auditing of Bias in ADM Systems
Krafft T.D., Hauer M.P., Zweig K.
Q1
Springer Nature
Minds and Machines, 2024, citations: 0, doi.org, Abstract
For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals or creditworthiness, or the many small decision-computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate, be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. The scientific testing and auditing literature rarely focuses on the specific needs of such investigations and suffers from ambiguous terminologies. With this paper, we aim to support this investigation process by collecting, explaining, and categorizing methods of testing for bias that are applicable to black-box systems, given that inputs and respective outputs can be observed. For this purpose, we provide a taxonomy that can be used to select suitable test methods adapted to the respective situation. This taxonomy takes multiple aspects into account, for example the effort to implement a given test method, its technical requirements (such as the need for ground truth), and the social constraints of the investigation, e.g., the protection of business secrets. Furthermore, we analyze which test method can be used in the context of which black-box audit concept. It turns out that various factors, such as the type of black-box audit or the lack of an oracle, may limit the selection of applicable tests. With the help of this paper, people or organizations who want to test an ADM system for bias can identify which test methods and auditing concepts are applicable and what implications they entail.
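The paper's starting point is that only the inputs and outputs of the ADM system can be observed. A minimal black-box probe in that spirit (hypothetical function names; a generic sketch, not a method from the paper's taxonomy) queries the system on collected cases and compares outcome rates across protected groups:

```python
# Minimal black-box probe: the ADM system is accessible only as a callable
# decide(case) -> 0/1; we record outcomes and compare rates across groups.
# The names decide, cases, and group_of are hypothetical placeholders.
from collections import defaultdict
from typing import Callable, Hashable, Iterable

def outcome_rates(decide: Callable[[dict], int],
                  cases: Iterable[dict],
                  group_of: Callable[[dict], Hashable]) -> dict:
    totals, positives = defaultdict(int), defaultdict(int)
    for case in cases:
        g = group_of(case)
        totals[g] += 1
        positives[g] += decide(case)      # only input/output behaviour is observed
    return {g: positives[g] / totals[g] for g in totals}

# Large gaps between groups would flag the system for closer auditing, e.g. with
# ground-truth data, which this simple probe deliberately does not require.
```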
Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems
Kozcuer C., Mollen A., Bießmann F.
Q1
Springer Nature
Minds and Machines, 2024, citations: 1, doi.org, Abstract
Research on fairness in machine learning (ML) has largely focused on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale, these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a case study on a disaster response system using images from online social media. In the presented case, ML systems are used as a support tool in categorizing and classifying images from social media after a disaster event, as an almost instantly available source of information for coordinating disaster response. We present an empirical analysis assessing the transnational fairness of the application’s outputs, based on national socio-demographic development indicators as potentially discriminatory attributes. In doing so, the paper combines interdisciplinary perspectives from data analytics, ML, digital media studies, and media sociology in order to address fairness beyond the technical system. The case study reflects an embedded perspective on people’s everyday media use and on social media platforms as producers of sociality and processors of data, with relevance far beyond the case of algorithmic fairness in disaster scenarios. Especially in light of the concentration of artificial intelligence (AI) development in the Global North and a perceived hegemonic constellation, we argue that transnational fairness offers a perspective on global injustices in relation to AI development and application that has the potential to substantiate discussions by identifying gaps in data and technology. These analyses will ultimately enable researchers and policy makers to derive actionable insights that could alleviate existing problems with the fair use of AI technology and mitigate risks associated with future developments.
AI Within Online Discussions: Rational, Civil, Privileged?
Carstens J.A., Friess D.
Q1
Springer Nature
Minds and Machines, 2024, citations: 1, doi.org, Abstract
While early optimists have seen online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered as a solution to foster deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions heavily focus on the deliberative norms of rationality and civility. In the operationalization of those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on the detection of argumentative structures in argument mining or on verbal markers of supposedly uncivil comments. If the fairness of such tools is considered, the focus lies on data bias and an input–output frame of the problem. We argue that looking beyond bias and analyzing such applications through a sociotechnical frame reveals how they interact with social hierarchies and inequalities, reproducing patterns of exclusion. The current focus on verbal markers of incivility and argument mining risks excluding minority voices and privileging those who have more access to education. Finally, we present a normative argument for why examining AI tools for online discourses through a sociotechnical frame is ethically preferable, as ignoring the predictable negative effects we describe would constitute a form of objectionable indifference.
Philosophical Lessons for Emotion Recognition Technology
Waelen R.
Q1
Springer Nature
Minds and Machines, 2024, citations: 1, doi.org, Abstract
Emotion recognition technology uses artificial intelligence to make inferences about a person’s emotions on the basis of their facial expressions, body language, tone of voice, or other types of input. Underlying such technology are a variety of assumptions about the manifestation, nature, and value of emotions. To assure the quality and desirability of emotion recognition technology, it is important to critically assess the assumptions embedded in the technology. Within philosophy, there is a long tradition of epistemological, ontological, phenomenological, and ethical reflection on the manifestation, nature, and value of emotions. This article draws from this tradition of philosophy of emotions in order to challenge the assumptions underlying current emotion recognition technology and to promote a more critical engagement with the concept of emotions in the tech industry.
Gamification, Side Effects, and Praise and Blame for Outcomes
Nyholm S.
Q1
Springer Nature
Minds and Machines, 2024, citations: 2, doi.org, Abstract
“Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes as a side effect of playing the game. The side effect might be good for the user (e.g., improving her health) and/or good for the company or organization behind the game (e.g., advertising their products, increasing their profits, etc.). The “players” of the game may or may not be aware of creating these side effects, and they may or may not approve of or endorse the creation of those side effects. The organizations behind the games, in contrast, are typically directly aiming to create games that have the side effects in question. These aspects of gamification are puzzling and interesting from the point of view of philosophical analyses of agency and responsibility for outcomes. In this paper, I relate these aspects of gamification to philosophical discussions of responsibility gaps, the ethics of side effects (including the Knobe effect and the doctrine of double effect), and ideas about the relations among different parties’ agency.
The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel
Nemat A.T., Becker S.J., Lucas S., Thomas S., Gadea I., Charton J.E.
Q1
Springer Nature
Minds and Machines, 2023, citations: 1, doi.org, Abstract
Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at many organisations – the work of an interdisciplinary ethics panel. The PaRA tool serves to guide and harmonise the work of the Digital Ethics Advisory Panel at the multinational science and technology company Merck KGaA, in alignment with the principles outlined in the company’s Code of Digital Ethics. We examine how such a tool can be used as part of a multifaceted approach to operationalise high-level principles at an organisational level and provide general requirements for its implementation. We showcase its application in an example case dealing with the comprehensibility of consent forms in a data-sharing context at Syntropy, a collaborative technology platform for clinical research.
Reasoning with Concepts: A Unifying Framework
Gärdenfors P., Osta-Vélez M.
Q1
Springer Nature
Minds and Machines, 2023, citations: 5, doi.org, Abstract
Over the past few decades, cognitive science has identified several forms of reasoning that make essential use of conceptual knowledge. Despite significant theoretical and empirical progress, there is still no unified framework for understanding how concepts are used in reasoning. This paper argues that the theory of conceptual spaces is capable of filling this gap. Our strategy is to demonstrate how various inference mechanisms which clearly rely on conceptual information—including similarity, typicality, and diagnosticity-based reasoning—can be modeled using principles derived from conceptual spaces. Our first topic is the role of expectations in inductive reasoning and their relation to the structure of our concepts. As a second topic, we examine the relationship between the use of generic expressions in natural language and common-sense reasoning. We propose that the strength of a generic can be described by distances between properties and prototypes in conceptual spaces. Our third topic is category-based induction. We demonstrate that the theory of conceptual spaces can serve as a comprehensive model for this type of reasoning. The final topic is analogy. We review some proposals in this area, present a taxonomy of analogical relations, and show how to model them in terms of distances in conceptual spaces. We also briefly discuss the implications of the model for reasoning with concepts in artificial systems.
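To make the distance-based idea concrete (a toy illustration only, not the authors' model): in a conceptual space an item is a point, a concept is summarised by a prototype point, and similarity is commonly modelled as decaying exponentially with distance, so prototype-based judgments reduce to distance comparisons.

```python
# Toy conceptual-space sketch: concepts as prototype points in a quality space,
# similarity as exponential decay of Euclidean distance, categorisation by the
# most similar prototype. Dimensions and coordinates are invented for illustration.
import numpy as np

prototypes = {                      # quality dimensions: (sweetness, size)
    "apple": np.array([0.6, 0.4]),
    "lemon": np.array([0.1, 0.3]),
    "melon": np.array([0.7, 0.9]),
}

def similarity(x: np.ndarray, p: np.ndarray, c: float = 2.0) -> float:
    return float(np.exp(-c * np.linalg.norm(x - p)))   # closer points are more similar

item = np.array([0.55, 0.5])
scores = {name: similarity(item, p) for name, p in prototypes.items()}
print(max(scores, key=scores.get), scores)             # the item is judged most apple-like
```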
An Alternative to Cognitivism: Computational Phenomenology for Deep Learning
Beckmann P., Köstner G., Hipólito I.
Q1
Springer Nature
Minds and Machines, 2023, citations: 6, doi.org, Abstract
We propose a non-representationalist framework for deep learning relying on a novel method, computational phenomenology: a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities with the idea that the brain operates on symbolic representations of these entities. We proceed as follows: after offering a review of cognitivism and neuro-representationalism in the field of deep learning, we first elaborate a phenomenological critique of these positions; we then sketch out computational phenomenology and distinguish it from existing alternatives; finally, we apply this new method to deep learning models trained on specific tasks in order to formulate a conceptual framework of deep learning that allows one to think of artificial neural networks’ mechanisms in terms of lived experience.
True Turing: A Bird’s-Eye View
Daylight E.
Q1
Springer Nature
Minds and Machines, 2023, citations: 3, doi.org, Abstract
Alan Turing is often portrayed as a materialist in the secondary literature. In the present article, I suggest that Turing was instead an idealist, inspired by the Cambridge scholars Arthur Eddington, Ernest Hobson, James Jeans, and John McTaggart. I outline Turing’s developing thoughts and his legacy in the USA to date. Specifically, I contrast Turing’s two notions of computability (both from 1936) and distinguish between Turing’s “machine intelligence” in the UK and the more well-known “artificial intelligence” in the USA. According to my proposed historical interpretation, Turing did not view computations in the real world as exhaustively and deterministically characterized by his automatic machines from 1936.
Developing Artificial Human-Like Arithmetical Intelligence (and Why)
Pantsar M.
Q1
Springer Nature
Minds and Machines, 2023, citations: 4, doi.org, Abstract
Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially shed light on the development of human numerical abilities, from the proto-arithmetical abilities of subitizing and estimating to counting procedures. Although the current results are far from conclusive and much more work is needed, I argue that AI research should be included in the interdisciplinary toolbox when we try to explain the development and character of numerical cognition and arithmetical intelligence. This also makes it relevant for the epistemology of mathematics.
How a Minimal Learning Agent can Infer the Existence of Unobserved Variables in a Complex Environment
Eva B., Ried K., Müller T., Briegel H.J.
Q1
Springer Nature
Minds and Machines, 2022, citations: 5, doi.org, Abstract
According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is amongst the most characteristic indicators of meaningful deliberative thought in an organism or agent. In this article, we show how the ability to develop and utilise abstract conceptual structures can be achieved by a particular kind of learning agent. More specifically, we provide and motivate a concrete operational definition of what it means for these agents to be in possession of abstract concepts, before presenting an explicit example of a minimal architecture that supports this capability. We then proceed to demonstrate how the existence of abstract conceptual structures can be operationally useful in the process of employing previously acquired knowledge in the face of new experiences, thereby vindicating the natural conjecture that the cognitive functions of abstraction and generalisation are closely related.
A Model Solution: On the Compatibility of Predictive Processing and Embodied Cognition
Kersten L.
Q1
Springer Nature
Minds and Machines, 2022, citations: 4, doi.org, Abstract
Predictive processing (PP) and embodied cognition (EC) have emerged as two influential approaches within cognitive science in recent years. Not only have PP and EC been heralded as “revolutions” and “paradigm shifts”, but they have also motivated a number of new and interesting areas of research. This has prompted some to wonder how compatible the two views might be. This paper weighs in on the issue of PP-EC compatibility. After outlining two recent proposals, I argue that further clarity can be achieved on the issue by considering a model of scientific progress. Specifically, I suggest that Larry Laudan’s “problem solving model” can provide important insights into a number of outstanding challenges that face existing accounts of PP-EC compatibility. I conclude by outlining additional implications of the problem solving model for PP and EC more generally.
Causal and Evidential Conditionals
Günther M.
Q1
Springer Nature
Minds and Machines, 2022, citations: 2, doi.org, Abstract
We put forth an account of when to believe causal and evidential conditionals. The basic idea is to embed a causal model in an agent’s belief state, for the evaluation of conditionals seems to be relative to beliefs about both particular facts and causal relations. Unlike other attempts using causal models, we show that ours can account rather well not only for various causal conditionals but also for evidential ones.
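A minimal illustration of embedding causal information in a belief state (a toy sketch, not the authors' formal account): represent the believed causal relations as structural equations and evaluate a causal conditional by intervening on its antecedent, whereas an evidential reading would instead condition on observing the antecedent.

```python
# Toy structural model (boolean): the sprinkler is off when it rains, and the
# grass is wet if it rains or the sprinkler is on. Purely illustrative.
from typing import Optional

def simulate(rain: bool, do_sprinkler: Optional[bool] = None) -> dict:
    # A non-None do_sprinkler models the intervention do(sprinkler := value),
    # overriding the believed mechanism "sprinkler = not rain".
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    wet_grass = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet_grass": wet_grass}

# Causal conditional "if the sprinkler were off, the grass would be dry",
# evaluated against the belief that it is not raining:
print(simulate(rain=False, do_sprinkler=False)["wet_grass"])   # False -> conditional accepted

# An evidential reading would instead condition on *observing* that the sprinkler
# is off, which in this toy model is evidence of rain and hence of wet grass,
# yielding a different verdict.
```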
From representations in predictive processing to degrees of representational features
Rutar D., Wiese W., Kwisthout J.
Q1
Springer Nature
Minds and Machines, 2022, citations: 2, doi.org, Abstract
Whilst representations are one of the key topics in philosophy of mind, it has only occasionally been noted that representations and representational features may be gradual. Apart from vague allusions, little has been said on what representational gradation amounts to and why it could be explanatorily useful. The aim of this paper is to provide a novel take on the gradation of representational features within the neuroscientific framework of predictive processing. More specifically, we provide a gradual account of two features of structural representations: structural similarity and decoupling. We argue that structural similarity can be analysed in terms of two dimensions: the number of preserved relations and state space granularity. Both dimensions can take on different values and hence render structural similarity gradual. We further argue that decoupling is gradual in two ways. First, we show that different brain areas are involved in decoupled cognitive processes to a greater or lesser degree depending on the cause (internal or external) of their activity. Second, and more importantly, we show that the degree of decoupling can be further regulated in some brain areas through precision weighting of prediction error. We lastly argue that gradation of decoupling (via precision weighting) and gradation of structural similarity (via state space granularity) are conducive to behavioural success.
Correction to: What Might Machines Mean?
Green M., Michel J.G.
Q1
Springer Nature
Minds and Machines, 2022, citations: 0, doi.org
Minds and Machines Special Issue: Machine Learning: Prediction Without Explanation?
Boge F.J., Grünke P., Hillerbrand R.
Q1
Springer Nature
Minds and Machines, 2022, citations: 4, doi.org
Scientific Exploration and Explainable Artificial Intelligence
Zednik C., Boelsen H.
Q1
Springer Nature
Minds and Machines, 2022, citations: 32, doi.org, Abstract
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
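One concrete example of the post-hoc analytic techniques discussed (a generic sketch, not a method attributed to the authors) is permutation importance: an opaque fitted model is probed by shuffling one feature at a time and measuring how much predictive performance degrades, which can suggest candidate variables for follow-up investigation of possible causal relationships.

```python
# Generic permutation-importance sketch for an opaque model: features whose
# shuffling hurts performance the most are candidates for further scrutiny.
# Assumes any fitted estimator with a .predict method; illustrative only.
import numpy as np

def permutation_importance(model, X: np.ndarray, y: np.ndarray,
                           n_repeats: int = 10, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    score = lambda y_true, y_hat: -np.mean((y_true - y_hat) ** 2)  # higher is better
    baseline = score(y, model.predict(X))
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target link
            drops[j] += baseline - score(y, model.predict(X_perm))
    return drops / n_repeats        # larger average drop = more relied-upon feature
```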