Please use this identifier to cite or link to this item: http://dspace.azti.es/handle/24689/2653
Files in This Item:
There are no files associated with this item.
Title: Enhancing social science research on cyberbullying through human machine collaboration
Authors: Banos-Ramos, Andrea; Reneses, Maria; Perez, Jaime; Valverde, Gabriel; Awad, Edmond; Lopez, Gregorio Lopez; Castro, Mario
Citation: Scientific Reports, 2025, 15, Article 32954
Abstract: Cyberbullying (CB) has emerged as a growing concern among adolescents, with nearly 10% of European children affected monthly and almost half experiencing it at least once. Unlike traditional bullying, CB thrives in digital environments where anonymity and impunity are prevalent. Despite its increasing prevalence, understanding the causal mechanisms behind CB remains challenging due to the limitations of conventional statistical methods, which often rely on correlations and are prone to spurious associations. In this paper, we introduce a novel human-machine consensus framework for causal discovery, aimed at supporting social scientists in unraveling the complex dynamics of CB. We leverage recent advances in data-driven causal inference, particularly the use of Directed Acyclic Graphs (DAGs), to identify and interpret causal relationships from observational data. Our approach integrates automatic causal discovery algorithms with expert knowledge, addressing the limitations of both purely algorithmic and purely expert-driven methods, and enables an ensemble of models for estimating causal effects. To enhance interpretability and usability, we advocate for the use of Probabilistic Graphical Causal Models (PGCMs), or Bayesian Networks, which combine probabilistic reasoning with graphical representation. This hybrid methodology not only mitigates cognitive biases and inconsistencies in expert input but also fosters transparency and critical reflection in model construction. Cyberbullying serves as a compelling case study where ethical constraints preclude experimental designs, highlighting the value of interpretable, expert-informed causal models for guiding policy and intervention strategies.
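As a minimal illustrative sketch of the DAG representation the abstract refers to (not the authors' implementation, and with variable names that are purely hypothetical), a candidate causal structure can be stored as an adjacency map and checked for acyclicity with Kahn's topological sort, the basic well-formedness requirement for any causal DAG:

```python
from collections import deque

# Hypothetical causal graph over cyberbullying-related variables.
# Edges are illustrative only, not the structure estimated in the paper.
edges = {
    "anonymity": ["cyberbullying"],
    "screen_time": ["exposure", "cyberbullying"],
    "exposure": ["cyberbullying"],
    "cyberbullying": ["distress"],
    "distress": [],
}

def is_dag(graph):
    """Return True iff the directed graph has no cycle (Kahn's algorithm)."""
    indegree = {v: 0 for v in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for t in graph[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    # If every node was emitted, no directed cycle exists.
    return visited == len(graph)

print(is_dag(edges))  # True: this candidate structure is a valid DAG
```

In a hybrid human-machine workflow of the kind the abstract describes, a check like this would gate every edit: whether an edge is proposed by a discovery algorithm or by a domain expert, the combined graph must remain acyclic before any causal-effect estimation is attempted.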
Issue Date: 2025
Type: Journal Article
DOI: 10.1038/s41598-025-16149-4
URI: http://dspace.azti.es/handle/24689/2653
Appears in Publication types: Scientific articles



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.