Abstract
Aim: In today’s digital society, disinformation, which spreads through social media faster than ever before, poses a serious and growing challenge. The aim of the study was to develop a method for protecting users against fake news by combining artificial intelligence with human knowledge and experience. The authors assumed that effectively countering disinformation requires not only advanced technologies but also the active involvement of users, who can detect subtleties that escape automated systems.
Project and methods: As part of the project, a system was developed to build a database of disinformation articles. The content was first verified by experts and then evaluated by users, whose responses were used to train machine-learning classification models. Feedforward neural networks were trained on data enriched with text-complexity indicators such as the Pisarek index, the Gunning FOG index, and the Flesch reading-ease score. An important element of the system was the profiling of users by their knowledge and competencies, which allowed the content under evaluation to be matched to them more accurately.
Results: The neural network achieved a classification accuracy of 84%, a very good result given the difficulty of detecting disinformation. Qualitative analysis revealed that fake news is characterized by ubiquity, simplicity of message, and emotional appeal, which makes it exceptionally effective at spreading. The research also showed that even highly knowledgeable users are not completely immune to manipulation, particularly around emotionally charged social events. It confirmed that fake news most often concerns political, economic, and social topics and that its goals include both political influence and economic gain.
Conclusions: The conclusions drawn from the study indicate that complete protection against disinformation is impossible; however, it is possible to mitigate its effects through appropriate user education and technological development. Media literacy and the development of critical content analysis skills are essential to strengthening society’s resilience to false information. The hybrid approach, combining artificial intelligence with human expertise, proved to be an effective solution, although it requires continuous improvement of models and analytical methods. The article emphasizes the necessity for further research in this area and the importance of a systemic approach combining technology, education, and social responsibility.
Keywords: machine learning, disinformation, fake news, artificial intelligence, user protection
Type of article: original scientific article