Application of Machine Learning Algorithms Based on Islamic Moral Values for Mitigating Negative Content on Digital Media


Moch Nurcholis Majid
Muhammad Suhaili

Abstract

Information bias in digital media has evolved into a systemic problem driven by algorithmic designs that optimize for user engagement without adequate value orientation. This condition has contributed to polarization, amplification of sensational content, and unequal information representation. This study aimed to reconstruct data science algorithms based on Prophetic Ethics as a normative-operational framework to mitigate information bias in digital media. The research employed a descriptive qualitative approach with a systematic meta-analysis of scholarly publications from 2016 to 2025 retrieved from reputable academic databases. Data were analyzed using content analysis and thematic analysis to synthesize patterns of algorithmic bias and to formulate a reconstruction model grounded in the principles of ṣidq (truthfulness), amānah (trustworthiness), tablīgh (transparency), and faṭānah (wisdom). The findings indicated that algorithmic bias occurred across all stages of the data processing pipeline and required structural reconstruction rather than partial technical adjustments. The proposed model integrated data validation, accountable governance, transparent modeling, and social welfare optimization within recommendation system design. This study contributed to the development of a value-oriented data science paradigm and expanded the discourse on algorithmic ethics in contemporary digital societies.
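One way to picture the abstract's proposal of embedding value criteria alongside engagement in recommendation ranking is a re-ranking step that blends a predicted engagement score with signals loosely corresponding to the four named principles. The following is a minimal illustrative sketch, not the authors' implementation; every field name, weight, and score here is a hypothetical assumption introduced for clarity.

```python
# Hypothetical sketch of value-oriented re-ranking for a recommender.
# All attribute names, weights, and scores are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    engagement: float      # predicted engagement probability, 0..1
    veracity: float        # sidq: source-credibility / fact-check score, 0..1
    provenance: float      # amanah: accountability of the data pipeline, 0..1
    explainability: float  # tabligh: how transparently the item can be ranked, 0..1
    welfare: float         # fatanah: estimated social-welfare contribution, 0..1


def value_score(c: Candidate, w_engage: float = 0.4, w_values: float = 0.6) -> float:
    """Blend engagement with an equally weighted average of the value signals."""
    values = (c.veracity + c.provenance + c.explainability + c.welfare) / 4.0
    return w_engage * c.engagement + w_values * values


def rerank(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by the blended score instead of engagement alone."""
    return sorted(candidates, key=value_score, reverse=True)
```

Under these illustrative weights, a highly engaging but low-veracity item can rank below a moderately engaging item with strong value signals, which is the structural shift the abstract describes: optimization no longer targets engagement alone.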


Article Details

How to Cite
Majid, M. N., & Muhammad Suhaili. (2025). Application of Machine Learning Algorithms Based on Islamic Moral Values for Mitigating Negative Content on Digital Media. Khazanah: Journal of Islamic Education and Science, 1(2), 32–47. https://doi.org/10.61815/khazanah.v1i2.871

References

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. fairmlbook.org

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159.

Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

Huda, M., Jasmi, K. A., Mustari, M. I., & Basiron, B. (2018). Understanding divine pedagogy in teacher education: Insights from prophetic leadership. Journal of Education and Practice, 9(2), 42–52.

Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of Innovations in Theoretical Computer Science (ITCS), 43:1–43:23. https://doi.org/10.4230/LIPIcs.ITCS.2017.43

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Narayanan, A. (2018). Translation tutorial: 21 fairness definitions and their politics. Proceedings of the Conference on Fairness, Accountability, and Transparency, 1–5.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., … Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Political Science Quarterly, 133(2), 1–65.

Veale, M., & Borgesius, F. Z. (2021). Demystifying the draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
