Articles | Open Access | Vol. 6 No. 02 (2026): Volume 06 Issue 02 | DOI: https://doi.org/10.37547/marketing-fmmej-06-02-03

Algorithmic Trust in Interactive Marketing: A Conceptual Framework for Consumer Responses to AI-Generated Persuasion

Naman Modi, Department of Marketing, California State Polytechnic University, Pomona

Abstract

The rapid diffusion of generative artificial intelligence (AI) in marketing has transformed how firms design, personalize, and disseminate persuasive messages across global markets. However, limited research has theorized how consumers develop trust in AI as an active source of marketing communication rather than as background analytical infrastructure. This study introduces algorithmic trust in interactive marketing as a multidimensional construct that captures cognitive, emotional, ethical, and institutional confidence in AI-generated marketing messages and in the socio-technical systems that produce and govern them. Drawing on trust theory, technology acceptance research, source credibility theory, and cross-cultural institutional frameworks, we develop a conceptual model explaining how cultural values, regulatory environments, and transparency strategies shape the psychological mechanisms underlying trust formation and its behavioral outcomes. The framework advances interactive marketing theory by repositioning AI as a relational actor in consumer–brand communication. We derive six theoretically grounded propositions and outline a research agenda to guide empirical investigations into transparency, governance, and human–AI interaction in marketing contexts.

Keywords

algorithmic trust, interactive marketing, artificial intelligence, AI governance, algorithmic persuasion, transparency, cross-cultural consumer behavior

How to Cite

Modi, N. (2026). Algorithmic trust in interactive marketing: A conceptual framework for consumer responses to AI-generated persuasion. Frontline Marketing, Management and Economics Journal, 6(02). https://doi.org/10.37547/marketing-fmmej-06-02-03