Student-AI Relationships: The Rise of Artificial Intimacy
This literature review explores the impact of AI tools like ChatGPT on parasocial relationships, focusing on students' interactions with ChatGPT. It discusses the social, psychological, and ethical implications of AI dependency, emphasizing the need to balance AI use with authentic human interaction in order to avoid over-reliance.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Introduction: Understanding Parasocial Relationships in the Digital Era
In today’s digital age, where influencers and celebrities are increasingly visible and social media offers continuous access to their lives, parasocial relationships are a widespread phenomenon. Parasocial relationships traditionally refer to one-sided connections in which individuals feel a sense of intimacy or closeness with media figures through mediated communication (Bahmanmirza et al., 2022). Social media has introduced a degree of interactivity, for example through comments. Interactive AI like ChatGPT, however, has created a situation where users can actually converse with the entity with which they experience a parasocial relationship. Artificial intelligence has thus added an entirely new dynamic to parasocial relationships.
AI tools like ChatGPT can respond empathetically to social problems or socially oriented questions. In this way, ChatGPT can, for example, support people socially and reduce feelings of loneliness (Alzyoudi & Al Mazroui, 2024). Using ChatGPT for social purposes is also a logical step for students, as it is the most widely used chatbot among them (Euronews, 2024). When students find ChatGPT effective for educational purposes, it is natural for them to explore its social potential as well, rather than seeking out a different tool. However, as artificial intelligence like ChatGPT is not a person and lacks a human appearance, this raises questions such as: To what extent can the relationship between students and an interactive AI tool like ChatGPT be classified as a new form of parasocial relationship? And how does this relationship influence their social interactions and perception of authenticity?
The Rise of AI: Redefining Parasocial Relationships
Although ChatGPT is an advanced language model without self-awareness or emotions, I often observe students addressing ChatGPT with human language like “Hello, how are you?” and “Thank you”. This points to anthropomorphism: the tendency of users to assign human characteristics to non-human entities by projecting human language onto AI (Salles et al., 2020). The availability and responsiveness of AI create a unique situation in the history of parasocial relationships. This continuous availability embodies the idea of “immediacy,” where users can talk at all times about numerous subjects and thus feel as if the psychological distance is small (Kornbluh, 2024). This feeling may lead to a perception that AI tools like ChatGPT offer more helpful advice than a friend (Ramlatchan & Watson, 2020). This raises the question: Does the bond between the user and ChatGPT resemble a friendship more than a traditional parasocial relationship?
The bond between the user and ChatGPT points to a complex but one-sided relationship that creates the illusion of understanding and support, just like in parasocial relationships.
To answer the abovementioned question, it is essential to analyze this relationship further. The theory behind parasocial relationships emphasizes that users feel a sense of connection without a true, reciprocal interaction (Horton & Wohl, 1956). With ChatGPT, however, this appears to change, partly due to the illusion of “perceived reciprocity”: the idea that the AI understands the user and responds as a human would. In reality, this “perceived reciprocity” is an illusion, as the AI bases its responses solely on statistical patterns, without emotional engagement, self-awareness, or a moral compass. According to research by van Wynsberghe (2022), an AI tool like ChatGPT can mislead users into “perceived reciprocity” by using empathetic language and creating the impression that it understands the user. This effect is driven by anthropomorphism, which is amplified in AI interactions by the natural language interface and contextually relevant responses (Ho et al., 2018). This projection can trigger a powerful emotional response, making students perceive AI as a friend or confidant.
This subtle yet profound illusion of reciprocity has significant consequences. Unlike human social relationships, where empathy and emotional involvement are mutual, ChatGPT lacks any real emotional foundation due to its algorithmic structure (Coeckelbergh, 2010). For users, however, the responses of ChatGPT can feel like genuine support, potentially affecting their need for authentic human interaction. Due to the lack of mutual emotional engagement and shared personal experience, the bond between the user and ChatGPT cannot be identified as friendship. Rather, it points to a complex but one-sided relationship that creates the illusion of understanding and support, just like in parasocial relationships.
Even though students' connection with AI tools can be a form of illusion-based parasocial relationship, the interactivity that ChatGPT offers has more significant psychological and social consequences than traditional parasocial relationships. AI's continuous availability and extensive knowledge base can tempt users to seek advice or support from AI rather than from friends, family, doctors, or teachers (Ta et al., 2020). This constant accessibility and the speed at which AI responds may even lead students to consider the support of friends or fellow students unnecessary or inferior (Crawford et al., 2024). Thus, ChatGPT functions not only as a resource but also fulfils roles traditionally associated with friends, such as providing advice or moral support. This dependency on AI could affect students' decision-making processes and their ability to think independently. In the long term, it may even make it difficult for individuals to establish or maintain real friendships (Čekić, 2024).
In such a world, where AI is increasingly seen as a reliable source of information, the distinction between real and unreal becomes increasingly blurred. ChatGPT and similar technologies can contribute to a “post-truth” reality, in which objective facts are overshadowed by public opinion, emotions, and beliefs (Brahms, 2022). Because it rests on probabilistic computation, ChatGPT presents information in a way that seems authentic and reliable, yet without the nuance of human experience or accountability. This may lead to a situation in which users see AI as a reliable source without fully understanding its limitations (Araujo et al., 2020). This apparent authenticity of AI responses can influence users’ perception of truth. Users may be inclined to consider the output of AI as factually correct, even when it is not, which can contribute to the spread of misinformation and undermine critical thinking (Ho et al., 2018). Moreover, the increasing use of ChatGPT reinforces the risk of users relying ever more on an entity without a moral compass, leading to a reality in which factual accuracy becomes less important than the appearance of reliability (Prunkl, 2024). This raises the question of how we can use technology like ChatGPT without disrupting our social values and perception of authenticity.
Ethical Challenges: Dependency and Privacy
In addition to these questions of authenticity and friendship, there is also an important ethical dimension to the increasing dependence on AI. The constant use of ChatGPT can create a sense of mental dependency, hindering the development of students' critical thinking skills (Fossa, 2024). Research shows that constantly available technologies that provide quick, usable answers can make users less inclined to self-reflect or make independent decisions (Zhai et al., 2021). This also applies to ChatGPT, as the model conveys a sense of reliability through its language use and extensive knowledge. Although AI advice can often be helpful, the lack of moral or emotional insight remains a limitation that students should be aware of.
In a world where AI is increasingly seen as a reliable source of information, the distinction between real and unreal becomes increasingly blurred.
The ethical issues surrounding the use of AI go beyond dependency and lack of authenticity. Privacy concerns are also significant, as interactions with AI are often stored and may be used for further system development (Gupta et al., 2023). Users should be aware that their conversations with ChatGPT are not without privacy consequences, especially when they treat ChatGPT as a friend or parasocial partner and share intimate information with it. In addition, there are broader ethical implications for the future, where the tendency to attribute human characteristics to AI models may increase. As students feel increasingly connected to AI, this could shift how they value social relationships and interactions in daily life (Crawford et al., 2024). This raises the question of whether we as a society have sufficient ethical guidelines to set boundaries on these relationships and to protect users and society from the potentially negative impact of intensive AI interaction on fundamental social interactions.
Conclusion: Striking a Balance Between Technology and Humanity
AI models are evolving every second, so there is no going back. Given these tools' ongoing development, we must act now if we want to prevent escalation. Perhaps the language used by AI models could be adjusted to come across as less personal and empathetic – if it is not already too late. Adopting a more neutral tone and using fewer human expressions could reduce the illusion of emotional reciprocity. This adjustment could make AI experiences more businesslike and clarify the boundary between functional support and the illusion of a parasocial relationship. Moreover, at a time when artificial intelligence is reshaping our interactions and relationships, it is essential to critically consider our expectations and use of these technologies. Where is the boundary between a helpful AI assistant and an emotional confidant? How can we maintain a balance so that technology supports us without replacing human contact? These questions emphasize the need for awareness and caution in developing future AI systems. It is important to create societal awareness not only of the benefits but also of the disadvantages and dangers that AI tools pose. Only then can we avoid unintentionally falling prey to the negative effects of such tools, and keep our society a social environment with not only autonomously thinking tools but also autonomously thinking people.
References
Alzyoudi, M., & Al Mazroui, K. (2024). ChatGPT as a coping mechanism for social isolation: An analysis of user experiences and perceptions of social support. Online Journal of Communication and Media Technologies, 14(3).
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611-623.
Bahmanmirza, M., Seyyedamiri, N., & Hajiheydari, N. (2022). Designing a para-social relationships framework of Instagram influencers based on influencer marketing: A bibliometric approach.
Brahms, Y. (2022). Philosophy of post-truth. Institute for National Security Studies (INSS).
Čekić, E. (2024). Effects of artificial intelligence on psychological health and social interaction. International Journal of Science Academic Research, 5(10), 8424-8431.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12, 235-241.
Crawford, J., Allen, K. A., Pani, B., & Cowling, M. (2024). When artificial intelligence substitutes humans in higher education: The cost of loneliness, student success, and retention. Studies in Higher Education, 49(5), 883-897.
Euronews. (2024). OpenAI's ChatGPT chatbot tops the list but these are the 9 other most popular AI tools just now. Retrieved December 10, 2024.
Fossa, F. (2024). Artificial intelligence and human autonomy: The case of driving automation. AI & Society, 1-12.
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access, 11.
Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712-733.
Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19(3), 215-229.
Kornbluh, A. (2024). Immediacy: Or, the style of too-late capitalism. Verso Books.
Prunkl, C. (2024). Human autonomy at risk? An analysis of the challenges from AI. Minds and Machines, 34(3), 26.
Ramlatchan, M., & Watson, G. S. (2020). Enhancing instructor credibility and immediacy in online multimedia designs. Educational Technology Research and Development, 68(1), 511-528.
Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88-95.
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., ... & Loggarakis, A. (2020). User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research, 22(3).
Van Wynsberghe, A. (2022). Social robots and the risks to reciprocity. AI & Society, 37(2), 479-485.
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., ... & Li, Y. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity, 2021(1), 8812542.