About the project
Why Anasa?
This research project examines the processes underpinning the decision to engage with Anasa, a conversational AI designed for psychologically informed interactions. It seeks to identify the factors that influence whether people pursue or discontinue interaction with Anasa, and that thereby shape continued use of her assistance. The investigation aims to deepen our understanding of the dynamics at play when choosing to seek aid from Anasa, emphasising the importance of the initial engagement decision in the context of digital interventions.
The primary aim of this research is to elucidate the processes through which individuals discern and establish their propensity for sustained interaction with Anasa.
The premise of the project is founded on the hypothesis that without a conscious decision to persist in these engagements, individuals are less likely to seek out or continue availing themselves of the assistance offered by Anasa. This exploration is crucial for understanding the factors that contribute to the adoption and long-term use of AI in psychological support contexts, thereby informing the development of more effective digital therapeutic tools and interventions.
If this exploration piques your interest, we encourage you to remain on the website and delve deeper into the specifics of this project by reading the information provided below.
With warm regards,
George Koukidis
The advent of artificial intelligence (AI) agents as conversational partners for individuals seeking psychological engagement reflects a novel intersection of technology and therapeutic interaction.
Despite this enthusiasm for leveraging technology in psychological interventions, there exists a palpable apprehension toward the integration of artificial intelligence (AI) within domains traditionally governed by human insight, such as psychology and philosophy, as well as the broader arts and sciences. This hesitancy is underpinned by a rigorous exploration of the ethical, philosophical, and practical implications of substituting human interactions with AI-driven mechanisms. Scholars within these fields critically examine the nuances of human cognition, emotion, and the therapeutic relationship, questioning the capacity of AI to authentically replicate these complex dynamics. The concerns extend beyond the technical competencies of AI, delving into the moral and existential ramifications of entrusting aspects of human psychological and philosophical inquiry to artificial entities. This ongoing discourse reflects a broader contemplation of the role of technology in human life, scrutinising the balance between technological advancement and the preservation of intrinsic human values and connections.
In the domain of interpersonal communication, particularly when sharing personal matters, there appears to be a traditional preference for face-to-face interaction. This preference is not universal, however: many individuals are comfortable with alternative forms of communication, such as telephone or video calls, letters, or email.
Similarly, while there is a general inclination to seek interaction with another human being, this too is challenged by the comfort some find in engaging with pets or inanimate entities.
This divergence of preferences extends into the realm of psychological support and interventions. Direct, in-person interactions enhance communication and foster the formation of a therapeutic alliance and relationship. From this perspective, the utilisation of computerised means for conducting psychological discussions appears to be less suitable. However, there are documented instances of significant therapeutic success achieved through the incorporation of such means.
The integration of artificial intelligence (AI) agents in providing conversational support for individuals seeking psychological engagement further complicates the discourse. Beyond concerns about AI's potential for dominance or existential threats to humanity, apprehensions persist regarding the provision of “convenience therapy” and the adoption of superficial solutions. These concerns encompass doubts about the efficacy of AI in genuinely comprehending and delivering psychotherapeutic interventions, since the replication of therapeutic interaction, however accurate, does not equate to genuine therapeutic engagement.
Yet, there are reports of individuals feeling that AI systems provide adequate service (e.g. AI “checks on me more than my friends and family”: https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing).
The discussion above holds particular relevance to my professional and academic journey: a transition from a background in software engineering to the study of psychology, driven by a conviction that life can be enhanced through automation and technological innovation, including information technology and AI. This belief has been a cornerstone of my career, first as a software engineer and currently in my role as an operational manager in healthcare.
A significant challenge within my current professional context involves addressing lengthy waiting lists for psychological services, and providing supportive, psychologically informed conversations and skill-based interventions to those awaiting formal therapeutic engagement, sometimes extending beyond a year. Acknowledging the hesitation surrounding the utilisation of AI in this capacity, I have dedicated efforts to researching the potential and limitations of AI in fulfilling these needs.
Relevant to this research project is the observation that even when technology is ready, societal acceptance and readiness to embrace it often lag behind. I have personally navigated this gap, having utilised cloud technologies and video calling decades before their mainstream adoption following the advent of smartphones and the COVID-19 pandemic.
This project endeavours to explore the decision-making process behind engaging with 'Anasa,' an AI assistant designed to offer psychologically informed conversations, reflecting on the broader implications of technological integration into psychological practice.
To understand these decision-making processes, the investigation will draw on data gathered through semi-structured interviews. The interview participants will not be patients but consultant psychologists, consultant psychiatrists, and consultant general practitioners employed in high-security prison settings.
This choice of participants is predicated on their unique insights into the psychological and environmental factors influencing decision-making in highly constrained contexts.
The participants in this study will interact with 'Anasa' not out of a genuine need for psychological support, but by simulating such a need. This methodological decision safeguards participant well-being and precludes the involvement of actual patients and, by extension, individuals incarcerated in prison facilities, ensuring that the research does not expose vulnerable populations to risk or ethical dilemma. Instead, it relies on the simulated engagement of professionals in the field, who can provide informed feedback on the AI assistant's functioning and the hypothetical applicability of such technology in therapeutic contexts. This simulation enables a controlled exploration of the AI's potential impact and utility in providing psychological support, while avoiding the ethical and logistical complexities of involving real patients or prison populations in the research.
Subsequently, the collected data will undergo Interpretative Phenomenological Analysis (IPA), a qualitative research approach that emphasises the detailed examination of personal lived experience and how individuals make sense of it. IPA is particularly suited to this study because it allows a nuanced exploration of the complexities surrounding the adoption of AI-assisted psychological interventions, shedding light on the subjective experiences of professionals at the intersection of technology, psychology, and high-security environments. This approach will enable the research to uncover in-depth perspectives on the acceptability, ethical considerations, and potential integration challenges of AI tools like Anasa within therapeutic settings, contributing to the discourse on digital interventions in mental health care.