This article examines the potential benefits and risks of technology for mental health. Technology can be used to improve our mental health, but inappropriate use entails risks both for mental health and for data privacy. The article concludes that it is necessary to strike a balance between technological innovation and person-centered mental health care in order to promote mental health for all citizens, and also to increase the number of psychiatrists and psychologists in the public health system.
Technology and Mental Health: Benefits and Risks
1. Introduction
Mental health has long been a neglected topic in our society. Only very recently, driven by the COVID-19 pandemic, the lockdowns and their consequences, has it begun to receive more attention, but this has not led to an increase in the resources allocated to mental health, either for prevention or for treatment. Before the COVID-19 pandemic, only a small fraction of people in need had access to effective, affordable and quality mental health care, and the situation remains similar today. For example, according to the World Health Organization (2022), the proportion of people receiving minimally adequate treatment for depression is estimated to range from 23% in high-income countries to 3% in low- and lower-middle-income countries.
Numerous resources are needed to alleviate global mental health problems. There is no doubt that it is necessary to increase the number of psychiatrists and psychologists in public health systems, along with the support network that can follow up with all the people who need mental health care. But technology could also be a resource for improving the well-being of citizens. In fact, technology has played a major role in shaping the way we live and interact with each other. With the advancement of technology, people have access to more information, better communication, and increased convenience in their daily lives. Furthermore, technology can be used as a tool to improve mental health through resources such as Artificial Intelligence powered chatbots, Interpersonal Communication Technology interventions, mental health apps, online therapy, virtual and augmented reality, and digital phenotyping of mental health. These and other examples of Large Techno-Social Systems are already part of our lives, but in addition to the positive effects mentioned above, they may also have negative effects. In fact, the increased reliance on technology has brought new challenges to our mental health. Studies have shown that excessive and inappropriate use of technology can lead to issues such as anxiety, depression, sleep disturbances, self-harming and suicidal thoughts, and other problems that may markedly reduce our quality of life. Furthermore, it is essential to consider the potential data privacy risks associated with using technology to assist individuals with mental health conditions.
Considering that technology will continue to play an essential role in our lives, this article aims to analyze both the benefits and the dangers that the accelerated technological development we are currently experiencing entails for mental health. In carrying out this analysis, we will try to address the challenges brought about by this vertiginous technological advance, and we will end the article with a section of conclusions that are, above all, intended as ideas for reflection.
2. Technological resources for improving mental health
2.1. User-focused tools
2.1.1. Artificial Intelligence powered chatbots
Chatbots (e.g. Woebot, Wysa, Youper) are one of the most widely used types of mobile application for providing emotional support and helping users manage their mental health (Abd-Alrazaq et al., 2019). They are computer programs based on natural language processing and machine-learning algorithms, capable of conversing and interacting with humans through spoken, written or visual language (Vaidyam et al., 2019). These new systems, which have experienced exponential growth over the last few decades, facilitate interaction with people who are reluctant to seek psychological help because of the stigma it still carries today or because they cannot afford the cost (Bickmore et al., 2010; Vaidyam et al., 2019). Although they are devices primarily intended for the users themselves, and there is still insufficient evidence to support their effectiveness and safety (Abd-Alrazaq et al., 2020), chatbots are being used in cognitive behavioral therapy (Oh et al., 2020) and suicide prevention (Martinengo et al., 2022). In the latter case, a protocol is triggered when a user reports having suicidal thoughts using explicit language, such as the verbs “want” or “have to”, which conveys a sense of immediately acting on those thoughts. The function of these chatbots was analyzed by evaluating the content of dialogues between standardized users (not real-world users, but users constructed according to global demographic and risk-factor profiles of people with depression) and chatbots included in systematic assessments of depression and mental health apps intended for consumers. Although these applications could be useful for the detection of suicidal thoughts, it is crucial to consider that such thoughts pose a significant risk of severe harm and death. Therefore, users who are at risk of suicide should not be managed solely by a conversational agent (Martinengo et al., 2022).
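The kind of rule-based trigger described above can be sketched as follows. This is a minimal illustration, not the logic of any real chatbot: the phrase patterns and risk terms below are invented for the example, and production systems rely on far more sophisticated natural language processing.

```python
import re

# Hypothetical safety-protocol trigger: fire when explicit first-person
# intent language ("I want to", "I have to") co-occurs with suicide-related
# terms. Both pattern lists are illustrative assumptions, not a real system.
INTENT = re.compile(r"\bi\s+(want|have)\s+to\b", re.IGNORECASE)
RISK_TERMS = re.compile(r"\b(die|kill myself|end it all|end my life)\b", re.IGNORECASE)

def triggers_safety_protocol(message: str) -> bool:
    """Return True when explicit intent language co-occurs with risk terms."""
    return bool(INTENT.search(message)) and bool(RISK_TERMS.search(message))

flagged = triggers_safety_protocol("I want to end my life")      # both patterns match
not_flagged = triggers_safety_protocol("I want to feel better")  # intent but no risk terms
```

Such keyword rules show why the cited authors warn against relying on chatbots alone: phrasing that avoids the expected patterns goes undetected.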
Through this new technology, a user can establish a therapeutic relationship with a robot. Therefore, considering that the establishment of an alliance between patient and therapist is one of the predictors of therapeutic outcomes in traditional therapy, one of the concerns is the type of alliance that could be established and its potential impact on the user’s mental health (Scholten et al., 2017). Some of the few studies examining this issue have found that patients develop “transference” towards chatbots, resulting in an unconscious redirection of feelings towards them (Bickmore et al., 2010; Scholten et al., 2017). For this reason, attention should also be paid to the dependency relationship that the user may develop with the chatbot. Furthermore, the inability to access the program could lead to distress, and the uncontrolled use of the chatbot could reduce the user’s personal relationships (Vaidyam et al., 2019).
Apart from the need to prevent inappropriate relationships between the users and the chatbots, there is another major challenge associated with the use of this tool: privacy and confidentiality. Although we will go more deeply into the ethical and privacy issues associated with the use of technology later on, it is worth noting here that most chatbots are connected to the internet or even to social networks, meaning that users could be sharing a significant amount of personal information without being aware of it (Vaidyam et al., 2019).
2.1.2. Mobile devices providing self-help
A study conducted two years ago identified more than 350 applications (e.g. Talkspace, I Am Sober, Calm) available in the Apple App Store and Google Play that aim to provide users with resources, support, and tools for managing their mental health (Aitken & Nass, 2021). The majority of them target depression, anxiety, self-injurious thoughts and behaviors, substance use disorders, posttraumatic stress disorder and sleep problems. Among the potential benefits attributed to such devices, some studies point out that they are highly accessible and flexible, can be used while preserving the user's anonymity, and reach people who would likely not seek professional advice (Ebert et al., 2018). Additionally, other authors attribute to them the advantage of enabling large-scale interventions in settings where economic constraints make it difficult for the population to access mental health resources (Chisholm et al., 2016). The fact that their use is widespread among the population is another advantage of these devices (The Lancet Child & Adolescent Health, 2018). However, as we will address later, the elderly population faces significant difficulties in using this technology, and this is a vulnerable population that requires special attention.
Despite the potential benefits of mobile apps for mental health, the lack of generalizable evidence supporting their efficacy has led to the need to reflect on the potential harm they may cause, as they may keep people in need of psychological treatment away from evidence-based interventions (Weisel et al., 2019).
Although we will not go into this in depth, we would like to point out that among the applications developed to promote mental health there are also online social platforms. Some of these platforms, also known as Interpersonal Communication Technology interventions, aim to promote social participation, cognitive function, physical activity, healthy eating, and healthy sleep habits. They seek to create social connections, alleviate social isolation and loneliness, and improve the quality of life of users. A recent systematic review by Choi and Lee (2021) found that such interventions are effective in reducing loneliness and improving social support, life satisfaction, and other affective responses among older adults. Therefore, we believe that the development of accessible and easy-to-use Interpersonal Communication Technology interventions could be a beneficial technological resource for the mental health of the older population.
Chatbots and self-help mobile devices are primarily aimed at the user and are intended to improve their well-being. However, on the basis of the scientific evidence available to date, they should not be used as a substitute for therapy. In fact, some authors have recommended integrating them into a clinical setting in which a professional is in charge of monitoring progress and providing additional support (Weisel et al., 2019). In addition to these user-focused tools, technological resources have also been developed to improve the work of therapists. These include online therapies, the integration of virtual reality and augmented reality technology into therapy, and other newly developed technologies that aim to provide real-time patient monitoring.
2.2. Tools for the Therapist
2.2.1. Online Therapy
During the COVID-19 lockdowns, online therapies saw an exponential increase in usage, which continued even after restrictions were lifted. However, online therapies are not a new concept, as the first studies on distance therapies date back several decades, and therapists have long maintained remote contact with their clients using more traditional technologies such as the telephone or text messaging (Perle et al., 2011). Online therapy has the advantage of allowing users to access interventions from any location and in various formats, which may appeal to those who suffer from stigma (Marcu et al., 2022). A meta-analysis conducted in 2020 concluded that digital psychological interventions, including online therapy, are effective, especially for patients between 20 and 35 years of age with various mental health disorders such as depression, anxiety, post-traumatic stress and substance misuse, and for patients of all ages with depression and substance misuse (Fu et al., 2020). In the latter case, online therapy may outperform face-to-face therapy (de Oliveira Christoff & Boerngen-Lacerda, 2015).
However, while the therapeutic alliance does not seem to be affected by the change in format, several studies found that opinions about the effectiveness of online therapies differed between clients and therapists. Clients had a very positive view of online therapy, while therapists had more diverse opinions. Although they acknowledged the potential benefits of online therapy, therapists were often hesitant about its clinical efficacy and expressed concerns about the loss of non-verbal information, such as that derived from body language, which is crucial in therapy (Hertlein et al., 2015).
Online therapy may be useful for elderly individuals, as they often face difficulties accessing mental health services, especially those living in rural areas or areas geographically distant from mental health centers (Douthit et al., 2015; Liu et al., 2006). Prejudices regarding elderly mental health may also lead to a failure to refer them for specialized consultations. The lack of access to mental health services may result in unrecognized depression, behavioral aspects of dementia, and other comorbidities, making it difficult to manage their overall mental health, even for those living in nursing homes (Snowdon, 2010). Telepsychogeriatrics, or online therapy for the elderly, could be an effective solution for this vulnerable population. Although few studies have evaluated its effectiveness, some authors suggest that telepsychogeriatrics can be an efficient therapy when implemented appropriately and with the involvement of a caregiver who is familiar with the patient's needs. This approach can help overcome the barriers to accessing mental health services and improve the overall well-being of elderly individuals (Ramos-Ríos et al., 2012).
Finally, it should be noted that although online therapies are effective and have some advantages over face-to-face therapies, they also have some limitations, and more research is needed to clarify their positive and negative aspects in the field of mental health. Additionally, it is crucial to acknowledge that online therapy does not necessarily address the accessibility gap that exists for those who lack resources to manage mental health issues. For instance, low-income individuals may lack access to the internet and other digital tools.
2.2.2. Virtual Reality and Augmented Reality
Virtual reality (e.g. Amelia Virtual Care) is a technology that allows users to navigate and interact with a virtual environment in near real-time (Pratt et al., 1995). Depending on the system and programming, users may interact with the environment from an egocentric or allocentric point of view. In the latter case, the user moves a virtual representation of themselves called an “avatar”. Users can act upon virtual objects and interact with virtual beings such as people or animals. While virtual reality substitutes the existing physical environment with a virtual one, augmented reality uses virtual elements to enhance the existing environment. This means that physical and virtual elements are combined in real time, creating an augmented reality experience (Baus & Bouchard, 2014). Integrating virtual and augmented reality technologies into therapy provides patients with immersive, interactive experiences, which can be particularly beneficial in treating certain pathologies, such as phobias.
The artificial environments generated by virtual and augmented reality are closer to everyday life experiences than traditional laboratory settings. That is why virtual environments are often referred to as an “ecological laboratory” where behaviors, emotions, and human experiences can be studied in a controlled and rigorous way (Botella et al., 1998). Virtual and augmented reality have the potential to improve various aspects of existing treatments. Firstly, therapists have complete control over the virtual situations and elements in the computer program; for example, they can choose which and how many stimuli to generate, and in which order they will appear. Secondly, patients may feel more secure during therapy because the negative outcomes they fear will not occur in the virtual environment without consent and planning. For instance, a therapist can expose a patient to being on a mountaintop and assure them that they will not fall off. Thirdly, virtual and augmented reality allow for easier access to threatening stimuli, such as spiders (Botella et al., 2017). These three aspects, which are integrated into therapy thanks to technology, contribute to the effectiveness of virtual and augmented reality in clinical psychology (Ventura et al., 2018).
New technology has also been developed to integrate real digital images into virtual environments. At a Facebook conference held a few years ago, Mark Zuckerberg presented a technique that allows personal or smartphone pictures to be shown in the social virtual world to increase the sense of presence (Rubin, 2016). This technique has the potential to help with disorders such as attention deficit, social phobia, and public speaking anxiety, among others. For example, working with photographs of classmates or coworkers can help people improve their social skills in a safe virtual environment, skills which they can then apply in the real world (Botella et al., 2009). However, to the best of our knowledge, there is still no evidence on the implementation of Facebook Virtual Reality in treatments to improve mental health.
2.2.3. Technology for real-time monitoring in mental health
Mental disorders are usually diagnosed on the basis of symptoms identified through interviews or self-reports, i.e. through patients' accounts of past experiences. But the timing of the intervention and the patients' current state can bias their recollections. Therefore, there is an increasing need to develop assessment procedures that allow real-time monitoring of certain parameters related to mental health. The application of this technology to mental health, known as “digital phenotyping of mental health”, consists of collecting a large amount of social (environmental context, sociability), emotional (mood) and behavioral (physical activity, sleep) data through digital devices (smartphones, tablets, computers, fitness trackers, sleep monitors, etc.), social networks and health systems, and then processing and transforming these data so that they can be analyzed using data-mining techniques (Teles et al., 2020; Torous et al., 2016; Tsai et al., 2015).
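As a rough illustration of the processing step described above, the sketch below aggregates a few hypothetical passive signals (steps, sleep hours, outgoing messages) into weekly features suitable for analysis. The record fields and signal choices are assumptions for the example; real digital-phenotyping pipelines handle many more streams and far noisier data.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical daily record of passively collected signals: physical
# activity (fitness tracker), sleep (sleep monitor), and a sociability
# proxy (smartphone message counts). Field names are illustrative only.
@dataclass
class DayRecord:
    steps: int
    sleep_hours: float
    outgoing_messages: int

def extract_features(days: list[DayRecord]) -> dict[str, float]:
    """Aggregate raw daily records into features that can be analyzed."""
    return {
        "mean_steps": mean(d.steps for d in days),
        "mean_sleep_hours": mean(d.sleep_hours for d in days),
        "mean_outgoing_messages": mean(d.outgoing_messages for d in days),
    }

week = [DayRecord(4200, 7.5, 12), DayRecord(3100, 6.0, 9), DayRecord(5000, 8.0, 15)]
features = extract_features(week)
```

A downstream model would consume such feature vectors; the clinically meaningful signal set is precisely what the cited studies investigate.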
A recent systematic review (Mendes et al., 2022) showed that an increasing number of studies are using sensory applications (e.g. haptic) rather than public databases to perform digital phenotyping of mental health. The use of digital phenotyping and other technologies for real-time monitoring of parameters related to mental health (e.g. Mindcare) is expected to improve the accuracy of diagnosis and lead to more personalized and effective treatments based on individual patient needs.
One of the areas in which these technologies are expected to make important contributions is suicide prevention. Suicide is one of the most serious issues facing the mental health field. In developed countries it is the most frequent cause of unnatural death and the second leading cause of death among adolescents and young people. Moreover, far from decreasing, suicide remains one of the leading causes of death worldwide: every year, more people die by suicide than from HIV, malaria or breast cancer (World Health Organization, 2021). For this reason, it can be regarded as a public health concern, and its prevention has become an enormously important challenge that is being tackled through the use of these newly developed technologies.
Suicide prevention requires detecting the presence of suicidal ideation or communication, which is very difficult for both primary care physicians and mental health professionals (Nutting et al., 2005). In fact, it has been estimated that up to 66% of people who die by suicide had contact with a primary health service in the month prior to their death, with their suicidality often going undetected (Mann et al., 2005). Therefore, the potential of technology to aid in the detection of a suicidal crisis or the onset of a pre-crisis state is very important. With the increasing prevalence of sensor-enabled smartphones and online social networking platforms (e.g. Searching help), there is a growing potential to use technology to help detect the warning signs of suicide and its associated factors. In fact, passive data collection and automatic detection through these technologies could provide a valuable tool for identifying individuals who may be at risk of suicide and intervening early (Larsen et al., 2015). Social withdrawal, for example, is a significant warning sign for depression and suicidal behavior (Van Orden et al., 2010), which may be detectable by members of an individual's social network. Passive monitoring through smartphones can provide a useful tool to map social networks and identify signs of withdrawal.
Various studies have found that mental health is associated with certain patterns of use and behavior within online social networks (Burke et al., 2010; Larsen et al., 2015). In this regard, it is important to note that two major online social networking platforms, Twitter (Samaritans, 2014) and Facebook (Boyle & Staubli, 2015), have attempted to implement screening tools to identify potential suicidal crises. However, due to the enormous volume of content shared on these platforms, accurately and automatically detecting social media posts that indicate a real risk is challenging. Therefore, machine-learning algorithms have been developed to automatically detect tweets that suggest suicidal behavior. In a study based on Twitter (O’Dea et al., 2015), 14,701 tweets related to suicidal behavior were examined. A sample of 2,000 tweets was manually labeled as “very worrying”, “possibly worrying”, or “safe to ignore”, with 14% considered very worrying. These labeled tweets were then used to train a support vector machine, which achieved an 80% accuracy rate when applied to a sample of unseen “very worrying” tweets. This accuracy rate is equivalent to the agreement between human coders in the manual labeling process. This study demonstrates the feasibility of developing automated detection systems for suicidal content posted on Twitter, systems which could potentially be adapted for use on other social networking platforms.
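The supervised setup used in the Twitter study (label a sample, train a classifier, apply it to unseen posts) can be illustrated in miniature. The study trained a support vector machine; in this sketch a simple naive Bayes bag-of-words model stands in so the example needs no external libraries, and the training texts and labels are invented.

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    """Bag-of-words naive Bayes with Laplace smoothing (a stand-in for
    the SVM used in the cited study; texts and labels below are invented)."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)  # class frequencies in the labeled sample
        self.word_counts = {c: Counter() for c in self.classes}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        def log_prob(c):
            total = sum(self.word_counts[c].values()) + len(self.vocab)
            score = math.log(self.priors[c])
            for w in tokenize(text):
                score += math.log((self.word_counts[c][w] + 1) / total)
            return score
        return max(self.classes, key=log_prob)

train_texts = [
    "i cannot go on anymore",
    "i want to end it all",
    "great game last night",
    "loving this sunny weather",
]
train_labels = ["very worrying", "very worrying", "safe to ignore", "safe to ignore"]
model = NaiveBayes().fit(train_texts, train_labels)
```

The real pipeline differs mainly in scale (thousands of labeled tweets, richer features) and in the classifier, but the structure, from manual labels to an automatic detector applied to unseen posts, is the same.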
Continuing with the application of these new technologies to suicide prevention, it is worth mentioning the fields of writing style (or word usage) and speech production. Regarding the first, writing style can be used to detect suicidal risk. In fact, different studies have shown that depressed individuals often have an increased focus on themselves, making heavy use of first-person singular pronouns such as ‘I’ (e.g. ‘I haven’t’, ‘I want’) and ‘myself’ (Chung & Pennebaker, 2007; Mowery et al., 2017; Preotiuc-Pietro et al., 2015). On the other hand, in addition to detecting changes in writing style, machine-learning algorithms may also be used to analyze speech production, and speech-production analysis has been applied to detect the risk of a suicidal crisis. Speech production is a complex process involving both cognitive and muscular systems, and changes in affective state during a suicidal crisis can produce notable acoustic changes. These changes can be measured through prosodic, articulatory, and acoustic speech features. Prosodic alterations, including unusual vocal patterns and changes in acoustic properties, have been linked to individuals in a suicidal crisis (Hashim et al., 2012). Suicidality has also been linked to shifts in spectral energy, such as from lower to higher frequencies (France et al., 2000; Ozdas et al., 2004), or vice versa (Yingthawornsuk et al., 2007). Some studies have reported statistically significant differences in voice-quality features between the voices of suicidal adolescents and matched controls (Scherer et al., 2013). Despite its potential, however, there is not yet enough scientific evidence to fully support the validity and generalizability of this technological application for the prevention of suicidal behavior.
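The first writing-style marker mentioned above, heavy use of first-person singular pronouns, reduces to a simple ratio. The sketch below computes only that rate; the pronoun list is a minimal assumption, and no clinical threshold or interpretation is implied.

```python
import re

# First-person singular pronouns, per the studies cited above; the exact
# list here is a minimal assumption for illustration.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())  # keeps contractions like "haven't"
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)
```

In the cited work such rates are one feature among many; on its own the measure says nothing about any individual text.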
As highlighted in a recent systematic review on the role of new technologies in suicide prevention (Forte et al., 2021), many of these technologies have been developed for adolescents or young adults, who are more likely to use them. However, suicide is a problem that affects society as a whole, including older adults. In fact, the likelihood of a suicide attempt is two to three times higher among those aged 75 or older than among younger demographic groups (Ojagbemi et al., 2013), underscoring the need for suicide prevention technologies that address this demographic. Additionally, it is important to acknowledge that while technology has the potential to improve suicide prevention, its effectiveness can only be proven if it leads to a reduction in the suicide rate in the future. Moreover, as we have stated before, the digital divide remains a significant issue, as access to these technologies may be limited for some individuals.
In the preceding sections, we discussed the potential benefits of new technologies for mental health, while also pointing out some of the challenges linked to their use. Going one step beyond the challenges, it is important to acknowledge that these technologies also pose certain risks that must be addressed. In the following paragraphs, we review some of the most relevant risks that need to be considered.
3. Negative aspects and risks of technology in the mental health field
3.1. Excessive use of the internet, social networking platforms and smartphones
Although excessive internet use is not yet included among addictive behaviors in the main clinical diagnostic manuals (DSM-5, ICD-11), there is growing concern about the impact it can have on different groups, especially adolescents and young people.
Terms such as internet addiction, compulsive internet use, internet dependence, and pathological internet use, have been used to refer to maladaptive use of the internet (Spada, 2014; Volpe et al., 2015). However, this conceptualization remains controversial (D’Hondt et al., 2015; Starcevic, 2013) as it over-pathologizes the condition without distinguishing a broader spectrum of behaviors (Billieux et al., 2015). To resolve this controversy, the term “problematic internet use (PIU)” has been proposed, described as a maladaptive pattern of internet use that involves an apparent loss of control over the behavior, the occurrence of psychological, social, or professional negative consequences and worries, and obsessive thoughts when it is not possible to use the internet. Therefore, it is a broad term that includes, but is not limited to, online gaming, gambling, buying, and pornography viewing, as well as social networking and cyberbullying (Fineberg et al., 2018).
The difficulty of defining and operationalizing the PIU phenomenon results in varying prevalence rates (Mihajlov & Vejmelka, 2017). In fact, reported rates among adolescents and college students from China, South Korea, the United States, and European countries vary from 1% to 26% (Rumpf et al., 2014; Wu et al., 2013).
Although few studies have been conducted on this topic, those available so far coincide in indicating that problematic internet use is closely related to different disorders such as anxiety, depression, and stress (Bernardi & Pallanti, 2009), eating disorders (Hinojo-Lucena et al., 2019), behavioral problems (Ko et al., 2009), attention deficit hyperactivity disorder (Carli et al., 2013) and social phobia (Yen et al., 2007). Depression and anxiety are the disorders most commonly associated with problematic internet use (Li et al., 2017; Malak & Khalifeh, 2018), and people who make excessive use of the internet show ten times more depressive symptoms and seven times more anxiety symptoms than their peers (Andrade et al., 2020).
These data are accompanied by neuroscience studies which have observed that the frequency of internet use is related to a significant decrease in regional gray- and white-matter volume in areas associated with language processing, memory, attention, and executive functions (Takeuchi et al., 2018). Furthermore, PET (18F-fluorodeoxyglucose positron emission tomography) studies have shown that problematic internet use shares psychological and neural mechanisms with drug addiction, since PIU alters regions implicated in impulse control and reward processing (Volkow et al., 2003).
Due to the serious consequences that have been observed, clinical interventions for problematic internet use have been designed, implemented, and evaluated. Most of these interventions are based on therapeutic and pharmacological strategies commonly used for substance use disorders, obsessive-compulsive disorder, and impulse control disorders (Mihajlov & Vejmelka, 2017). However, while the effectiveness of such interventions seems increasingly well documented, the evidence regarding ways to prevent problematic internet use and to promote adequate internet use is much less well developed.
Indeed, three systematic reviews on the prevention of problematic internet use (Bağatarhan & Siyez, 2015; Throuvala et al., 2019; Vondráčková & Gabrhelík, 2016) agree that the literature on the prevention of this problem is scarce and of limited quality. The absence of consensual diagnostic criteria and methodological limitations in the programs’ designs, as well as the need to implement interventions for at-risk populations and to evaluate their effectiveness, are the primary points of criticism (Saleti et al., 2021).
In addition to the amount of time spent on the internet, another aspect that is having a major impact on users is the excessive use of social networks, which are spaces for self-presentation through user profiles. They contribute to the formation of people’s identity (Orsatti & Riemer, 2015), and young people therefore use social networks to manage their identities, above all when they are undergoing a period of adjustment and identity management (Thomas et al., 2017). In a previous section, we focused on the potential benefits of social networking platforms for mental health, but different studies have shown that their use can also be detrimental to mental health. In this sense, a recent systematic review has concluded that the excessive use of social networks is related to problems such as depression, loneliness, poor sleep quality, self-harming and suicidal thoughts, psychological distress, cyberbullying, dissatisfaction with body image, fear of missing out and reduced life satisfaction (Sadagheyani & Tatari, 2021). Bearing in mind that the use of social networks has also shown positive effects in various studies (Lin et al., 2016; Rosen et al., 2022; Woods & Scott, 2016), it is reasonable to suppose that, in addition to the time spent on social networks, the way users manage them may explain the harm or benefit that these networks exert on mental health. In this sense, it has been observed that people who have experienced rejection on social media or have felt pressured by social comparison suffer negative effects on their mental health (Rideout & Fox, 2018).
Not only does social media have an impact on mental health, but excessive use of smartphones in general, whether for social media or not, can also have a detrimental effect on our mental health. In fact, a recent study conducted in Canada by Anderl et al. (2023) involving 485 participants, found that individuals who used smartphones the most had lower levels of social well-being, possibly due to a lack of face-to-face interactions. However, the relationship between smartphone use and social well-being is not necessarily causal, as individuals may turn to smartphones when offline interactions are not available. Nonetheless, the authors of the study suggest that when people are absorbed in their smartphones, they are more likely to miss opportunities for personal interactions, as previous research has also demonstrated (Kushlev et al., 2019).
3.2. Dangerous use of the internet, social networking platforms and smartphones
New technologies have brought about new risky behaviors, particularly among adolescents who, in many cases, may not fully comprehend the implications and potential consequences of their actions when using these tools. Additionally, given that adolescents frequently communicate online, including about sexual matters, the phenomenon known as sexting, defined by Chalfen (2009) as the exchange of sexually explicit or provocative content (text messages, photos, and videos) via smartphone, internet, or social networks, has become a real concern, even though scholars increasingly recognize this behavior as a normal part of adolescents’ sexual development (Symons et al., 2018).
This behavior can lead to the nonconsensual dissemination of sexual images, and although little has been studied about the impact of this phenomenon on mental health, media reports have highlighted several cases in which people have attempted or committed suicide after their sexual pictures were shared without their consent (Ankel, 2018; Limón, 2022). Not surprisingly, the few studies carried out on this topic have shown that being a victim of aggravated sexting can have detrimental consequences on one’s mental health, which are similar to those experienced by victims of face-to-face forms of sexual abuse, such as sexual assault and sexual harassment (Bates, 2017; Mandau, 2021). These consequences include high levels of stress, depression, anxiety, post-traumatic stress disorder, somatic symptoms, low self-esteem, self-harm, suicidality, and poor coping strategies, such as substance use (Gassó et al., 2021; Huber, 2022; Patel & Roesch, 2022; Sciacca et al., 2023). Conversely, it has also been found that a higher degree of depressive symptoms predicted a higher degree of sexting behaviors (Gámez-Guadix & De Santisteban, 2018). Thus, the findings suggest a significant association between sexting behaviors and suicidal thoughts, suicide attempts, depressive symptoms, and feelings of sadness.
Regarding gender differences, the prevalence of nonconsensual dissemination of sexual images among adults and young adults is not clear. Some studies have shown that this type of victimization is more common among women (Eaton et al., 2017; Karasavva & Forth, 2021), while others have found no significant differences (Henry et al., 2019; Pedersen et al., 2022). However, the available evidence on adolescents shows that boys are more likely than girls to engage in nonconsensual sharing of sexual images, and that the images they share are usually of girls (Wachs et al., 2021).
Another widespread behavior that carries significant risks is cyberbullying, a particularly serious form of online aggression directed towards specific individuals, which is perceived as more harmful than random hostile comments posted online (Hamm et al., 2015). Importantly, as evidenced in a review of 36 studies among children and young people, research consistently observes an impact of cyberbullying on mental health, in the form of increased depressive symptoms as well as worsening anxiety symptoms (Hamm et al., 2015). Regarding sex differences, females were twice as likely as males to be victims of cyberbullying, as reflected in a national survey of adolescents in the USA (Alhajji et al., 2019). Furthermore, although most studies report cross-sectional associations (Hamm et al., 2015), a longitudinal study in Switzerland found that cyberbullying contributed to significantly greater depression over time (Machmutow et al., 2012).
4. Ethics and privacy concerns associated with the use of technology for mental health
As noted in a previous section, several privacy risks associated with the use of technology in mental health need to be addressed in order to protect users. In this section, we describe those we consider most relevant.
Mittelstadt and Floridi (2016) identify two types of privacy risks associated with invasive monitoring technologies. First, permanent monitoring, surveillance, and data collection in a user's daily environment can be a significant intrusion into their personal privacy. This can lead to the medicalization of the home environment, and users may feel burdened by constant reminders of their mental health condition (van Genugten et al., 2020). Second, data mining and surveillance technologies can pose risks to users' data privacy. Big data tools require large data sets, and the sheer scope of the data collected can make it difficult for users to oversee how their data are being used. Users may also be unclear about who has access to their data and for what purposes. Compared to traditional face-to-face treatment in an office setting, there are more opportunities for data to be leaked or lost (Luxton et al., 2016). Additionally, the available data come from various contexts and are more sophisticated, qualitatively intensifying privacy and data security concerns. In fact, keeping medical data private is difficult even in developed countries (Echeverria & Ugalde, 2022). As legislation often lags behind technical developments, loopholes and grey areas may exist that facilitate unethical data use, e.g. commercial use without explicit user consent (Rubeis, 2022).
Another risk associated with the use of technology in mental health is that of reductionism and simplification, which can lead to depersonalization. Data mining often requires reduction, simplification, and coding, which can undermine the uniqueness of the patient's experience (Abbe et al., 2016). Mental health professionals may be forced to translate patient information into pre-configured schemes, reducing individual characteristics to standardized categories. As a consequence, complex relations and health narratives may be reduced to quantifiable data points in order to make healthcare interventions more cost-efficient (Dillard-Wright, 2019). As a result, personalization as a main goal of mental health prevention and treatment might be undermined (Rubeis, 2022).
Finally, the risk of introducing various biases also arises when using technology in mental health. In fact, this risk is inherent in the mechanisms of data processing, which makes it harder to detect. It is well known that several minority groups are underrepresented in large studies, and algorithms may be built mainly on data from majority groups (Carr, 2020; Martínez et al., 2022) and from groups that generally have access to mental health services (Echeverria & Ugalde, 2022). This can lead to discrimination and a widening of the gap in mental health services between different groups of people (Fiske et al., 2019). In this sense, it must be taken into account that not all of the population has access to technology, and this lack of access is not always due to a lack of resources. For example, the elderly are a group vulnerable to exclusion from the digital world. This means that a large percentage of the population may face major limitations when it comes to using the services offered by these technologies (Polat, 2012). Although few studies have been carried out on this subject, five factors have been identified that explain why older people believe they are at greater risk of being digitally excluded compared to the general population (Holgersson & Söderström, 2019):
- Fear and anxiety about using technology and digital services: fear of making mistakes and the lack of knowledge and insecurities regarding the use of technologies.
- Negative attitudes towards technology and digital services: lack of interest in technology and the feeling that they "should" use technology and not "want" to use it.
- Feeling "too old" to start studying how to use technology and digital services: older people did not grow up with computers, so there is a generational gap that implies that older people must acquire more knowledge to be able to make full use of technology. In addition, they show greater difficulties in using digital devices because they may have reduced vision or hearing.
- Lack of knowledge and experience to use technology and digital services: many older people have not used technology during their working lives, and this influences their ability to keep up with digital development. In addition, they often feel ashamed because they find new technologies difficult to understand, feelings that are exacerbated if they do not have someone to ask.
- Linguistic problems when understanding digital terminology, often in English: the codes used in digital devices are not the usual ones, and many times these are usually in English, which makes them less understandable for elderly people who are not fluent in that language.
In summary, the use of technology in mental health requires careful consideration of ethics and privacy concerns to protect users. The risks of invasive monitoring technologies and data mining and surveillance technologies must be controlled to prevent intrusion into personal privacy and to avoid biases in data collection and processing.
5. Conclusions
Technology has proven to be useful for mental health promotion and treatment. Some technological resources primarily aimed at users, such as mobile apps and Artificial Intelligence powered chatbots, can help people to monitor and manage different factors associated with their mental health more effectively in their daily lives. Other technological resources aimed at helping the therapist, for instance online therapy, have also allowed for greater accessibility and availability of mental health care services, especially in areas distant from centers where specialized care is provided or in the case of people with difficulties accessing face-to-face therapy. In addition, some of these technologies have provided support to the therapist to follow the patient in real time. However, they also show some problems related, among other issues, to the type of bond that users can establish with a technological device, and to the difficulty some people have in accessing technology. Groups without access to new technologies, in addition to being unable to benefit from their use, can also develop a feeling of discrimination that may considerably affect their mental health.
Furthermore, although we have not specifically addressed this topic in any of the sections of this article, we believe that the effectiveness of these technological resources depends to some extent on collaboration between mental health professionals and technology professionals (e.g. engineers or computer scientists). While this collaboration is common in academia, it is less common in professional practice. Moreover, because these technologies are so recent, systematic reviews and meta-analyses evaluating their effectiveness have not yet produced conclusive results.
It is also important to recognize the negative aspects and ethical concerns that accompany mental health technology. In fact, the excessive and dangerous use of technologies that are widespread in society, such as the internet or social networks, can lead to serious mental health problems. In addition, the privacy and security of patient data are critical, and appropriate regulations and security measures need to be put in place to protect personal information. Moreover, automation and digitization may reduce personalized care and negatively affect the quality of care.
We believe that the solutions to the problems derived from the misuse of technologies should be addressed collectively, because the needs they have created in society and the challenges they confront us with are very difficult to solve individually.
In summary, technology has the potential to be a valuable tool for mental health, but it is important to address and resolve the negative aspects and ethical issues associated with its use. Striking a balance between technological innovation and patient-centered mental health care is critical to providing high-quality, effective care. Finally, it should be noted that for this last objective to be extended to all citizens, it is necessary to greatly increase the number of professionals (psychiatrists and psychologists) dedicated to mental health in public health centers.
6. References
Abbe, A., Grouin, C., Zweigenbaum, P., & Falissard, B. (2016). Text mining applications in psychiatry: a systematic literature review. International journal of methods in psychiatric research, 25(2), 86-100.
Abd-Alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P., & Househ, M. (2019). An overview of the features of chatbots in mental health: A scoping review. International Journal of Medical Informatics, 132, 103978.
Abd-Alrazaq, A. A., Rababeh, A., Alajlani, M., Bewick, B. M., & Househ, M. (2020). Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. Journal of medical Internet research, 22(7), e16021.
Aitken, M., & Nass, D. (2021). Digital health trends 2021: innovation, evidence, regulation, and adoption. Slideshare [accessed 2022-06-08].
Alhajji, M., Bass, S., & Dai, T. (2019). Cyberbullying, mental health, and violence in adolescents and associations with sex and race: data from the 2015 youth risk behavior survey. Global Pediatric Health, 6, 1-9.
Anderl, C., Hofer, M. K., & Chen, F. S. (2023). Directly-measured smartphone screen time predicts well-being and feelings of social connectedness. Journal of Social and Personal Relationships, 0(0).
Andrade, A. L. M., Scatena, A., Bedendo, A., Enumo, S. R. F., Dellazzana‐Zanon, L. L., Prebianchi, H. B., Machado, W. L., & de Micheli, D. (2020). Findings on the relationship between Internet addiction and psychological symptoms in Brazilian adults. International Journal of Psychology, 55(6), 941-950.
Ankel, S. (2018). Many revenge porn victims consider suicide–why aren’t schools doing more to stop it? The Guardian 7.
Bağatarhan, T., & Siyez, D. M. (2015). Programs for preventing Internet addiction during adolescence: A systematic review. Addicta: The Turkish Journal on Addictions, 4(2), 243–265.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Baus, O., & Bouchard, S. (2014). Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: a review. Frontiers in human neuroscience, 8, 112.
Bernardi, S., & Pallanti, S. (2009). Internet addiction: a descriptive clinical study focusing on comorbidities and dissociative symptoms. Comprehensive psychiatry, 50(6), 510-516.
Bickmore, T. W., Puskar, K., Schlenk, E. A., Pfeifer, L. M., & Sereika, S. M. (2010). Maintaining reality: Relational agents for antipsychotic medication adherence. Interacting with Computers, 22(4), 276-288.
Billieux, J., Schimmenti, A., Khazaal, Y., Maurage, P., & Heeren, A. (2015). Are we overpathologizing everyday life? A tenable blueprint for behavioral addiction research. Journal of behavioral addictions, 4(3), 119-123.
Botella, C., Perpiña, C., Baños, R. M., & García-Palacios, A. (1998). Virtual reality: a new clinical setting lab. Virtual environments in clinical psychology and neuroscience, 58, 73-81.
Botella, C., Baños, R. M., García-Palacios, A., & Quero, S. (2017). Virtual reality and other realities. In The science of cognitive behavioral therapy (pp. 551-590). Academic Press.
Botella, C., Díaz-García, A., Baños, R., & Quero, S. (2009). Cybertherapy: Advantages, limitations, and ethical issues. PsychNology Journal, 7(1), 77-100.
Boyle, R., & Staubli, N. (2015, February 25). Facebook Safety.
Burke, M., Marlow, C., & Lento, T. (2010, April). Social network activity and social well-being. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1909-1912).
Carli, V., Durkee, T., Wasserman, D., Hadlaczky, G., Despalins, R., Kramarz, E., Wasserman, C., Sarchiapone, M., Hoven, C. W., Brunner, R., & Kaess, M. (2013). The association between pathological internet use and comorbid psychopathology: a systematic review. Psychopathology, 46(1), 1-13.
Carr, S. (2020). ‘AI gone mental’: engagement and ethics in data-driven technology for mental health. Journal of Mental Health, 29(2), 125-130.
Chalfen, R. (2009). ‘It’s only a picture’: Sexting, ‘smutty’ snapshots and felony charges. Visual Studies, 24(3), 258-268.
Chisholm, D., Sweeny, K., Sheehan, P., Rasmussen, B., Smit, F., Cuijpers, P., & Saxena, S. (2016). Scaling-up treatment of depression and anxiety: a global return on investment analysis. The Lancet Psychiatry, 3(5), 415-424.
Choi, H. K., & Lee, S. H. (2021). Trends and effectiveness of ICT interventions for the elderly to reduce loneliness: A systematic review. Healthcare, 9(3), 293. MDPI.
Chung, C., & Pennebaker, J. W. (2007). The psychological functions of function words. Social communication, 1, 343-359.
de Oliveira Christoff, A., & Boerngen-Lacerda, R. (2015). Reducing substance involvement in college students: a three-arm parallel-group randomized controlled trial of a computer-based intervention. Addictive behaviors, 45, 164-171.
D’Hondt, F., Billieux, J., & Maurage, P. (2015). Electrophysiological correlates of problematic Internet use: Critical review and perspectives for future research. Neuroscience & Biobehavioral Reviews, 59, 64–82.
Douthit, N., Kiv, S., Dwolatzky, T., & Biswas, S. (2015). Exposing some important barriers to health care access in the rural USA. Public health, 129(6), 611-620.
Eaton, A. A., Jacobs, H., & Ruvalcaba, J. (2017). 2017 Nationwide online study of nonconsensual porn victimization and perpetration a summary report (pp. 1–28). Cyber Civil Rights Initiative.
Ebert, D. D., Van Daele, T., Nordgreen, T., Karekla, M., Compare, A., Zarbo, C., Brugnera, A., Oeverland, S., Baumeister, H., & Taylor, J. (2018). Internet-and mobile-based psychological interventions: applications, efficacy, and potential for improving mental health. European Psychologist.
Echeverria, J., & Ugalde, J. M. (2022). Introduction: Artificial Intelligence its Potential and Limits. Revista Internacional de los Estudios Vascos = Eusko Ikaskuntzen Nazioarteko Aldizkaria = Revue Internationale des Études Basques = International Journal on Basque Studies, RIEV, 67(2), 1-10.
Fineberg, N., Demetrovics, Z., Stein, D. J., Ioannidis, K., Potenza, M. N., Grünblatt, E., Brand, M., Billieux, J., Carmi, L., King, D. L., Grant, J. E., Yücel, M., Dell’Osso, B., Rumpf, H. J., Hall, N., Hollander, E., Goudriaan, A., Menchon, J., Zohar, J., . . . Chamberlain, S. (2018). Manifesto for a European research network into problematic usage of the Internet. European Neuropsychopharmacology, 28(11), 1232–1246.
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of medical Internet research, 21(5), e13216.
Forte, A., Sarli, G., Polidori, L., Lester, D., & Pompili, M. (2021). The role of new technologies to prevent suicide in adolescence: a systematic review of the literature. Medicina, 57(2), 109.
France, D. J., Shiavi, R. G., Silverman, S., Silverman, M., & Wilkes, M. (2000). Acoustical properties of speech as indicators of depression and suicidal risk. IEEE transactions on Biomedical Engineering, 47(7), 829-837.
Fu, Z., Burger, H., Arjadi, R., & Bockting, C. L. (2020). Effectiveness of digital psychological interventions for mental health problems in low-income and middle-income countries: a systematic review and meta-analysis. The Lancet Psychiatry, 7(10), 851-864.
Gámez-Guadix, M., & De Santisteban, P. (2018). “Sex Pics?”: Longitudinal predictors of sexting among adolescents. Journal of Adolescent Health, 63(5), 608-614.
Gassó, A. M., Mueller-Johnson, K., & Gómez-Durán, E. L. (2021). Victimization as a result of non-consensual dissemination of sexting and psychopathology correlates: An exploratory analysis. International Journal of Environmental Research and Public Health, 18(12), 6564.
Hamm, M. P., Newton, A. S., Chisholm, A., Shulhan, J., Milne, A., Sundar, P., Ennis, H., Scott, S. D., & Hartling, L. (2015). Prevalence and effect of cyberbullying on children and young people: a scoping review of social media studies. JAMA Pediatrics, 169(8), 770–777.
Hashim, N. W., Wilkes, M., Salomon, R., & Meggs, J. (2012). Analysis of timing pattern of speech as possible indicator for near-term suicidal risk and depression in male patients. International Proceedings of Computer Science and Information Technology, 58, 6.
Henry, N., Flynn, A., & Powell, A. (2019). Trends & Issues in Crime & Criminal Justice: Image-based Sexual Abuse: Victims and Perpetrators. Criminology Research Council, (572).
Hertlein, K. M., Blumer, M. L. C., & Mihaloliakos, J. H. (2015). Marriage and family counselors’ perceived ethical issues related to online therapy. The Family Journal, 23(1), 5–12.
Hinojo-Lucena, F. J., Aznar-Díaz, I., Cáceres-Reche, M. P., Trujillo-Torres, J. M., & Romero-Rodríguez, J. M. (2019). Problematic internet use as a predictor of eating disorders in students: a systematic review and meta-analysis study. Nutrients, 11(9), 2151.
Holgersson, J., & Söderström, E. (2019). Bridging the gap: Exploring elderly citizens' perceptions of digital exclusion. In 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, June 8-14, 2019. Association for Information Systems.
Huber, A. (2022). ‘A shadow of me old self’: The impact of image-based sexual abuse in a digital society. International Review of Victimology, 29(2), 199–216.
Karasavva, V., & Forth, A. (2021). Personality, attitudinal, and demographic predictors of non-consensual dissemination of intimate images. Journal of Interpersonal Violence, 37(21–22), 19265–19289.
Ko, C. H., Yen, J. Y., Chen, C. S., Yeh, Y. C., & Yen, C. F. (2009). Predictive values of psychiatric symptoms for internet addiction in adolescents: a 2-year prospective study. Archives of pediatrics & adolescent medicine, 163(10), 937-943.
Kushlev, K., Dwyer, R., & Dunn, E. W. (2019). The social price of constant connectivity: Smartphones impose subtle costs on well-being. Current Directions in Psychological Science, 28(4), 347–352.
Larsen, M. E., Boonstra, T. W., Batterham, P. J., O’Dea, B., Paris, C., & Christensen, H. (2015). We feel: mapping emotion on Twitter. IEEE journal of biomedical and health informatics, 19(4), 1246-1252.
Li, J. B., Lau, J., Mo, P., Su, X. F., Tang, J., Qin, Z. G., & Gross, D. L. (2017). Insomnia partially mediated the association between problematic Internet use and depression among secondary school students in China. Journal of Behavioral Addictions, 6(4), 554–563.
Limón, R. (2022). Imágenes sexuales que destruyen y matan. El País.
Lin, L.Y., Sidani, J.E., Shensa, A., Radovic, A., Miller, E., Colditz, J.B., Hoffman, B.L., Giles, L.M. & Primack, B.A. (2016). Association between social media use and depression among US young adults. Depression and anxiety, 33(4), 323-331.
Liu, L., Triscott, J., Dobbs, B., Strain, L., Burwash, S., Cleary, S., Hopper, T., & Warren, S. (2006). Distance delivery of geriatric consultation to family physicians in rural Alberta: preliminary results. In Telehealth: Proceedings of the Second IASTED International Conference.
Luxton, D. D., Anderson, S. L., & Anderson, M. (2016). Ethical issues and artificial intelligence technologies in behavioral and mental health care. In Artificial intelligence in behavioral and mental health care (pp. 255-276). Academic Press.
Machmutow, K., Perren, S., Sticca, F., & Alsaker, F. D. (2012). Peer victimisation and depressive symptoms: can specific coping strategies buffer the negative impact of cybervictimisation?. Emotional and Behavioural Difficulties, 17(3–4), 403–420.
Malak, M. Z., & Khalifeh, A. H. (2018). Anxiety and depression among school students in Jordan: Prevalence, risk factors, and predictors. Perspectives in Psychiatric Care, 54(2), 242–250.
Mandau, M. B. H. (2021). “Snaps”, “screenshots”, and self-blame: A qualitative study of image-based sexual abuse victimization among adolescent Danish girls. Journal of Children and Media, 15(3), 431–447.
Mann, J. J., Apter, A., Bertolote, J., Beautrais, A., Currier, D., Haas, A., Hegerl, U., Lonnqvist, J., Malone, K., Marusic, A., Mehlum, L., Patton, G., Phillips, M., Rutz, W., Rihmer, Z., Schmidtke, A., Shaffer, D., Silverman, M, ... & Hendin, H. (2005). Suicide prevention strategies: a systematic review. Jama, 294(16), 2064-2074.
Marcu, G., Ondersma, S. J., Spiller, A. N., Broderick, B. M., Kadri, R., & Buis, L. R. (2022). The perceived benefits of digital interventions for behavioral health: qualitative interview study. Journal of Medical Internet Research, 24(3), e34300.
Martinengo, L., Lum, E., & Car, J. (2022). Evaluation of chatbot-delivered interventions for self-management of depression: Content analysis. Journal of affective disorders, 319, 598-607.
Martínez, N., Agudo, U., & Matute, H. (2022). Human cognitive biases present in Artificial Intelligence. Revista Internacional de los Estudios Vascos = Eusko Ikaskuntzen Nazioarteko Aldizkaria = Revue Internationale des Études Basques = International Journal on Basque Studies, RIEV, 67(2), 51-60.
Mendes, J. P., Moura, I. R., Van de Ven, P., Viana, D., Silva, F. J., Coutinho, L. R., Teixeira, S., Rodrigues, J., & Teles, A. S. (2022). Sensing apps and public data sets for digital phenotyping of mental health: Systematic review. Journal of medical Internet research, 24(2), e28735.
Mihajlov, M., & Vejmelka, L. (2017). Internet addiction: A review of the first twenty years. Psychiatria Danubina, 29(3), 260–272.
Mittelstadt, B. D., & Floridi, L. (2016). The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. In: Mittelstadt, B.D., & Floridi, L. / P. Casanovas, & G. Sartor (Eds). The Ethics of Biomedical Big Data. Law, Governance and Technology Series, 29 (pp. 445-480). Springer, Cham.
Mowery, D., Smith, H., Cheney, T., Stoddard, G., Coppersmith, G., Bryan, C., & Conway, M. (2017). Understanding depressive symptoms and psychosocial stressors on Twitter: a corpus-based study. Journal of medical Internet research, 19(2), e6895.
Nutting, P. A., Dickinson, L. M., Rubenstein, L. V., Keeley, R. D., Smith, J. L., & Elliott, C. E. (2005). Improving detection of suicidal ideation among depressed patients in primary care. The Annals of Family Medicine, 3(6), 529-536.
O'Dea, B., Wan, S., Batterham, P. J., Calear, A. L., Paris, C., & Christensen, H. (2015). Detecting suicidality on Twitter. Internet Interventions, 2(2), 183-188.
Oh, J., Jang, S., Kim, H., & Kim, J. J. (2020). Efficacy of mobile app-based interactive cognitive behavioral therapy using a chatbot for panic disorder. International journal of medical informatics, 140, 104171.
Ojagbemi, A., Oladeji, B., Abiona, T., & Gureje, O. (2013). Suicidal behaviour in old age-results from the Ibadan Study of Ageing. BMC psychiatry, 13(1), 1-7.
Orsatti, J., & Riemer, K. (2015, May). Identity-making: A Multimodal Approach for Researching Identity in Social Media. In Proceedings of the European Conference on Information Systems (ECIS).
Ozdas, A., Shiavi, R. G., Silverman, S. E., Silverman, M. K., & Wilkes, D. M. (2004). Investigation of vocal jitter and glottal flow spectrum as possible cues for depression and near-term suicidal risk. IEEE transactions on Biomedical engineering, 51(9), 1530-1540.
Patel, U., & Roesch, R. (2022). The prevalence of technology-facilitated sexual violence: A meta-analysis and systematic review. Trauma, Violence, and Abuse, 23(2), 428–443.
Pedersen, W., Bakken, A., Stefansen, K., & von Soest, T. (2022). Sexual victimization in the digital age: A population-based study of physical and image-based sexual abuse among adolescents. Archives of Sexual Behavior, 52, 399–410.
Perle, J. G., Langsam, L. C., & Nierenberg, B. (2011). Controversy clarified: An updated review of clinical psychology and tele-health. Clinical psychology review, 31(8), 1247–1258.
Polat, R. K. (2012). Digital exclusion in Turkey: A policy perspective. Government information quarterly, 29(4), 589-596.
Pratt, D. R., Zyda, M., & Kelleher, K. (1995). Virtual reality: in the mind of the beholder. Computer, 28(07), 17-19.
Preoţiuc-Pietro, D., Eichstaedt, J., Park, G., Sap, M., Smith, L., Tobolsky, V., Schwartz, H. A. & Ungar, L. (2015). The role of personality, age, and gender in tweeting about mental illness. In Proceedings of the 2nd workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality (pp. 21-30).
Ramos-Ríos, R., Mateos, R., Lojo, D., Conn, D. K., & Patterson, T. (2012). Telepsychogeriatrics: a new horizon in the care of mental health problems in the elderly. International Psychogeriatrics, 24(11), 1708-1724.
Rideout, V., & Fox, S. (2018). Digital health practices, social media use, and mental well-being among teens and young adults in the U.S. San Francisco, CA.
Rosen, A. O., Holmes, A. L., Balluerka, N., Hidalgo, M. D., Gorostiaga, A., Gómez-Benito, J., & Huedo-Medina, T. B. (2022). Is social media a new type of social support? social media use in Spain during the COVID-19 pandemic: A mixed methods study. International Journal of Environmental Research and Public Health, 19(7), 3952.
Rubeis, G. (2022). iHealth: The ethics of artificial intelligence and big data in mental healthcare. Internet Interventions, 28, 100518.
Rubin, P. (2016, October 7). Mark Zuckerberg's VR Selfie Is a Bigger Deal Than You Realize. WIRED.
Rumpf, H.-J., Vermulst, A. A., Bischof, A., Kastirke, N., Gürtler, D., Bischof, G., Meerkerk, G.-J., John, U., & Meyer, C. (2014). Occurence of Internet addiction in a general population sample: A latent class analysis. European Addiction Research, 20(4), 159–166.
Sadagheyani, H. E., & Tatari, F. (2021). Investigating the role of social media on mental health. Mental Health and Social Inclusion, 25(1), 41-51.
Saletti, S. M. R., Van den Broucke, S., & Chau, C. (2021). The effectiveness of prevention programs for problematic Internet use in adolescents and youths: A systematic review and meta-analysis. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 15(2).
Samaritans. (2014, October 29). Samaritans Launches Twitter App To Help Identify Vulnerable People.
Scherer, S., Pestian, J., & Morency, L. P. (2013, May). Investigating the speech characteristics of suicidal adolescents. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 709-713). IEEE.
Scholten, M. R., Kelders, S. M., & Van Gemert-Pijnen, J. E. (2017). Self-guided web-based interventions: scoping review on user needs and the potential of embodied conversational agents to address them. Journal of medical Internet research, 19(11), e383.
Sciacca, B., Mazzone, A., Loftsson, M., O’Higgins Norman, J., & Foody, M. (2023). Nonconsensual dissemination of sexual images among adolescents: associations with depression and self-esteem. Journal of Interpersonal Violence, 08862605231165777.
Snowdon, J. (2010). Mental health service delivery in long-term care homes. International Psychogeriatrics, 22, 1063–1071.
Spada, M. M. (2014). An overview of problematic Internet use. Addictive Behaviors, 39(1), 3–6.
Starcevic, V. (2013). Is Internet addiction a useful concept?. Australian & New Zealand Journal of Psychiatry, 47(1), 16–19.
Symons, K., Ponnet, K., Walrave, M., & Heirman, W. (2018). Sexting scripts in adolescent relationships: Is sexting becoming the norm?. New Media and Society, 20(10), 3836–3857.
Takeuchi, H., Taki, Y., Asano, K., Asano, M., Sassa, Y., Yokota, S., Kotozaki, Y., Nouchi, R., & Kawashima, R. (2018). Impact of frequency of internet use on development of brain structures and verbal intelligence: Longitudinal analyses. Human brain mapping, 39(11), 4471-4479.
Teles, A., Barros, F., Rodrigues, I., Barbosa, A., Silva, F., Coutinho, L., & Teixeira, S. (2020). Internet of things applied to mental health: Concepts, applications, and perspectives. IoT and ICT for Healthcare Applications, 33-58.
The Lancet & Child Adolescent Health. (2018). Growing up in a digital world: Benefits and risks. The Lancet Child and Adolescent Health, 2(2), 79.
Thomas, L., Briggs, P., Hart, A., & Kerrigan, F. (2017). Understanding social media and identity work in young people transitioning to university. Computers in Human Behavior, 76, 541-553.
Throuvala, M. A., Griffiths, M. D., Rennoldson, M., & Kuss, D. J. (2019). School-based prevention for adolescent Internet addiction: Prevention is the key. A systematic literature review. Current Neuropharmacology, 17(6), 507– 525.
Torous, J., Kiang, M. V., Lorme, J., & Onnela, J. P. (2016). New tools for new research in psychiatry: a scalable and customizable platform to empower data driven smartphone research. JMIR Mental Health, 3(2), e5165.
Tsai, C. W., Lai, C. F., Chao, H. C., & Vasilakos, A. V. (2015). Big data analytics: a survey. Journal of Big Data, 2(1), 1–32.
Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: a review of the psychiatric landscape. The Canadian Journal of Psychiatry, 64(7), 456–464.
van Genugten, C. R., Schuurmans, J., Lamers, F., Riese, H., Penninx, B. W., Schoevers, R. A., Riper, H. M., & Smit, J. H. (2020). Experienced burden of and adherence to smartphone-based ecological momentary assessment in persons with affective disorders. Journal of Clinical Medicine, 9(2), 322.
Van Orden, K. A., Witte, T. K., Cukrowicz, K. C., Braithwaite, S. R., Selby, E. A., & Joiner Jr, T. E. (2010). The interpersonal theory of suicide. Psychological Review, 117(2), 575.
Ventura, S., Baños, R. M., & Botella, C. (2018). Virtual and Augmented Reality: New Frontiers for Clinical Psychology. InTech.
Volkow, N. D., Fowler, J. S., & Wang, G. J. (2003). The addicted human brain: insights from imaging studies. The Journal of Clinical Investigation, 111(10), 1444–1451.
Volpe, U., Dell’Osso, B., Fiorillo, A., Mucic, D., & Aboujaoude, E. (2015). Internet-related psychopathology: Clinical phenotypes and perspectives in an evolving field. Journal of Psychopathology, 21(4), 406–414.
Vondráčková, P., & Gabrhelík, R. (2016). Prevention of Internet addiction: A systematic review. Journal of Behavioral Addictions, 5(4), 568–579.
Wachs, S., Wright, M. F., Gámez-Guadix, M., & Dӧring, N. (2021). How are consensual, non-consensual, and pressured sexting linked to depression, and self-harm? The moderating effects of demographic variables. Environmental Research & Public Health, 18, 2597–2613.
Weisel, K. K., Fuhrmann, L. M., Berking, M., Baumeister, H., Cuijpers, P., & Ebert, D. D. (2019). Standalone smartphone apps for mental health—a systematic review and meta-analysis. NPJ Digital Medicine, 2(1), 118.
Woods, H. C., & Scott, H. (2016). #Sleepyteens: Social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. Journal of Adolescence, 51, 41–49.
World Health Organization. (2021, June 16). Suicide worldwide in 2019: global health estimates.
World Health Organization. (2022, June 17). WHO highlights urgent need to transform mental health and mental health care.
Wu, X., Chen, X., Han, J., Meng, H., Luo, J., Nydegger, L., & Wu, H. (2013). Prevalence and factors of addictive Internet use among adolescents in Wuhan, China: Interactions of parental relationship with age and hyperactivity-impulsivity. PLoS One, 8(4), e61782.
Yen, J. Y., Ko, C. H., Yen, C. F., Wu, H. Y., & Yang, M. J. (2007). The comorbid psychiatric symptoms of Internet addiction: attention deficit and hyperactivity disorder (ADHD), depression, social phobia, and hostility. Journal of Adolescent Health, 41(1), 93–98.
Yingthawornsuk, T., Keskinpala, H. K., Wilkes, D. M., Shiavi, R. G., & Salomon, R. M. (2007). Direct acoustic feature using iterative EM algorithm and spectral energy for classifying suicidal speech. In Eighth Annual Conference of the International Speech Communication Association.